A NEW NONLINEAR FILTER
COMMUNICATIONS IN INFORMATION AND SYSTEMS, © 2006 International Press, Vol. 6, No. 3, pp. 203–220, 2006

ROBERT J. ELLIOTT AND SIMON HAYKIN

Abstract. A discrete-time filter is constructed where both the observation and signal process have nonlinear dynamics with additive white Gaussian noise. Using the reference probability framework, a convolution Zakai equation is obtained which updates the unnormalized conditional density. Our work obtains approximate solutions of this equation in terms of Gaussian sums when second-order expansions are introduced for the nonlinear terms.

Acknowledgements. The authors thank NSERC for its continued support.

1. Introduction. The most successful filter has without doubt been the Kalman filter. This considers noisy observations of a signal process where the dynamics are linear and the noise is additive and Gaussian. Extensions of the Kalman filter to cover nonlinear dynamics were obtained by taking first-order Taylor expansions of the nonlinear terms about the current mean; the resulting filter is the so-called extended Kalman filter, or EKF. A popular method in recent years has been the so-called particle filter approach; however, this is based only on Monte Carlo simulation. In this paper the reference probability framework is used to obtain a discrete-time version of the Zakai equation. This looks like a convolution equation, and it provides an update for the unnormalized conditional density of the state process given the observations. If first-order Taylor series approximations are used for the nonlinear terms in the signal and observation processes, the convolution equation can be solved explicitly and the extended Kalman filter re-derived. Taylor expansions of the nonlinear terms to second order are then considered, and approximate solutions in terms of Gaussian sums obtained. Detailed proofs and numerical work will appear in [2].

2. Dynamics. Consider nonlinear signal and observation processes x, y where the noise is additive and Gaussian.
Suppose the processes are defined on (Ω, F, P), where w = {w_k, k = 0, 1, ...} and v = {v_k, k = 0, 1, ...} are sequences of independent N(0, I_n), resp. N(0, I_m), random variables.

Haskayne School of Business, University of Calgary, Calgary, Alberta, Canada T2N 1N4, relliott@ucalgary.ca
Department of Electrical and Computer Engineering, McMaster University, Hamilton, Ontario, Canada, haykin@mcmaster.ca
Suppose for each k = 0, 1, ... there are measurable functions A_k : R^n → R^n and C_k : R^n → R^m. We suppose the signal dynamics have the form

(1)  Signal:  x_{k+1} = A_k(x_k) + B_k w_{k+1}

and the observation dynamics have the form

(2)  Observation:  y_k = C_k(x_k) + D_k v_k.

To simplify notation we suppose the coefficients are time independent. Further, we suppose that B : R^n → R^n and D : R^m → R^m are symmetric and non-singular.

Measure Change. We shall follow the methods of Elliott and Krishnamurthy [3] and show how the dynamics (1), (2) can be modelled starting with a reference probability P̄. Suppose on (Ω, F, P̄) we have two sequences of random variables x = {x_k, k = 0, 1, ...} and y = {y_k, k = 0, 1, ...}. We suppose that under P̄ the x_k are independent n-dimensional N(0, I_n) random variables and the y_k are independent N(0, I_m) random variables. For x ∈ R^n, y ∈ R^m write

ψ(x) = (2π)^{-n/2} exp(−x'x/2),   φ(y) = (2π)^{-m/2} exp(−y'y/2).

Here the prime ' denotes transpose. Define the σ-fields

G_k = σ{x_0, x_1, ..., x_k, y_0, y_1, ..., y_k},   Y_k = σ{y_0, y_1, ..., y_k}.

Then G_k represents the possible histories of x and y to time k, and Y_k represents the history of y to time k. For any square matrix B write |B| for its determinant. Write

λ_0 = φ(D^{-1}(y_0 − C(x_0))) / (|D| φ(y_0))
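The model (1)–(2) can be simulated directly. The following sketch is not from the paper; the particular maps A, C and the matrices B, D below are illustrative choices, assumed only for the example:

```python
import numpy as np

# Simulate x_{k+1} = A(x_k) + B w_{k+1}, y_k = C(x_k) + D v_k
# with additive Gaussian noise. A, C, B, D are illustrative.
rng = np.random.default_rng(0)

n, m = 2, 1                      # state and observation dimensions
B = 0.1 * np.eye(n)              # symmetric, non-singular noise gains
D = 0.2 * np.eye(m)

def A(x):                        # example signal nonlinearity
    return np.array([0.9 * x[0] + 0.1 * np.sin(x[1]), 0.95 * x[1]])

def C(x):                        # example observation nonlinearity
    return np.array([x[0] ** 2 + x[1]])

x = np.zeros(n)
xs, ys = [], []
for k in range(100):
    x = A(x) + B @ rng.standard_normal(n)   # signal transition
    y = C(x) + D @ rng.standard_normal(m)   # noisy observation
    xs.append(x)
    ys.append(y)
```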
and for l ≥ 1

λ_l = [φ(D^{-1}(y_l − C(x_l))) / (|D| φ(y_l))] · [ψ(B^{-1}(x_l − A(x_{l-1}))) / (|B| ψ(x_l))].

For k ≥ 0 define Λ_k = ∏_{l=0}^{k} λ_l. We can define a new probability measure P on (Ω, ∨_k G_k) by setting

dP/dP̄ |_{G_k} = Λ_k.

For l = 0, 1, ... define v_l := D^{-1}(y_l − C(x_l)). For l = 1, 2, ... define w_l := B^{-1}(x_l − A(x_{l-1})). As in [3] we can then prove:

Lemma 1. Under the measure P, v = {v_l, l = 0, 1, ...} and w = {w_l, l = 1, 2, ...} are sequences of independent N(0, I_m) and N(0, I_n) random variables, respectively. That is, under the measure P,

x_l = A(x_{l-1}) + B w_l,   y_l = C(x_l) + D v_l.

However, P̄ is a nicer measure under which to work.

3. Recursive Densities. The filtering problem is concerned with the estimation of x_k, and its statistics, given the observations y_0, y_1, ..., y_k, that is, given Y_k. The problem would be completely solved if we could determine the conditional density γ_k(x) of x_k given Y_k. That is,

γ_k(x) dx = P(x_k ∈ dx | Y_k) = E[I(x_k ∈ dx) | Y_k].

Using a form of Bayes' Theorem (see [1]), for any measurable function g : R^n → R,

E[g(x_k) | Y_k] = Ē[Λ_k g(x_k) | Y_k] / Ē[Λ_k | Y_k],

where the expectations Ē on the right are taken under the measure P̄. The numerator Ē[Λ_k g(x_k) | Y_k] is an unnormalized conditional expectation of g(x_k) given Y_k. The denominator is the special case of the numerator obtained by taking g = 1.
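Each factor λ_l can be computed directly from the standard Gaussian densities ψ and φ above. A minimal sketch, assuming generic maps A, C and symmetric non-singular matrices B, D supplied by the caller (all names here are illustrative):

```python
import numpy as np

def gauss_density(u):
    """Standard multivariate normal density (2π)^{-d/2} exp(-u'u/2)."""
    d = u.size
    return (2 * np.pi) ** (-d / 2) * np.exp(-0.5 * u @ u)

def lam(y_l, x_l, x_prev, A, C, B, D):
    """One factor λ_l (l ≥ 1) of the likelihood ratio Λ_k = Π λ_l.

    A, C are the (possibly nonlinear) signal/observation maps;
    B, D are the symmetric non-singular noise matrices.
    """
    # observation factor: φ(D^{-1}(y_l - C(x_l))) / (|D| φ(y_l))
    obs = gauss_density(np.linalg.solve(D, y_l - C(x_l))) \
        / (abs(np.linalg.det(D)) * gauss_density(y_l))
    # signal factor: ψ(B^{-1}(x_l - A(x_{l-1}))) / (|B| ψ(x_l))
    sig = gauss_density(np.linalg.solve(B, x_l - A(x_prev))) \
        / (abs(np.linalg.det(B)) * gauss_density(x_l))
    return obs * sig
```

Note that with A ≡ 0, C ≡ 0 and B = D = I the factor reduces to 1, as it should: the two measures then agree.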
The map g ↦ Ē[Λ_k g(x_k) | Y_k] is a continuous linear functional and so defines a measure. We assume this measure has a density α_k(x), so that

Ē[Λ_k g(x_k) | Y_k] = ∫_{R^n} g(x) α_k(x) dx

and, heuristically, Ē[Λ_k I(x_k ∈ dx) | Y_k] = α_k(x) dx. Then

E[g(x_k) | Y_k] = ∫_{R^n} g(x) α_k(x) dx / ∫_{R^n} α_k(x) dx

and α_k(x) is an unnormalized conditional density of x_k given Y_k. Modifying the proof in [3] we have:

Theorem 3.1. α_k satisfies the recursion:

(3.1)  α_k(x) = [φ(D^{-1}(y_k − C(x))) / (|B| |D| φ(y_k))] ∫_{R^n} α_{k-1}(z) ψ(B^{-1}(x − A(z))) dz.

Proof. See [3].

Remark 3.2. This equation provides the recursion for the unnormalized conditional density of x_k given Y_k. It is a discrete-time version of the Zakai equation. The problem is now to solve this equation. As shown in the book [1] and the paper [3], when the dynamics are linear, A(x) = Ax and C(x) = Cx for suitable matrices A and C, with linear Gaussian noise terms, the integral in (3.1) can be evaluated and a Gaussian density obtained for α_k(x). The extended Kalman filter is obtained below by linearizing A and C and solving (3.1). We then obtain approximate solutions when second and higher order terms are included in the approximations of A(x) and C(x).

4. Notation. Recall our model is

Signal:        x_k = A(x_{k-1}) + B w_k ∈ R^n
Observation:   y_k = C(x_k) + D v_k ∈ R^m

where A and C can be nonlinear functions: A : R^n → R^n, C : R^n → R^m.

Notation 4.1. The transpose of any matrix or vector M will be denoted by M'. Write

A(x) = (A_1(x), ..., A_n(x))',   C(x) = (C_1(x), ..., C_m(x))'.
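Before any Taylor expansion, the recursion (3.1) can also be evaluated numerically on a grid (a point-mass approximation of the integral). A one-dimensional sketch — the grid, the maps A, C and the scalars B, D below are illustrative assumptions, not from the paper:

```python
import numpy as np

# Point-mass sketch of recursion (3.1) in one dimension:
# α_k(x) ∝ φ((y_k - C(x))/D) ∫ α_{k-1}(z) ψ((x - A(z))/B) dz.
grid = np.linspace(-5.0, 5.0, 401)
dz = grid[1] - grid[0]
B, D = 0.5, 0.3                          # scalar noise gains
A = lambda z: 0.9 * z + 0.1 * np.sin(z)  # signal nonlinearity
C = lambda x: x ** 2 / 2                 # observation nonlinearity

phi = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def zakai_step(alpha_prev, y_k):
    # prediction: convolution-like integral over the previous grid
    kernel = phi((grid[:, None] - A(grid[None, :])) / B) / B
    predicted = kernel @ alpha_prev * dz
    # observation update, then normalize for numerical stability
    alpha = phi((y_k - C(grid)) / D) * predicted
    return alpha / (alpha.sum() * dz)

alpha = phi(grid)                        # N(0, 1) prior density
alpha = zakai_step(alpha, y_k=0.4)       # one filter step
```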
If A (resp. C) is twice differentiable we shall write

∇A = (∇A_1, ..., ∇A_n),   the n × n matrix with columns ∇A_k = (∂A_k/∂x_1, ..., ∂A_k/∂x_n)',

∇C = (∇C_1, ..., ∇C_m).

∇²A will denote the matrix of Hessians

∇²A = (∇²A_1, ..., ∇²A_n)',

where ∇²A_k is the n × n matrix with (i, j) entry ∂²A_k/∂x_i ∂x_j. Similarly, ∇²C = (∇²C_1, ..., ∇²C_m)'.

Approximations 4.2. For any µ ∈ R^n we can consider the Taylor expansions of A and C to second order:

(4.1)  A(z) ≈ A(µ) + ∇A(µ)'(z − µ) + (1/2) ∇²A(µ)(z − µ)^{(2)}

(4.2)  C(x) ≈ C(A(µ)) + ∇C(A(µ))'(x − A(µ)) + (1/2) ∇²C(A(µ))(x − A(µ))^{(2)}

Here ∇A(µ)'(z − µ) ∈ R^n and ∇²A(µ)(z − µ)^{(2)} denotes the vector

((z − µ)' ∇²A_1(µ)(z − µ), ..., (z − µ)' ∇²A_n(µ)(z − µ))'.

From the Gaussian densities involved in the recursion for α_k we shall be interested in scalar quantities of the form

(1/2)(B^{-2}(x − A(µ) − ∇A(µ)'(z − µ)))' ∇²A(µ)(z − µ)^{(2)}.
Write ξ = (ξ_1, ξ_2, ..., ξ_n)' for the vector B^{-2}(x − A(µ) − ∇A(µ)'(z − µ)). Then

(1/2)(B^{-2}(x − A(µ) − ∇A(µ)'(z − µ)))' ∇²A(µ)(z − µ)^{(2)}
  = (1/2) ξ' ((z − µ)' ∇²A_1(µ)(z − µ), ..., (z − µ)' ∇²A_n(µ)(z − µ))'
  = (z − µ)' H(µ)(z − µ),

where H(µ) is the symmetric matrix defined symbolically by

(4.3)  H(µ) = (1/2)(ξ_1 ∇²A_1(µ) + ξ_2 ∇²A_2(µ) + ... + ξ_n ∇²A_n(µ)).

Similarly, for C, ∇²C(A(µ))(x − A(µ))^{(2)} will denote the vector

((x − A(µ))' ∇²C_1(A(µ))(x − A(µ)), ..., (x − A(µ))' ∇²C_m(A(µ))(x − A(µ)))'.

In the exponential involving C terms we shall consider scalar quantities of the form

(1/2)(y − C(A(µ)))' D^{-2} ∇²C(A(µ))(x − A(µ))^{(2)}.

This can be written (x − A(µ))' Z^{-1}(µ)(x − A(µ)), where Z^{-1}(µ) is the symmetric matrix

(4.4)  Z^{-1}(µ) = (1/2)(ζ_1 ∇²C_1(A(µ)) + ζ_2 ∇²C_2(A(µ)) + ... + ζ_m ∇²C_m(A(µ))),

with ζ = D^{-2}(y − C(A(µ))).

5. The EKF. We shall take the first two terms in the Taylor expansions of A(x) and C(x) and show how equation (3.1) can be solved to re-derive the EKF (Extended Kalman Filter). Of course, if the dynamics are linear the calculations below show how the Kalman filter can be derived from (3.1).
Linear Approximations. Signal: the signal has dynamics x_k = A(x_{k-1}) + B w_k ∈ R^n. If µ_{k-1} is the conditional mean determined at time k − 1, we consider the first two terms in the Taylor expansion of A(x_{k-1}) about µ_{k-1} and write

(5.1)  x_k ≈ A(µ_{k-1}) + ∇A(µ_{k-1})'(x_{k-1} − µ_{k-1}) + B w_k.

Similarly for the Observation y_k = C(x_k) + D v_k ∈ R^m we take the first two terms of the Taylor expansion about A(µ_{k-1}) and write

(5.2)  y_k ≈ C(A(µ_{k-1})) + ∇C(A(µ_{k-1}))'(x_k − A(µ_{k-1})) + D v_k.

We are supposing that the conditional density of x_{k-1} given Y_{k-1} is N(µ_{k-1}, Σ_{k-1}); that is, the normalized conditional density of x_{k-1} given Y_{k-1} is

(5.3)  |Σ_{k-1}|^{-1/2} ψ(Σ_{k-1}^{-1/2}(x − µ_{k-1})).

Therefore, α_{k-1}(z) ∼ ψ(Σ_{k-1}^{-1/2}(z − µ_{k-1})). Let us note that

(5.4)  x_{k|k-1} := E[x_k | Y_{k-1}] = E[A(x_{k-1}) + B w_k | Y_{k-1}] = E[A(x_{k-1}) | Y_{k-1}]
       ≈ E[A(µ_{k-1}) + ∇A(µ_{k-1})'(x_{k-1} − µ_{k-1}) | Y_{k-1}] = A(µ_{k-1}),

as µ_{k-1} = E[x_{k-1} | Y_{k-1}]. In the paper of Ito and Xiong [5] a more accurate approximation of E[A(x_{k-1}) | Y_{k-1}] is given using quadrature; this could be used rather than A(µ_{k-1}). Also write

(5.5)  Σ_{k|k-1} = E[(x_k − x_{k|k-1})(x_k − x_{k|k-1})' | Y_{k-1}]
       ≈ E[(A(x_{k-1}) + B w_k − A(µ_{k-1}))(A(x_{k-1}) + B w_k − A(µ_{k-1}))' | Y_{k-1}]
       ≈ E[(∇A(µ_{k-1})'(x_{k-1} − µ_{k-1}) + B w_k)(∇A(µ_{k-1})'(x_{k-1} − µ_{k-1}) + B w_k)' | Y_{k-1}]
       = ∇A(µ_{k-1})' Σ_{k-1} ∇A(µ_{k-1}) + B².

Recall the matrix inversion lemma (MIL):

Lemma 5.1. For suitable matrices,

(B² + A Σ A')^{-1} = B^{-2} − B^{-2} A (Σ^{-1} + A' B^{-2} A)^{-1} A' B^{-2}.
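The linearized prediction (5.4)–(5.5) and the EKF update it leads to in Theorem 5.3 can be sketched as one recursive step. This is a minimal sketch, not the paper's implementation; the Jacobian functions JA and JC are assumptions supplied by the user, and B, D are treated as general matrices so that B² and D² become B B' and D D':

```python
import numpy as np

def ekf_step(mu, Sigma, y, A, C, JA, JC, B, D):
    """One EKF step: predict with A and its Jacobian JA(mu),
    update with C and its Jacobian JC evaluated at the predicted mean."""
    # predict: x_{k|k-1} = A(mu), Sigma_{k|k-1} = JA Sigma JA' + B B'
    Fa = JA(mu)
    mu_pred = A(mu)
    S_pred = Fa @ Sigma @ Fa.T + B @ B.T
    # update: gain K = Sigma_{k|k-1} JC' (D D' + JC Sigma_{k|k-1} JC')^{-1}
    Fc = JC(mu_pred)
    S = D @ D.T + Fc @ S_pred @ Fc.T       # innovation covariance
    K = S_pred @ Fc.T @ np.linalg.inv(S)
    mu_new = mu_pred + K @ (y - C(mu_pred))
    Sigma_new = S_pred - K @ Fc @ S_pred
    return mu_new, Sigma_new
```

With linear A(x) = Fx and C(x) = Hx this reduces exactly to the Kalman filter, as the remark above notes.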
Corollary 5.2.

Σ_{k|k-1}^{-1} = B^{-2} − B^{-2} ∇A(µ_{k-1})'(Σ_{k-1}^{-1} + ∇A(µ_{k-1}) B^{-2} ∇A(µ_{k-1})')^{-1} ∇A(µ_{k-1}) B^{-2}.

Linear algebra and the MIL then enable us to obtain the following result.

Theorem 5.3 (EKF). Suppose x_{k-1|k-1} = E[x_{k-1} | Y_{k-1}] is N(µ_{k-1}, Σ_{k-1}). Then, approximately, x_{k|k} is N(µ_k, Σ_k), where

(5.6)  Σ_k = Σ_{k|k-1} − Σ_{k|k-1} ∇C(A(µ_{k-1})) (D² + ∇C(A(µ_{k-1}))' Σ_{k|k-1} ∇C(A(µ_{k-1})))^{-1} ∇C(A(µ_{k-1}))' Σ_{k|k-1}

and

(5.7)  µ_k = A(µ_{k-1}) + Σ_{k|k-1} ∇C(A(µ_{k-1})) (D² + ∇C(A(µ_{k-1}))' Σ_{k|k-1} ∇C(A(µ_{k-1})))^{-1} (y_k − C(A(µ_{k-1}))).

Proof. See [2].

6. A New Nonlinear Filter. We extend the EKF by including second-order terms in the approximations for A(x) and C(x). This has two consequences: (1) we can no longer explicitly evaluate the integral I; we approximate it using Gauss–Hermite quadrature rules; (2) after some algebra, when the integral is multiplied by φ(D^{-1}(y_k − C(x))) we would like to obtain Gaussian densities. By modifying the algebra of Section 5 we are able to do this.

Expansions 6.1. We recall the second-order expansions (4.1) and (4.2):

(6.1)  A(z) ≈ A(µ) + ∇A(µ)'(z − µ) + (1/2) ∇²A(µ)(z − µ)^{(2)}

(6.2)  C(x) ≈ C(A(µ)) + ∇C(A(µ))'(x − A(µ)) + (1/2) ∇²C(A(µ))(x − A(µ))^{(2)}

Theorem 6.2. Suppose α_{k-1}(z) is given by a weighted sum of Gaussian densities:

α_{k-1}(z) = Σ_{i=1}^{N(k-1)} λ_{k-1,i} exp(−(1/2)(z − µ_{k-1,i})' Σ_{k-1,i}^{-1}(z − µ_{k-1,i})).
Then

α_k(x) ≈ Σ_{i=1}^{N(k-1)} Σ_{j=1}^{m(k,i)} λ_{k,i,j} exp(−(1/2)(x − µ_{k,i,j})' Σ_{k,i,j}^{-1}(x − µ_{k,i,j})),

where

Σ_{k,i,j} = Γ_{k,i,j} − Γ_{k,i,j} ∇C(A(µ_{k-1,i})) (D² + ∇C(A(µ_{k-1,i}))' Γ_{k,i,j} ∇C(A(µ_{k-1,i})))^{-1} ∇C(A(µ_{k-1,i}))' Γ_{k,i,j}

and

µ_{k,i,j} = A(µ_{k-1,i}) + Γ_{k,i,j} ∇C(A(µ_{k-1,i})) (D² + ∇C(A(µ_{k-1,i}))' Γ_{k,i,j} ∇C(A(µ_{k-1,i})))^{-1} (y_k − C(A(µ_{k-1,i}))) − Σ_{k,i,j} g_{k,i,j}.

The values of Γ_{k,i,j}, g_{k,i,j} and λ_{k,i,j} are given, respectively, by (6.19), (6.12) and (6.18).

Proof. Suppose that α_0(z) is a sum of Gaussian densities:

α_0(z) = Σ_{i=1}^{N(0)} λ_{0,i} exp(−(1/2)(z − µ_{0,i})' Σ_{0,i}^{-1}(z − µ_{0,i})),   λ_{0,i} = ρ_{0,i} (2π)^{-n/2} |Σ_{0,i}|^{-1/2} > 0.

We can, and shall, suppose α_0(z) is a normalized density, that is ∫_{R^n} α_0(z) dz = 1, so Σ_{i=1}^{N(0)} ρ_{0,i} = 1. Later unnormalized conditional densities are obtained from the recursion (3.1):

α_k(x) = [φ(D^{-1}(y_k − C(x))) / (|B| |D| φ(y_k))] ∫_{R^n} α_{k-1}(z) ψ(B^{-1}(x − A(z))) dz.

Suppose α_{k-1}(z) is approximated as a sum of Gaussian densities:

α_{k-1}(z) ≈ Σ_{i=1}^{N(k-1)} λ_{k-1,i} exp(−(1/2)(z − µ_{k-1,i})' Σ_{k-1,i}^{-1}(z − µ_{k-1,i}))
where again λ_{k-1,i} = ρ_{k-1,i} (2π)^{-n/2} |Σ_{k-1,i}|^{-1/2} > 0 and Σ_{i=1}^{N(k-1)} ρ_{k-1,i} = 1. Then from the recursion

(6.3)  α_k(x) ≈ [φ(D^{-1}(y_k − C(x))) / (|B| |D| φ(y_k))] Σ_{i=1}^{N(k-1)} λ_{k-1,i} ∫_{R^n} exp(−(1/2)[(z − µ_{k-1,i})' Σ_{k-1,i}^{-1}(z − µ_{k-1,i}) + (x − A(z))' B^{-2}(x − A(z))]) dz.

Consider one integral term in the sum. To simplify the notation we shall drop the suffices for the moment. Using the second-order Taylor expansion for A(z), the integral term is approximately:

(6.4)  I_{k-1,i} = ∫_{R^n} exp(−(1/2)[(z − µ)' Σ^{-1}(z − µ) + (x − A − ∇A'(z − µ) − (1/2)∇²A(z − µ)^{(2)})' B^{-2}(x − A − ∇A'(z − µ) − (1/2)∇²A(z − µ)^{(2)})]) dz.

Here, as in Section 5, A = A(µ_{k-1,i}) ∈ R^n, ∇A = ∇A(µ_{k-1,i}), and ∇²A is the matrix of Hessians at µ_{k-1,i}. Dropping the (z − µ)^{(4)} term this is approximately

∫_{R^n} exp(−(1/2)[(z − µ)' Σ^{-1}(z − µ) + (x − A − ∇A'(z − µ))' B^{-2}(x − A − ∇A'(z − µ))]) G(x, z) dz,

where

G(x, z) = exp((1/2)(B^{-2}(x − A − ∇A'(z − µ)))' ∇²A(z − µ)^{(2)}) = exp((z − µ)' H(x, µ)(z − µ))

in the notation of (4.3), H(x, µ) being built as in (4.3) from ξ = B^{-2}(x − A − ∇A'(z − µ)) and the Hessians ∇²A_1, ..., ∇²A_n. Completing the square we have

I_{k-1,i} ≈ K(x) ∫_{R^n} exp(−(1/2)(z − σ_{ki} δ_{ki})' σ_{ki}^{-1}(z − σ_{ki} δ_{ki})) G_{k,i}(x, z) dz
where

(6.5)  σ_{ki}^{-1} = Σ_{k-1,i}^{-1} + ∇A(µ_{k-1,i}) B^{-2} ∇A(µ_{k-1,i})'

and

(6.6)  δ_{ki} = Σ_{k-1,i}^{-1} µ_{k-1,i} + ∇A(µ_{k-1,i}) B^{-2}(x − A(µ_{k-1,i}) + ∇A(µ_{k-1,i})' µ_{k-1,i}),

(6.7)  G_{k,i}(x, z) = exp((1/2)(x − A(µ_{k-1,i}) − ∇A(µ_{k-1,i})'(z − µ_{k-1,i}))' B^{-2} ∇²A(µ_{k-1,i})(z − µ_{k-1,i})^{(2)}),

(6.8)  K(x) = exp(−(1/2)[−δ_{ki}' σ_{ki} δ_{ki} + µ_{k-1,i}' Σ_{k-1,i}^{-1} µ_{k-1,i} + (x − A(µ_{k-1,i}) + ∇A(µ_{k-1,i})' µ_{k-1,i})' B^{-2}(x − A(µ_{k-1,i}) + ∇A(µ_{k-1,i})' µ_{k-1,i})]).

The problem is now to evaluate the integral

∫_{R^n} exp(−(1/2)(z − σ_{ki} δ_{ki})' σ_{ki}^{-1}(z − σ_{ki} δ_{ki})) G_{ki}(x, z) dz.

As in Ito and Xiong [5], one way is to use Gauss–Hermite quadrature, of which the Julier–Uhlmann unscented filter is a special case. That is, we factorize σ_{ki} = S_{ki} S_{ki}' and change coordinates writing z = S_{ki} t + σ_{ki} δ_{ki}, t ∈ R^n. The integral is then

(6.9)  |S_{ki}| ∫_{R^n} exp(−t't/2) G(x, S_{ki} t + σ_{ki} δ_{ki}) dt.

Using the Gauss–Hermite formula this is approximately

Σ_{j=1}^{m(k,i)} ω_{k,i,j} G(x, S_{ki} t_j + σ_{ki} δ_{ki})
  = Σ_{j=1}^{m(k,i)} ω_{k,i,j} exp((x − A(µ_{k-1,i}) − ∇A(µ_{k-1,i})'(S_{ki} t_j + σ_{ki} δ_{ki} − µ_{k-1,i}))' B^{-2} γ_{k,i,j}),

where

(6.10)  γ_{k,i,j} = (1/2) ∇²A(µ_{k-1,i})(S_{ki} t_j + σ_{ki} δ_{ki} − µ_{k-1,i})^{(2)}.
The order of approximation m(k, i) can be chosen. Weights ω̃_{k,i,j} are determined as in Golub [4]. In fact, if J is the symmetric tridiagonal matrix with a zero diagonal and J_{i,i+1} = √(i/2), 1 ≤ i ≤ m(k, i) − 1, then the t_j are the eigenvalues of J and the ω̃_{k,i,j} are given by (v_1^{(j)})², where v_1^{(j)} is the first element of the j-th normalized eigenvector of J. As our integrand in (6.9) is not a N(0, I_n) density, in our case

ω_{k,i,j} = (2π)^{n/2} |S_{ki}| ω̃_{k,i,j}.

Write X_i = x − A(µ_{k-1,i}) and recall from (6.6) that

δ_{k,i} = Σ_{k-1,i}^{-1} µ_{k-1,i} + ∇A(µ_{k-1,i}) B^{-2}(X_i + ∇A(µ_{k-1,i})' µ_{k-1,i}).

Also write

M_{kij} = S_{ki} t_j + σ_{ki}(Σ_{k-1,i}^{-1} µ_{k-1,i} + ∇A(µ_{k-1,i}) B^{-2} ∇A(µ_{k-1,i})' µ_{k-1,i}) − µ_{k-1,i},
F_{ki} = σ_{ki} ∇A(µ_{k-1,i}) B^{-2}.

Then

(6.11)  I_{k-1,i} ≈ Σ_{j=1}^{m(k,i)} ω_{k,i,j} K_{ki}(x) exp((1/2)(X_i − ∇A(µ_{k-1,i})'(M_{kij} + F_{ki} X_i))' B^{-2} ∇²A(µ_{k-1,i})(M_{kij} + F_{ki} X_i)^{(2)}).

Here again ∇²A(µ_{k-1,i}) denotes the matrix of Hessians. Consider one term in this sum. Retaining only terms up to order (x − A(µ_{k-1,i}))² = X_i² and dropping suffices, this is

ω K(x) exp(−(1/2) X' H̃^{-1} X + g' X + κ),

where

(6.12)  H̃^{-1} = H̃_{kij}^{-1}, g = g_{kij} and κ = κ_{kij}

collect, respectively, the quadratic, linear and constant coefficients in X of the exponent in (6.11) once higher-order terms are dropped; in particular

κ = κ_{kij} = −(1/2) M' ∇A B^{-2} ∇²A M^{(2)}.
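The nodes t_j and weights described above can be computed, as in Golub [4], from the eigen-decomposition of the tridiagonal matrix J. A one-dimensional sketch, rescaled so the resulting rule integrates against the N(0, 1) density (the rescaling by √2 is a standard step, assumed here rather than taken from the paper):

```python
import numpy as np

def gauss_hermite_normal(m):
    """m-point rule (t_j, w_j) with sum_j w_j f(t_j) ≈ E[f(Z)], Z ~ N(0,1).

    J is symmetric tridiagonal with zero diagonal and off-diagonal
    entries sqrt(i/2); its eigenvalues give the nodes and the squared
    first components of its normalized eigenvectors give the weights.
    """
    i = np.arange(1, m)
    off = np.sqrt(i / 2.0)
    J = np.diag(off, 1) + np.diag(off, -1)
    t, V = np.linalg.eigh(J)
    w = V[0, :] ** 2              # weights; they sum to 1
    return np.sqrt(2.0) * t, w    # rescale nodes for the N(0,1) weight
```

For example, m = 3 gives nodes 0, ±√3 with weights 2/3, 1/6, 1/6, the probabilists' three-point Gauss–Hermite rule.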
Therefore,

I_{k-1,i} ≈ Σ_{j=1}^{m(k,i)} ω_{kij} K_{ki}(x) exp(−(1/2) X_i' H̃_{kij}^{-1} X_i + g_{kij}' X_i + κ_{kij}),

where X_i = x − A(µ_{k-1,i}). Substituting in (6.3),

α_k(x) ≈ [φ(D^{-1}(y_k − C(x))) / (|B| |D| φ(y_k))] Σ_{i=1}^{N(k-1)} Σ_{j=1}^{m(k,i)} λ_{k-1,i} ω_{kij} K_{ki}(x) exp(−(1/2) X_i' H̃_{kij}^{-1} X_i + g_{kij}' X_i + κ_{kij}).

Write

(6.13)  ρ_{kij} = λ_{k-1,i} ω_{kij} exp(κ_{kij}) / (|B| |D| φ(y_k)),

so

α_k(x) ≈ Σ_{i=1}^{N(k-1)} Σ_{j=1}^{m(k,i)} ρ_{kij} φ(D^{-1}(y_k − C(x))) K_{ki}(x) exp(−(1/2) X_i' H̃_{kij}^{-1} X_i + g_{kij}' X_i).

Consider one term only in the sum and replace C(x) by its second-order Taylor expansion about A(µ_{k-1,i}). Again, drop suffices to simplify the notation. Then

L_{kij}(x) = L(x) := ρ φ(D^{-1}(y_k − C(x))) K(x) exp(−(1/2) X_i' H̃^{-1} X_i + g' X_i)
  ≈ ρ K(x) exp(−(1/2) X_i' H̃^{-1} X_i + g' X_i)
    · exp(−(1/2)(y − C − ∇C' X_i − (1/2)∇²C X_i^{(2)})' D^{-2}(y − C − ∇C' X_i − (1/2)∇²C X_i^{(2)})).

Here again C = C(A(µ_{k-1,i})) ∈ R^m, ∇C = ∇C(A(µ_{k-1,i})), ∇²C is the stack of Hessians ∇²C_1(A(µ_{k-1,i})), ..., ∇²C_m(A(µ_{k-1,i})), A = A(µ_{k-1,i}) ∈ R^n, and X_i = x − A(µ_{k-1,i}) = x − A.
Keeping only terms up to order (x − A)² = X_i² this gives

L(x) ≈ ρ K(x) exp(−(1/2)(y − C − ∇C' X_i)' D^{-2}(y − C − ∇C' X_i)) exp((1/2)(y − C)' D^{-2} ∇²C X_i^{(2)}) exp(−(1/2) X_i' H̃^{-1} X_i + g' X_i).

As in (4.4) we can write exp((1/2)(y − C)' D^{-2} ∇²C X_i^{(2)}) as exp(−(1/2) X_i' R^{-1} X_i), where R^{-1} is built from ζ = D^{-2}(y_k − C(A(µ_{k-1,i}))) and the Hessians ∇²C_1(A(µ_{k-1,i})), ..., ∇²C_m(A(µ_{k-1,i})). Write

Z^{-1} := R^{-1} + H̃^{-1}.

Then, dropping suffices,

L(x) ≈ ρ K(x) exp(−(1/2) X_i' Z^{-1} X_i + g' X_i) exp(−(1/2)(y − C − ∇C' X_i)' D^{-2}(y − C − ∇C' X_i))

and

α_k(x) ≈ Σ_{i=1}^{N(k-1)} Σ_{j=1}^{m(k,i)} ρ_{k,i,j} L_{k,i,j}(x).

Recalling X = X_i = x − A(µ_{k-1,i}) and substituting for K(x) from (6.8), we have

L_{kij}(x) = L(x) ≈ ρ exp(−(1/2)[−(Σ^{-1}µ + ∇A B^{-2}(X + ∇A'µ))' σ (Σ^{-1}µ + ∇A B^{-2}(X + ∇A'µ)) + µ' Σ^{-1} µ + (X + ∇A'µ)' B^{-2}(X + ∇A'µ)])
  · exp(−(1/2) X' Z^{-1} X + g' X) exp(−(1/2)(y − C − ∇C' X)' D^{-2}(y − C − ∇C' X)).

Collecting terms in X = X_i = x − A(µ_{k-1,i}), this is

= G exp(−(1/2) X' Σ̂^{-1} X + Δ' X),
where

(6.14)  Σ̂^{-1} = Σ̂_{kij}^{-1} = ∇C(A(µ_{k-1,i})) D^{-2} ∇C(A(µ_{k-1,i}))' + B^{-2} − B^{-2} ∇A(µ_{k-1,i})' σ_{k,i} ∇A(µ_{k-1,i}) B^{-2} + Z_{kij}^{-1}

and

(6.15)  Δ = Δ_{kij} = ∇C(A(µ_{k-1,i})) D^{-2}(y_k − C(A(µ_{k-1,i}))) − B^{-2} ∇A(µ_{k-1,i})' µ_{k-1,i} + g_{k,i,j}
        + B^{-2} ∇A(µ_{k-1,i})' σ_{k,i}(Σ_{k-1,i}^{-1} µ_{k-1,i} + ∇A(µ_{k-1,i}) B^{-2} ∇A(µ_{k-1,i})' µ_{k-1,i}).

Further,

(6.16)  G = G_{kij} = ρ exp(−(1/2)[−(Σ^{-1}µ + ∇A B^{-2} ∇A'µ)' σ (Σ^{-1}µ + ∇A B^{-2} ∇A'µ) + µ' Σ^{-1} µ + µ' ∇A B^{-2} ∇A' µ + (y − C)' D^{-2}(y − C)]).

Completing the square we have

(6.17)  L_{k,i,j}(x) = L(x) ≈ λ_{k,i,j} exp(−(1/2)(X_i − Σ̂ Δ)' Σ̂^{-1}(X_i − Σ̂ Δ)),

where

(6.18)  λ_{k,i,j} = G_{kij} exp((1/2) Δ' Σ̂ Δ).

Now, as in (5.5),

Σ_{k|k-1,i} := ∇A(µ_{k-1,i})' Σ_{k-1,i} ∇A(µ_{k-1,i}) + B²

is an approximate conditional variance of x_k given Y_{k-1}, if x_{k-1|k-1} ∼ N(µ_{k-1,i}, Σ_{k-1,i}). Then, as in Corollary 5.2,

Σ_{k|k-1,i}^{-1} = B^{-2} − B^{-2} ∇A(µ_{k-1,i})' σ_{k,i} ∇A(µ_{k-1,i}) B^{-2},

so, from (6.14),

Σ̂^{-1} = Σ̂_{k,i,j}^{-1} = ∇C D^{-2} ∇C' + Σ_{k|k-1,i}^{-1} + Z^{-1}.

Write

(6.19)  Γ^{-1} = Γ_{k,i,j}^{-1} := Σ_{k|k-1,i}^{-1} + Z_{k,i,j}^{-1},

so, using the MIL (Lemma 5.1),

Γ = Σ_{k|k-1,i} − Σ_{k|k-1,i}(Σ_{k|k-1,i} + Z_{k,i,j})^{-1} Σ_{k|k-1,i}.
Also, again using Lemma 5.1, as

Σ̂^{-1} = Σ̂_{k,i,j}^{-1} = ∇C D^{-2} ∇C' + Γ^{-1},

(6.20)  Σ̂ = Σ̂_{k,i,j} = Γ − Γ ∇C (D² + ∇C' Γ ∇C)^{-1} ∇C' Γ.

Finally we evaluate µ_{k,i,j} = Σ̂_{k,i,j} Δ_{k,i,j}. Dropping suffices, from (6.14) and (6.15) this is

[Γ − Γ ∇C (D² + ∇C' Γ ∇C)^{-1} ∇C' Γ] ∇C D^{-2}(y − C) + Σ̂ [B^{-2} ∇A' σ (Σ^{-1} µ + ∇A B^{-2} ∇A' µ)] + Σ̂ g − Σ̂ B^{-2} ∇A' µ
  = [Γ ∇C − Γ ∇C (D² + ∇C' Γ ∇C)^{-1}(∇C' Γ ∇C + D² − D²)] D^{-2}(y − C) + Σ̂ [B^{-2} ∇A' σ (Σ^{-1} + ∇A B^{-2} ∇A') µ] + Σ̂ g − Σ̂ B^{-2} ∇A' µ.

Recalling from (6.5) that σ^{-1} = Σ^{-1} + ∇A B^{-2} ∇A', in terms of the variable X = X_i = x − A(µ_{k-1,i}) the mean is

Σ̂_{k,i,j} Δ_{k,i,j} = Γ ∇C (D² + ∇C' Γ ∇C)^{-1}(y − C) − Σ̂ g,

so in terms of the original variable x the mean is

µ = A + Γ ∇C (D² + ∇C' Γ ∇C)^{-1}(y − C) − Σ̂ g.
That is, now writing in suffices, we have obtained a Gaussian density in the sum which determines α_k(x), which has variance

Σ_{k,i,j} = Γ_{k,i,j} − Γ_{k,i,j} ∇C(A(µ_{k-1,i})) (D² + ∇C(A(µ_{k-1,i}))' Γ_{k,i,j} ∇C(A(µ_{k-1,i})))^{-1} ∇C(A(µ_{k-1,i}))' Γ_{k,i,j},

where Γ_{k,i,j}^{-1} = Σ_{k|k-1,i}^{-1} + Z_{k,i,j}^{-1}, and mean

µ_{k,i,j} = A(µ_{k-1,i}) + Γ_{k,i,j} ∇C(A(µ_{k-1,i})) (D² + ∇C(A(µ_{k-1,i}))' Γ_{k,i,j} ∇C(A(µ_{k-1,i})))^{-1} (y_k − C(A(µ_{k-1,i}))) − Σ_{k,i,j} g_{k,i,j}.

Here g_{k,i,j} is given by (6.12). Also, the new weights λ_{k,i,j} are given by (6.18). Therefore,

α_k(x) ≈ Σ_{i=1}^{N(k-1)} Σ_{j=1}^{m(k,i)} λ_{k,i,j} exp(−(1/2)(x − µ_{k,i,j})' Σ_{k,i,j}^{-1}(x − µ_{k,i,j})).

Pruning can be effected by retaining only so many of the terms, based on the size of the λ_{k,i,j}.

7. Conclusion. A discrete-time version of the Zakai equation has been obtained. Using first-order linearizations the equation can be solved and the extended Kalman filter derived. Second-order Taylor expansions of the nonlinear terms were then considered, and approximate solutions of the Zakai equation derived in terms of Gaussian sums. Formulae for updating the means, variances and weights of the terms in the sums are given. Detailed proofs will appear in [2].

REFERENCES

[1] R. J. Elliott, L. Aggoun, and J. B. Moore, Hidden Markov Models: Estimation and Control, Springer-Verlag, Berlin, Heidelberg, New York, second edition, 2006.
[2] R. J. Elliott and S. Haykin, An extended EKF, University of Calgary preprint, 2006, to appear.
[3] R. J. Elliott and V. Krishnamurthy, New finite dimensional filters for parameter estimation of discrete time linear Gaussian models, IEEE Trans. Automatic Control, 44 (1999).
[4] G. H. Golub, Some modified matrix eigenvalue problems, SIAM Review, 15 (1973).
[5] K. Ito and K. Xiong, Gaussian filters for non-linear filtering problems, IEEE Trans. Automatic Control, 45 (2000).
[6] H. Kushner and A. S. Budhiraja, A nonlinear filtering algorithm based on an approximation of the conditional distribution, IEEE Trans. Automatic Control, 45 (2000).
More informationEvent-based State Estimation of Linear Dynamical Systems: Communication Rate Analysis
2014 American Control Conference Estimation of Linear Dynamical Systems: Dawei Shi, Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada Department of Electronic
More informationOR MSc Maths Revision Course
OR MSc Maths Revision Course Tom Byrne School of Mathematics University of Edinburgh t.m.byrne@sms.ed.ac.uk 15 September 2017 General Information Today JCMB Lecture Theatre A, 09:30-12:30 Mathematics revision
More informationRecursive Nonlinear Filter for a Continuous-Discrete Time Model: Separation of Parameters and Observations
Recursive Nonlinear Filter for a Continuous-Discrete Time Model: Separation of Parameters and Observations Sergey V. Lototsky Boris L. Rozovskii Published in IEEE Transactions on Automatic Control, Vol.
More information3. Probability and Statistics
FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important
More informationEE226a - Summary of Lecture 13 and 14 Kalman Filter: Convergence
1 EE226a - Summary of Lecture 13 and 14 Kalman Filter: Convergence Jean Walrand I. SUMMARY Here are the key ideas and results of this important topic. Section II reviews Kalman Filter. A system is observable
More informationAPM526 CLASSROOM NOTES PART IB. APM526, Spring 2018 Last update: MAR 12
APM526 CLASSROOM NOTES PART IB APM526, Spring 2018 Last update: MAR 12 1 PARTICLE METHODS 1. Particle based methods: Large systems of ODEs for individual particles. 2. Partial differential equations: Equations
More informationJoint Distributions. (a) Scalar multiplication: k = c d. (b) Product of two matrices: c d. (c) The transpose of a matrix:
Joint Distributions Joint Distributions A bivariate normal distribution generalizes the concept of normal distribution to bivariate random variables It requires a matrix formulation of quadratic forms,
More informationExpectation Propagation in Dynamical Systems
Expectation Propagation in Dynamical Systems Marc Peter Deisenroth Joint Work with Shakir Mohamed (UBC) August 10, 2012 Marc Deisenroth (TU Darmstadt) EP in Dynamical Systems 1 Motivation Figure : Complex
More informationDynamic models 1 Kalman filters, linearization,
Koller & Friedman: Chapter 16 Jordan: Chapters 13, 15 Uri Lerner s Thesis: Chapters 3,9 Dynamic models 1 Kalman filters, linearization, Switching KFs, Assumed density filters Probabilistic Graphical Models
More informationThe Unscented Particle Filter
The Unscented Particle Filter Rudolph van der Merwe (OGI) Nando de Freitas (UC Bereley) Arnaud Doucet (Cambridge University) Eric Wan (OGI) Outline Optimal Estimation & Filtering Optimal Recursive Bayesian
More informationMarkov Chain Monte Carlo Inference. Siamak Ravanbakhsh Winter 2018
Graphical Models Markov Chain Monte Carlo Inference Siamak Ravanbakhsh Winter 2018 Learning objectives Markov chains the idea behind Markov Chain Monte Carlo (MCMC) two important examples: Gibbs sampling
More information1 Kalman Filter Introduction
1 Kalman Filter Introduction You should first read Chapter 1 of Stochastic models, estimation, and control: Volume 1 by Peter S. Maybec (available here). 1.1 Explanation of Equations (1-3) and (1-4) Equation
More informationLecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Process Approximations
Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Process Approximations Simo Särkkä Aalto University Tampere University of Technology Lappeenranta University of Technology Finland November
More informationComputer Intensive Methods in Mathematical Statistics
Computer Intensive Methods in Mathematical Statistics Department of mathematics johawes@kth.se Lecture 16 Advanced topics in computational statistics 18 May 2017 Computer Intensive Methods (1) Plan of
More informationOn a Data Assimilation Method coupling Kalman Filtering, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model
On a Data Assimilation Method coupling, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model 2016 SIAM Conference on Uncertainty Quantification Basile Marchand 1, Ludovic
More informationEM-algorithm for Training of State-space Models with Application to Time Series Prediction
EM-algorithm for Training of State-space Models with Application to Time Series Prediction Elia Liitiäinen, Nima Reyhani and Amaury Lendasse Helsinki University of Technology - Neural Networks Research
More informationADAPTIVE FILTER THEORY
ADAPTIVE FILTER THEORY Fourth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada Front ice Hall PRENTICE HALL Upper Saddle River, New Jersey 07458 Preface
More informationSequential Monte Carlo Methods for Bayesian Computation
Sequential Monte Carlo Methods for Bayesian Computation A. Doucet Kyoto Sept. 2012 A. Doucet (MLSS Sept. 2012) Sept. 2012 1 / 136 Motivating Example 1: Generic Bayesian Model Let X be a vector parameter
More informationProbabilistic Graphical Models
Probabilistic Graphical Models Lecture 12 Dynamical Models CS/CNS/EE 155 Andreas Krause Homework 3 out tonight Start early!! Announcements Project milestones due today Please email to TAs 2 Parameter learning
More informationHomework sheet 4: EIGENVALUES AND EIGENVECTORS. DIAGONALIZATION (with solutions) Year ? Why or why not? 6 9
Bachelor in Statistics and Business Universidad Carlos III de Madrid Mathematical Methods II María Barbero Liñán Homework sheet 4: EIGENVALUES AND EIGENVECTORS DIAGONALIZATION (with solutions) Year - Is
More informationA Matrix Theoretic Derivation of the Kalman Filter
A Matrix Theoretic Derivation of the Kalman Filter 4 September 2008 Abstract This paper presents a matrix-theoretic derivation of the Kalman filter that is accessible to students with a strong grounding
More informationMachine Learning 4771
Machine Learning 4771 Instructor: ony Jebara Kalman Filtering Linear Dynamical Systems and Kalman Filtering Structure from Motion Linear Dynamical Systems Audio: x=pitch y=acoustic waveform Vision: x=object
More informationHigher-Degree Stochastic Integration Filtering
Higher-Degree Stochastic Integration Filtering Syed Safwan Khalid, Naveed Ur Rehman, and Shafayat Abrar Abstract arxiv:1608.00337v1 cs.sy 1 Aug 016 We obtain a class of higher-degree stochastic integration
More informationSolutions Preliminary Examination in Numerical Analysis January, 2017
Solutions Preliminary Examination in Numerical Analysis January, 07 Root Finding The roots are -,0, a) First consider x 0 > Let x n+ = + ε and x n = + δ with δ > 0 The iteration gives 0 < ε δ < 3, which
More informationMinimum Polynomials of Linear Transformations
Minimum Polynomials of Linear Transformations Spencer De Chenne University of Puget Sound 30 April 2014 Table of Contents Polynomial Basics Endomorphisms Minimum Polynomial Building Linear Transformations
More informationLecture 1a: Basic Concepts and Recaps
Lecture 1a: Basic Concepts and Recaps Cédric Archambeau Centre for Computational Statistics and Machine Learning Department of Computer Science University College London c.archambeau@cs.ucl.ac.uk Advanced
More informationEcon 508B: Lecture 5
Econ 508B: Lecture 5 Expectation, MGF and CGF Hongyi Liu Washington University in St. Louis July 31, 2017 Hongyi Liu (Washington University in St. Louis) Math Camp 2017 Stats July 31, 2017 1 / 23 Outline
More informationConditions for successful data assimilation
Conditions for successful data assimilation Matthias Morzfeld *,**, Alexandre J. Chorin *,**, Peter Bickel # * Department of Mathematics University of California, Berkeley ** Lawrence Berkeley National
More informationSmoothers: Types and Benchmarks
Smoothers: Types and Benchmarks Patrick N. Raanes Oxford University, NERSC 8th International EnKF Workshop May 27, 2013 Chris Farmer, Irene Moroz Laurent Bertino NERSC Geir Evensen Abstract Talk builds
More informationMATH 205C: STATIONARY PHASE LEMMA
MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)
More informationExpectation. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda
Expectation DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Aim Describe random variables with a few numbers: mean, variance,
More informationRandom Matrix Eigenvalue Problems in Probabilistic Structural Mechanics
Random Matrix Eigenvalue Problems in Probabilistic Structural Mechanics S Adhikari Department of Aerospace Engineering, University of Bristol, Bristol, U.K. URL: http://www.aer.bris.ac.uk/contact/academic/adhikari/home.html
More informationDifferential Equations 2280 Sample Midterm Exam 3 with Solutions Exam Date: 24 April 2015 at 12:50pm
Differential Equations 228 Sample Midterm Exam 3 with Solutions Exam Date: 24 April 25 at 2:5pm Instructions: This in-class exam is 5 minutes. No calculators, notes, tables or books. No answer check is
More informationAN EFFICIENT TWO-STAGE SAMPLING METHOD IN PARTICLE FILTER. Qi Cheng and Pascal Bondon. CNRS UMR 8506, Université Paris XI, France.
AN EFFICIENT TWO-STAGE SAMPLING METHOD IN PARTICLE FILTER Qi Cheng and Pascal Bondon CNRS UMR 8506, Université Paris XI, France. August 27, 2011 Abstract We present a modified bootstrap filter to draw
More informationAutonomous Mobile Robot Design
Autonomous Mobile Robot Design Topic: Extended Kalman Filter Dr. Kostas Alexis (CSE) These slides relied on the lectures from C. Stachniss, J. Sturm and the book Probabilistic Robotics from Thurn et al.
More informationVolume 30, Issue 3. A note on Kalman filter approach to solution of rational expectations models
Volume 30, Issue 3 A note on Kalman filter approach to solution of rational expectations models Marco Maria Sorge BGSE, University of Bonn Abstract In this note, a class of nonlinear dynamic models under
More informationBayesian Inference for DSGE Models. Lawrence J. Christiano
Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Bayesian inference Bayes rule. Monte Carlo integation.
More informationLessons in Estimation Theory for Signal Processing, Communications, and Control
Lessons in Estimation Theory for Signal Processing, Communications, and Control Jerry M. Mendel Department of Electrical Engineering University of Southern California Los Angeles, California PRENTICE HALL
More informationConvergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit
Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit Evan Kwiatkowski, Jan Mandel University of Colorado Denver December 11, 2014 OUTLINE 2 Data Assimilation Bayesian Estimation
More informationCS281A/Stat241A Lecture 17
CS281A/Stat241A Lecture 17 p. 1/4 CS281A/Stat241A Lecture 17 Factor Analysis and State Space Models Peter Bartlett CS281A/Stat241A Lecture 17 p. 2/4 Key ideas of this lecture Factor Analysis. Recall: Gaussian
More information19 : Slice Sampling and HMC
10-708: Probabilistic Graphical Models 10-708, Spring 2018 19 : Slice Sampling and HMC Lecturer: Kayhan Batmanghelich Scribes: Boxiang Lyu 1 MCMC (Auxiliary Variables Methods) In inference, we are often
More informationOptimal Filtering for Linear States over Polynomial Observations
Optimal filtering for linear states over polynomial observations 261 14 2 Optimal Filtering for Linear States over Polynomial Observations Joel Perez 1, Jose P. Perez 1 and Rogelio Soto 2 1 Department
More informationROBOTICS 01PEEQW. Basilio Bona DAUIN Politecnico di Torino
ROBOTICS 01PEEQW Basilio Bona DAUIN Politecnico di Torino Probabilistic Fundamentals in Robotics Gaussian Filters Course Outline Basic mathematical framework Probabilistic models of mobile robots Mobile
More information8 - Continuous random vectors
8-1 Continuous random vectors S. Lall, Stanford 2011.01.25.01 8 - Continuous random vectors Mean-square deviation Mean-variance decomposition Gaussian random vectors The Gamma function The χ 2 distribution
More informationNumerical Analysis Preliminary Exam 10 am to 1 pm, August 20, 2018
Numerical Analysis Preliminary Exam 1 am to 1 pm, August 2, 218 Instructions. You have three hours to complete this exam. Submit solutions to four (and no more) of the following six problems. Please start
More informationPade Approximations and the Transcendence
Pade Approximations and the Transcendence of π Ernie Croot March 9, 27 1 Introduction Lindemann proved the following theorem, which implies that π is transcendental: Theorem 1 Suppose that α 1,..., α k
More information6.4 Kalman Filter Equations
6.4 Kalman Filter Equations 6.4.1 Recap: Auxiliary variables Recall the definition of the auxiliary random variables x p k) and x m k): Init: x m 0) := x0) S1: x p k) := Ak 1)x m k 1) +uk 1) +vk 1) S2:
More informationGaussian Process Approximations of Stochastic Differential Equations
Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Dan Cawford Manfred Opper John Shawe-Taylor May, 2006 1 Introduction Some of the most complex models routinely run
More information