ECE531 Lecture 11: Dynamic Parameter Estimation: Kalman-Bucy Filter


1 ECE531 Lecture 11: Dynamic Parameter Estimation: Kalman-Bucy Filter
D. Richard Brown III, Worcester Polytechnic Institute, 09-Apr-2009

2 The Kalman Water Filter (from IEEE Control Systems Magazine, Feb. 2006)

3 Dynamic MMSE State Estimation (Filtering)

[Block diagram: the input $U[n]$ drives the state update $(F[n], G[n])$ to produce $X[n+1]$; a unit delay yields $X[n]$, which is mapped through $H[n]$ and corrupted by the measurement noise $V[n]$ to give the observation $Y[n]$.]

For now, we will focus on the filtering problem, i.e., estimating $X[l]$ given the observations $Y[0], \ldots, Y[l]$. We know that the MMSE state estimator is
$$\hat{X}[l|l] := E\left\{X[l] \mid Y_0^l\right\}$$
where $Y_0^l := \{Y[0], \ldots, Y[l]\}$. The goal here is to leverage the restrictions we placed on the dynamical model in Lecture 10 to derive a computationally efficient MMSE estimator.

4 MMSE State Prediction $\hat{X}[0]$ Given No Observations

Recall that we have assumed a Gaussian distributed initial state $X[0] \sim N(m[0], \Sigma[0])$ where $m[0]$ and $\Sigma[0]$ are known. If you have no observations yet, what is the best estimator of the initial state?

Prior to receiving the first observation, the MMSE estimate (prediction) of the initial state $X[0]$ is simply
$$\hat{X}[0|-1] := E\{X[0] \mid \text{no observations}\} = m[0].$$
The error covariance matrix of this MMSE estimator (predictor) is then
$$\Sigma[0|-1] := E\left\{\left(\hat{X}[0|-1] - X[0]\right)\left(\hat{X}[0|-1] - X[0]\right)^\top\right\} = E\left\{(m[0] - X[0])(m[0] - X[0])^\top\right\} = \Sigma[0].$$

5 Review: Conditional Mean of Jointly Gaussian Vectors

Given $Y = HX + V$ with $X \sim N(m, \Sigma)$, $V \sim N(0, R)$, and $X$ and $V$ independent, we would like to compute the conditional mean $E[X \mid Y]$. We can use the conditional pdf of jointly Gaussian random vectors to write
$$E[X \mid Y = y] = E[X] + \mathrm{cov}(X, Y)\,[\mathrm{cov}(Y, Y)]^{-1}\,(y - E[Y]).$$
It is pretty easy to show that $E[X] = m$, $E[Y] = Hm$, $\mathrm{cov}(X, Y) = \Sigma H^\top$, and $\mathrm{cov}(Y, Y) = H\Sigma H^\top + R$. Hence,
$$E[X \mid Y = y] = m + \Sigma H^\top\left[H\Sigma H^\top + R\right]^{-1}(y - Hm).$$
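A minimal NumPy sketch of this conditional-mean formula; the particular $m$, $\Sigma$, $H$, $R$, and observed $y$ below are illustrative values, not from the lecture.

```python
import numpy as np

# Conditional mean of X given Y = HX + V for jointly Gaussian
# X ~ N(m, Sigma) and V ~ N(0, R).  All numbers are illustrative.
m = np.array([1.0, 0.0])                    # prior mean of X
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])              # prior covariance of X
H = np.array([[1.0, 0.0]])                  # observation matrix
R = np.array([[0.1]])                       # measurement noise covariance
y = np.array([1.4])                         # observed value of Y

# E[X | Y = y] = m + Sigma H' (H Sigma H' + R)^{-1} (y - H m)
S = H @ Sigma @ H.T + R                     # cov(Y, Y)
x_hat = m + Sigma @ H.T @ np.linalg.solve(S, y - H @ m)
print(x_hat)
```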

6 MMSE State Estimate $\hat{X}[0]$ Given $Y[0]$ (Part 1)

At time $n = 0$, we receive the observation $Y[0] = H[0]X[0] + V[0]$ where $X[0] \sim N(m[0], \Sigma[0])$, $V[0] \sim N(0, R[0])$, and $H[0]$ is known. Note that, under the restrictions in Lecture 10, this is just the standard linear Gaussian model. We use our prior result to find the MMSE estimate of $X[0]$ given $Y[0]$:
$$\hat{X}[0|0] := E\{X[0] \mid Y[0]\} = m[0] + \Sigma[0]H^\top[0]\left(H[0]\Sigma[0]H^\top[0] + R[0]\right)^{-1}(Y[0] - H[0]m[0])$$
$$= \hat{X}[0|-1] + \Sigma[0|-1]H^\top[0]\left(H[0]\Sigma[0|-1]H^\top[0] + R[0]\right)^{-1}\left(Y[0] - H[0]\hat{X}[0|-1]\right).$$

7 MMSE State Estimate $\hat{X}[0]$ Given $Y[0]$ (Part 2)

If we define
$$K[0] := \mathrm{cov}(X[0], Y[0])\,[\mathrm{cov}(Y[0], Y[0])]^{-1} = \Sigma[0|-1]H^\top[0]\left(H[0]\Sigma[0|-1]H^\top[0] + R[0]\right)^{-1}$$
we can write
$$\hat{X}[0|0] = \hat{X}[0|-1] + K[0]\left(Y[0] - H[0]\hat{X}[0|-1]\right).$$
Remarks: The term $\tilde{Y}[0|-1] := Y[0] - H[0]\hat{X}[0|-1]$ is sometimes called the innovation. It is the error between the MMSE prediction of $Y[0]$ and the actual observation $Y[0]$. Substituting for $Y[0]$, note that the innovation can be written as
$$\tilde{Y}[0|-1] = H[0]X[0] + V[0] - H[0]\hat{X}[0|-1] = H[0]\left(X[0] - \hat{X}[0|-1]\right) + V[0].$$

8 Kalman Gain

The matrix
$$K[0] := \mathrm{cov}(X[0], Y[0])\,[\mathrm{cov}(Y[0], Y[0])]^{-1}$$
is sometimes called the Kalman gain. Both covariances are unconditional here. Rewriting the first covariance term in the Kalman gain as
$$\mathrm{cov}(X[0], Y[0]) = E\left\{\left(X[0] - \hat{X}[0|-1]\right)\left(Y[0] - H[0]\hat{X}[0|-1]\right)^\top\right\}$$
and the second term as
$$\mathrm{cov}(Y[0], Y[0]) = E\left\{\left(Y[0] - H[0]\hat{X}[0|-1]\right)\left(Y[0] - H[0]\hat{X}[0|-1]\right)^\top\right\}$$
we can see that the Kalman gain (in a simplistic scalar sense) is proportional to the covariance between the state prediction error and the innovation, and inversely proportional to the innovation variance. Does this make sense in the context of our state estimation equation?
$$\hat{X}[0|0] = \hat{X}[0|-1] + K[0]\left(Y[0] - H[0]\hat{X}[0|-1]\right)$$

9 Additional Interpretation

Note that $K[0]$ is only a function of the error covariance matrix of the initial state prediction $\Sigma[0|-1]$, the known observation matrix $H[0]$, and the known measurement noise covariance $R[0]$.

Given $K[0]$ and $H[0]$, the state estimate $\hat{X}[0|0]$ is only a function of the previous state prediction $\hat{X}[0|-1]$ and the current observation $Y[0]$.

What can we say about $\Sigma[0|0]$, the error covariance matrix of the state estimate $\hat{X}[0|0]$ (conditioned on the observation $Y[0]$)?

10 MMSE State Estimate $\hat{X}[0]$ Given $Y[0]$

The error covariance matrix of the MMSE estimator $\hat{X}[0|0]$ can be computed as
$$\Sigma[0|0] := E\left\{\left(\hat{X}[0|0] - X[0]\right)\left(\hat{X}[0|0] - X[0]\right)^\top \,\Big|\, Y[0]\right\} = \mathrm{cov}\{X[0] \mid Y[0]\}.$$
We can use the standard result for jointly Gaussian random vectors to write
$$\Sigma[0|0] = \Sigma[0] - \Sigma[0]H^\top[0]\left(H[0]\Sigma[0]H^\top[0] + R[0]\right)^{-1}H[0]\Sigma[0] = \Sigma[0|-1] - K[0]H[0]\Sigma[0|-1]$$
where we used our definitions for $\Sigma[0|-1]$ and $K[0]$ in the last equality. Note that, given $K[0]$ and $H[0]$, the error covariance matrix of the MMSE state estimator after the observation $Y[0]$ is only a function of the error covariance matrix of the prediction $\Sigma[0|-1]$.

11 Remarks

What we have done so far:
1. Predicted the first state with no observations: $\hat{X}[0|-1]$.
2. Computed the error covariance matrix of this prediction: $\Sigma[0|-1]$.
3. Received the observation $Y[0]$.
4. Estimated the first state given the observation: $\hat{X}[0|0]$.
5. Computed the error covariance matrix of this estimate: $\Sigma[0|0]$.

Interesting observations: The estimate $\hat{X}[0|0]$ is expressed in terms of the prediction $\hat{X}[0|-1]$ and the observation $Y[0]$. The estimate error covariance matrix $\Sigma[0|0]$ is expressed in terms of the prediction error covariance matrix $\Sigma[0|-1]$.

The goal here is to develop a general recursion. Our first step will be to see if the prediction for the next state (and its ECM) can be expressed in terms of the estimate (and its ECM) of the previous state.

12 State Update from $X[l]$ to $X[n+1]$

From our linear dynamical model, we have
$$X[n+1] = F[n]X[n] + G[n]U[n] = F[n]\left(F[n-1]X[n-1] + G[n-1]U[n-1]\right) + G[n]U[n] = \text{etc.}$$
For $n + 1 > l$, we can repeat this process to write
$$X[n+1] = \underbrace{\left\{\prod_{k=0}^{n-l} F[n-k]\right\}}_{F_n^l} X[l] + \sum_{j=l}^{n} \underbrace{\left\{\prod_{k=0}^{n-j-1} F[n-k]\right\}}_{F_n^{j+1}} G[j]U[j]$$
where
$$F_n^t := \begin{cases} F[n]F[n-1]\cdots F[t] & t \le n \\ I & t > n. \end{cases}$$

13 General Expression for MMSE State Prediction

For $n + 1 > l$, we can use our prior result to write
$$\hat{X}[n+1|l] := E\left\{X[n+1] \mid Y_0^l\right\}$$
$$\stackrel{\text{model}}{=} E\left\{F_n^l X[l] + \sum_{j=l}^{n} F_n^{j+1}G[j]U[j] \,\Big|\, Y_0^l\right\}$$
$$\stackrel{\text{linearity}}{=} F_n^l E\left\{X[l] \mid Y_0^l\right\} + \sum_{j=l}^{n} F_n^{j+1}G[j]E\left\{U[j] \mid Y_0^l\right\}$$
$$\stackrel{\text{irrelevance}}{=} F_n^l \hat{X}[l|l] + \sum_{j=l}^{n} F_n^{j+1}G[j]E\{U[j]\}$$
$$= F_n^l \hat{X}[l|l]$$
where the last step uses the fact that the input sequence is zero mean. Note that the one-step predictor can be expressed as $\hat{X}[l+1|l] = F[l]\hat{X}[l|l]$.

14 Conditional ECM of MMSE State Predictor

$$\Sigma[n+1|l] := \mathrm{cov}\left\{X[n+1] \mid Y_0^l\right\}$$
$$\stackrel{\text{model}}{=} \mathrm{cov}\left\{F_n^l X[l] + \sum_{j=l}^{n} F_n^{j+1}G[j]U[j] \,\Big|\, Y_0^l\right\}$$
$$\stackrel{\text{indep}}{=} \mathrm{cov}\left\{F_n^l X[l] \mid Y_0^l\right\} + \mathrm{cov}\left\{\sum_{j=l}^{n} F_n^{j+1}G[j]U[j] \,\Big|\, Y_0^l\right\}$$
$$\stackrel{\text{irrelevance}}{=} \mathrm{cov}\left\{F_n^l X[l] \mid Y_0^l\right\} + \mathrm{cov}\left\{\sum_{j=l}^{n} F_n^{j+1}G[j]U[j]\right\}$$
$$= F_n^l\,\mathrm{cov}\left\{X[l] \mid Y_0^l\right\}(F_n^l)^\top + \sum_{j=l}^{n} F_n^{j+1}G[j]\,\mathrm{cov}\{U[j]\}\,G^\top[j](F_n^{j+1})^\top$$
$$= F_n^l\,\Sigma[l|l]\,(F_n^l)^\top + \sum_{j=l}^{n} F_n^{j+1}G[j]Q[j]G^\top[j](F_n^{j+1})^\top$$
For one-step prediction: $\Sigma[l+1|l] = F[l]\Sigma[l|l]F^\top[l] + G[l]Q[l]G^\top[l]$.

15 Summary of Main Results

MMSE state estimator (only shown so far for the case $l = 0$):
$$\hat{X}[l|l] = \hat{X}[l|l-1] + K[l]\left(Y[l] - H[l]\hat{X}[l|l-1]\right)$$
One-step MMSE state predictor:
$$\hat{X}[l+1|l] = F[l]\hat{X}[l|l]$$
ECM of MMSE state estimator (only shown so far for the case $l = 0$):
$$\Sigma[l|l] = \Sigma[l|l-1] - K[l]H[l]\Sigma[l|l-1]$$
ECM of MMSE one-step state predictor:
$$\Sigma[l+1|l] = F[l]\Sigma[l|l]F^\top[l] + G[l]Q[l]G^\top[l]$$

16 Induction: MMSE State Estimator for Arbitrary $l$

Assume that
$$\hat{X}[l|l] = \hat{X}[l|l-1] + K[l]\left(Y[l] - H[l]\hat{X}[l|l-1]\right)$$
and
$$\Sigma[l|l] = \Sigma[l|l-1] - K[l]H[l]\Sigma[l|l-1]$$
are true for some value of $l$. We want to show that these expressions are also true for $l + 1$ using the fact that the prediction equations have already been shown to be true for any $l$.

Fact (not too difficult to prove): The innovation $\tilde{Y}[l+1|l] := Y[l+1] - H[l+1]\hat{X}[l+1|l]$ is a zero-mean Gaussian random vector uncorrelated with $Y[0], \ldots, Y[l]$.

17 Useful Result 0

Assume that $X$, $Y$, and $Z$ are jointly Gaussian and that $Y$ and $Z$ are uncorrelated with each other. Denote $C$ as the covariance matrix between the subscripted random variables. We can use the conditional mean result for jointly Gaussian random vectors to write
$$E\{X \mid Y, Z\} = E\{X\} + \begin{bmatrix} C_{XY} & C_{XZ} \end{bmatrix}\begin{bmatrix} C_{YY} & C_{YZ} \\ C_{ZY} & C_{ZZ} \end{bmatrix}^{-1}\begin{bmatrix} Y - E\{Y\} \\ Z - E\{Z\} \end{bmatrix}.$$
Under our assumption that $Y$ and $Z$ are uncorrelated, both $C_{YZ}$ and $C_{ZY}$ are equal to zero. The matrix inverse is then easy to compute and we have the useful result
$$E\{X \mid Y, Z\} = E\{X\} + C_{XY}C_{YY}^{-1}(Y - E\{Y\}) + C_{XZ}C_{ZZ}^{-1}(Z - E\{Z\}) = E\{X \mid Y\} + C_{XZ}C_{ZZ}^{-1}(Z - E\{Z\}).$$

18 Induction: Conditional Mean

$$\hat{X}[l+1|l+1] := E\left\{X[l+1] \mid Y_0^{l+1}\right\} = E\left\{X[l+1] \mid Y_0^l, Y[l+1]\right\} = E\left\{X[l+1] \mid Y_0^l, \tilde{Y}[l+1|l]\right\}$$
$$= E\left\{X[l+1] \mid Y_0^l\right\} + \mathrm{cov}\left(X[l+1], \tilde{Y}[l+1|l]\right)\left[\mathrm{cov}\left(\tilde{Y}[l+1|l], \tilde{Y}[l+1|l]\right)\right]^{-1}\tilde{Y}[l+1|l]$$
$$= \hat{X}[l+1|l] + \mathrm{cov}\left(X[l+1], \tilde{Y}[l+1|l]\right)\left[\mathrm{cov}\left(\tilde{Y}[l+1|l], \tilde{Y}[l+1|l]\right)\right]^{-1}\tilde{Y}[l+1|l]$$
where we have used useful result 0 and the fact that $E\{\tilde{Y}[l+1|l]\} = 0$ to write the second-to-last expression.

19 Induction: Conditional Mean

Defining
$$K[l+1] := \mathrm{cov}\left(X[l+1], \tilde{Y}[l+1|l]\right)\left[\mathrm{cov}\left(\tilde{Y}[l+1|l], \tilde{Y}[l+1|l]\right)\right]^{-1}$$
and substituting for the innovation $\tilde{Y}[l+1|l] := Y[l+1] - H[l+1]\hat{X}[l+1|l]$, we get the result we wanted:
$$\hat{X}[l+1|l+1] = \hat{X}[l+1|l] + K[l+1]\left(Y[l+1] - H[l+1]\hat{X}[l+1|l]\right)$$

20 Induction: Error Covariance Matrix

The final step is to show
$$\Sigma[l+1|l+1] = \Sigma[l+1|l] - K[l+1]H[l+1]\Sigma[l+1|l].$$
To see this, start from the definition
$$\Sigma[l+1|l+1] = E\left\{\left(\hat{X}[l+1|l+1] - X[l+1]\right)\left(\hat{X}[l+1|l+1] - X[l+1]\right)^\top\right\}$$
and substitute our result
$$\hat{X}[l+1|l+1] = \hat{X}[l+1|l] + K[l+1]\left(Y[l+1] - H[l+1]\hat{X}[l+1|l]\right).$$
After expanding the outer product, there are four expectations that need to be computed...

21 Useful Result Number 1

$$\mathrm{cov}\left(X[l+1], \tilde{Y}[l+1|l]\right) = E\left\{\left(X[l+1] - \hat{X}[l+1|l]\right)\left(Y[l+1] - H[l+1]\hat{X}[l+1|l]\right)^\top\right\}$$
$$= E\left\{\left(X[l+1] - \hat{X}[l+1|l]\right)\left(H[l+1]\left(X[l+1] - \hat{X}[l+1|l]\right) + V[l+1]\right)^\top\right\}$$
$$= E\left\{\left(X[l+1] - \hat{X}[l+1|l]\right)\left(X[l+1] - \hat{X}[l+1|l]\right)^\top\right\}H^\top[l+1]$$
$$= \Sigma[l+1|l]H^\top[l+1]$$

22 Useful Result Number 2

$$\mathrm{cov}\left(\tilde{Y}[l+1|l], \tilde{Y}[l+1|l]\right) = E\left\{\left(Y[l+1] - H[l+1]\hat{X}[l+1|l]\right)\left(Y[l+1] - H[l+1]\hat{X}[l+1|l]\right)^\top\right\}$$
$$= E\left\{\left(H[l+1]\left(X[l+1] - \hat{X}[l+1|l]\right) + V[l+1]\right)\left(H[l+1]\left(X[l+1] - \hat{X}[l+1|l]\right) + V[l+1]\right)^\top\right\}$$
$$= H[l+1]E\left\{\left(X[l+1] - \hat{X}[l+1|l]\right)\left(X[l+1] - \hat{X}[l+1|l]\right)^\top\right\}H^\top[l+1] + R[l+1]$$
$$= H[l+1]\Sigma[l+1|l]H^\top[l+1] + R[l+1]$$

23 Expanding the outer product gives four terms:
$$E\left\{\left(X[l+1] - \hat{X}[l+1|l]\right)\left(X[l+1] - \hat{X}[l+1|l]\right)^\top\right\}$$
$$-\,K[l+1]E\left\{\left(Y[l+1] - H[l+1]\hat{X}[l+1|l]\right)\left(X[l+1] - \hat{X}[l+1|l]\right)^\top\right\}$$
$$-\,E\left\{\left(X[l+1] - \hat{X}[l+1|l]\right)\left(Y[l+1] - H[l+1]\hat{X}[l+1|l]\right)^\top\right\}K^\top[l+1]$$
$$+\,K[l+1]E\left\{\left(Y[l+1] - H[l+1]\hat{X}[l+1|l]\right)\left(Y[l+1] - H[l+1]\hat{X}[l+1|l]\right)^\top\right\}K^\top[l+1]$$
The first line is simply $\Sigma[l+1|l]$. The third line was solved in useful result number 1 and is $-\Sigma[l+1|l]H^\top[l+1]K^\top[l+1]$. Inspection of $K^\top[l+1]$ reveals that the third line is a symmetric matrix. Hence, the second line is equal to the third line. The fourth line was solved in useful result number 2...

24 From useful result number 2, we can write the fourth line as
$$K[l+1]\left(H[l+1]\Sigma[l+1|l]H^\top[l+1] + R[l+1]\right)K^\top[l+1].$$
This can be simplified a bit since
$$K^\top[l+1] = \left(H[l+1]\Sigma[l+1|l]H^\top[l+1] + R[l+1]\right)^{-1}H[l+1]\Sigma[l+1|l].$$
Substituting for the $K^\top[l+1]$ at the end of the equation, we can write the fourth line as
$$K[l+1]H[l+1]\Sigma[l+1|l]$$
which is the same as the second and third lines, except for a sign change. Putting it all together, we have
$$\Sigma[l+1|l+1] = \Sigma[l+1|l] - K[l+1]H[l+1]\Sigma[l+1|l]$$
which is the result we wanted to show.

25 The Discrete-Time Kalman-Bucy Filter (1961)

Theorem: Under the squared error cost assignment, the linear system model, and the white Gaussian input, noise, and initial state assumptions discussed in Lecture 10, the optimal estimates for the current state (filtering) and the next state (prediction) are given recursively as
$$\hat{X}[l|l] = \hat{X}[l|l-1] + K[l]\left(Y[l] - H[l]\hat{X}[l|l-1]\right) \qquad \text{for } l = 0, 1, \ldots$$
$$\hat{X}[l+1|l] = F[l]\hat{X}[l|l] \qquad \text{for } l = 0, 1, \ldots$$
with the initialization $\hat{X}[0|-1] = m[0]$ and where the matrix
$$K[l] = \Sigma[l|l-1]H^\top[l]\left(H[l]\Sigma[l|l-1]H^\top[l] + R[l]\right)^{-1}$$
with $\Sigma[l|l-1] := \mathrm{cov}\{X[l] \mid Y[0], \ldots, Y[l-1]\}$ and $R[l] := \mathrm{cov}\{V[l]\}$.

26 Kalman Filter: Summary of General Recursion

Initialization (predictions):
$$\hat{X}[0|-1] = m[0], \qquad \Sigma[0|-1] = \Sigma[0]$$
Recursion, beginning with $l = 0$:
$$K[l] = \Sigma[l|l-1]H^\top[l]\left(H[l]\Sigma[l|l-1]H^\top[l] + R[l]\right)^{-1}$$
$$\hat{X}[l|l] = \hat{X}[l|l-1] + K[l]\left(Y[l] - H[l]\hat{X}[l|l-1]\right)$$
$$\Sigma[l|l] = \Sigma[l|l-1] - K[l]H[l]\Sigma[l|l-1]$$
$$\hat{X}[l+1|l] = F[l]\hat{X}[l|l]$$
$$\Sigma[l+1|l] = F[l]\Sigma[l|l]F^\top[l] + G[l]Q[l]G^\top[l]$$
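A minimal NumPy sketch of this recursion. The function name is ours and, for brevity, the model matrices are taken as time-invariant; with time-varying matrices one would simply index $F[l]$, $G[l]$, $H[l]$, $Q[l]$, $R[l]$ inside the loop.

```python
import numpy as np

def kalman_filter(Y, F, G, H, Q, R, m0, Sigma0):
    """Run the recursion above on observations Y[0], ..., Y[L-1]."""
    x_pred, P_pred = m0.copy(), Sigma0.copy()      # X_hat[0|-1], Sigma[0|-1]
    estimates = []
    for y in Y:
        S = H @ P_pred @ H.T + R                   # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain K[l]
        x_est = x_pred + K @ (y - H @ x_pred)      # X_hat[l|l]
        P_est = P_pred - K @ H @ P_pred            # Sigma[l|l]
        estimates.append(x_est)
        x_pred = F @ x_est                         # X_hat[l+1|l]
        P_pred = F @ P_est @ F.T + G @ Q @ G.T     # Sigma[l+1|l]
    return np.array(estimates)
```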

27 The Kalman Filter

[Block diagram: the innovation $Y[l] - H[l]\hat{X}[l|l-1]$ is scaled by the gain $K[l]$ and added to the prediction $\hat{X}[l|l-1]$ to form the estimate $\hat{X}[l|l]$; multiplying by $F[l]$ gives $\hat{X}[l+1|l]$, which passes through a unit delay to become the next prediction.]

28 The Kalman Filter: Time-Invariant Case

[Block diagram: same structure as the previous slide, with time-invariant $F$ and $H$.]

$$K[l] = \Sigma[l|l-1]H^\top\left(H\Sigma[l|l-1]H^\top + R\right)^{-1}$$
$$\hat{X}[l|l] = \hat{X}[l|l-1] + K[l]\left(Y[l] - H\hat{X}[l|l-1]\right)$$
$$\Sigma[l|l] = \Sigma[l|l-1] - K[l]H\Sigma[l|l-1]$$
$$\hat{X}[l+1|l] = F\hat{X}[l|l]$$
$$\Sigma[l+1|l] = F\Sigma[l|l]F^\top + GQG^\top$$
Even though $F$, $G$, $H$, $Q$, and $R$ are time-invariant, $K[l]$ is still time-varying.
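Under standard conditions the time-invariant gain $K[l]$ settles to a steady-state value as $l$ grows. A rough sketch (function name and iteration count are ours) that approximates this steady-state gain by simply iterating the covariance/gain recursion:

```python
import numpy as np

def steady_state_gain(F, G, H, Q, R, Sigma0, iters=500):
    """Iterate the gain/covariance recursion until K[l] (approximately) settles."""
    P = Sigma0.copy()                              # Sigma[0|-1]
    K = None
    for _ in range(iters):
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # K[l]
        P_est = P - K @ H @ P                      # Sigma[l|l]
        P = F @ P_est @ F.T + G @ Q @ G.T          # Sigma[l+1|l]
    return K, P
```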

29 Example: One-Dimensional Motion

Our one-dimensional motion system: the state is $X[l] = [x[l], v[l]]^\top$ (position, velocity), and the input $U[l] = a[l]$ is i.i.d. $N(0, \sigma_a^2)$ (acceleration). The state update equation is
$$X[l+1] = \underbrace{\begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}}_{F} X[l] + \underbrace{\begin{bmatrix} 0 \\ T \end{bmatrix}}_{G} U[l]$$
where $T$ is the sampling time. Suppose we observe the position of the particle in noise. Then
$$Y[n] = \underbrace{\begin{bmatrix} 1 & 0 \end{bmatrix}}_{H} X[n] + V[n]$$
where $V[l]$ is i.i.d. $N(0, \sigma_v^2)$ and is independent of $U[l]$.
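A runnable sketch of this example: simulate the motion model and apply the recursion from the summary slide. The values of $T$, $\sigma_a^2$, and $\sigma_v^2$ match those quoted on the plots that follow; the initial state, prior ($m[0] = 0$, $\Sigma[0] = I$), and horizon are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T, var_a, var_v, L = 0.1, 1.0, 0.1, 200

F = np.array([[1.0, T], [0.0, 1.0]])
G = np.array([[0.0], [T]])
H = np.array([[1.0, 0.0]])
Q = np.array([[var_a]])
R = np.array([[var_v]])

# Simulate the true state and the noisy position observations.
x = np.zeros(2)                                    # assumed true initial state
X_true, Y = [], []
for _ in range(L):
    x = F @ x + G @ rng.normal(0.0, np.sqrt(var_a), size=1)
    X_true.append(x)
    Y.append(H @ x + rng.normal(0.0, np.sqrt(var_v), size=1))

# Kalman recursion (assumed prior m[0] = 0, Sigma[0] = I).
x_pred, P_pred = np.zeros(2), np.eye(2)
X_est = []
for y in Y:
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_est = x_pred + K @ (y - H @ x_pred)
    P_est = P_pred - K @ H @ P_pred
    X_est.append(x_est)
    x_pred, P_pred = F @ x_est, F @ P_est @ F.T + G @ Q @ G.T
```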

30 Example: One-Dimensional Motion

[Plots of position $x$ and velocity $v$ versus time for the true, predicted, and estimated states, along with the squared error. $T = 0.1$, $\sigma_a^2 = 1$, and $\sigma_v^2 = 0.1$ in this example.]

31 Example: One-Dimensional Motion — Kalman Gain

[Plots of the Kalman gain components (position and velocity) versus time. $T = 0.1$, $\sigma_a^2 = 1$, and $\sigma_v^2 = 0.1$ in this example.]

32 Example: One-Dimensional Motion — MSE

[Plot of mean squared error versus time for the predicted and estimated states. $T = 0.1$, $\sigma_a^2 = 1$, $\sigma_v^2 = 0.1$; averaged over 5000 runs.]

33 Computational Requirements: Batch MMSE Estimation

Suppose we've received observations $Y[0], \ldots, Y[l]$ and we wish to estimate the state $X[l]$ from these observations. If we just did this as a batch operation, what is the MMSE estimate of $X[l]$?
$$\hat{X}[l|l] = E\{X[l] \mid Y[0], \ldots, Y[l]\} = E\{X[l]\} + \mathrm{cov}\{X[l], Y_0^l\}\left[\mathrm{cov}\{Y_0^l, Y_0^l\}\right]^{-1}\left(Y_0^l - E\{Y_0^l\}\right)$$
where $Y_0^l := [Y^\top[0], \ldots, Y^\top[l]]^\top$. Recall that, for each $l$, $Y[l] \in \mathbb{R}^k$. What are the dimensions of the matrix inverse that we have to compute here? (It is $(l+1)k \times (l+1)k$, and it grows with every new observation.)

34 Computational Requirements: Kalman Filter

Note that the Kalman filter recursion also has a matrix inverse:
$$K[l] = \Sigma[l|l-1]H^\top[l]\left(H[l]\Sigma[l|l-1]H^\top[l] + R[l]\right)^{-1}$$
What are the dimensions of the matrix inverse that we have to compute here? (It is only $k \times k$, independent of $l$.)

35 Computational Requirements: Kalman Filter

Note that the error covariance matrix and Kalman gain updates
$$K[l] = \Sigma[l|l-1]H^\top[l]\left(H[l]\Sigma[l|l-1]H^\top[l] + R[l]\right)^{-1}$$
$$\Sigma[l|l] = \Sigma[l|l-1] - K[l]H[l]\Sigma[l|l-1]$$
$$\Sigma[l+1|l] = F[l]\Sigma[l|l]F^\top[l] + G[l]Q[l]G^\top[l]$$
do not depend on the observation. This means that, if $F[l]$, $G[l]$, $H[l]$, $Q[l]$, and $R[l]$ are known in advance, e.g. they are time-invariant, the error covariance matrices (prediction and estimation) as well as the Kalman gain can be computed in advance. Pre-computation of $K[l]$, $\Sigma[l|l]$, and $\Sigma[l+1|l]$ makes the real-time computational requirements of the Kalman filter very modest:
State estimate: one matrix-vector product and one vector addition.
State prediction: one matrix-vector product.
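A sketch of this offline/online split (time-invariant matrices assumed; the function name is ours):

```python
import numpy as np

def precompute_gains(F, G, H, Q, R, Sigma0, L):
    """Compute K[l], Sigma[l|l], Sigma[l+1|l] for l = 0,...,L-1 in advance;
    none of these quantities depend on the observations."""
    P_pred = Sigma0.copy()
    gains, P_ests, P_preds = [], [], []
    for _ in range(L):
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        P_est = P_pred - K @ H @ P_pred
        gains.append(K)
        P_ests.append(P_est)
        P_pred = F @ P_est @ F.T + G @ Q @ G.T
        P_preds.append(P_pred)
    return gains, P_ests, P_preds

# Online, each step then needs only matrix-vector products and additions:
#   x_est  = x_pred + gains[l] @ (y - H @ x_pred)
#   x_pred = F @ x_est
```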

36 Static State Estimation with the Kalman Filter

Consider the special case $F[l] \equiv I$ and $G[l] \equiv 0$. In this case, we have a static parameter estimation problem since the state does not change over time. It should be clear that $X[l] \equiv X[0] = X$. What does the Kalman filter do in this scenario? Let's look at the recursion replacing $F[l] \equiv I$ and $G[l] \equiv 0$ (for simplicity, we also assume that $H$ and $R$ are time-invariant).
$$K[l] = \Sigma[l|l-1]H^\top\left(H\Sigma[l|l-1]H^\top + R\right)^{-1}$$
$$\hat{X}[l|l] = \hat{X}[l|l-1] + K[l]\left(Y[l] - H\hat{X}[l|l-1]\right)$$
$$\Sigma[l|l] = \Sigma[l|l-1] - K[l]H\Sigma[l|l-1]$$
$$\hat{X}[l+1|l] = \hat{X}[l|l]$$
$$\Sigma[l+1|l] = \Sigma[l|l]$$
Both predictions are equal to the last estimates (this should make sense).

37 Static State Estimation with the Kalman Filter

Since the predictions aren't necessary anymore, the notation and the recursion can be simplified to
$$K[l] = \Sigma[l-1]H^\top\left(H\Sigma[l-1]H^\top + R\right)^{-1}$$
$$\hat{X}[l] = \hat{X}[l-1] + K[l]\left(Y[l] - H\hat{X}[l-1]\right)$$
$$\Sigma[l] = \Sigma[l-1] - K[l]H\Sigma[l-1]$$
What is the Kalman filter doing here? It is performing sequential MMSE estimation of the static parameter $X$. That is, given observations $Y[0], Y[1], \ldots$, the Kalman filter is a computationally efficient way to sequentially refine the MMSE estimate of an unknown static parameter (state) with each new observation.
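A minimal sketch of this simplified recursion for sequential MMSE estimation of a static parameter (function name ours; time-invariant $H$ and $R$ as on the slide):

```python
import numpy as np

def sequential_mmse(Y, H, R, m0, Sigma0):
    """Sequentially refine the MMSE estimate of a static parameter X
    from observations Y[0], Y[1], ..."""
    x, P = m0.copy(), Sigma0.copy()
    for y in Y:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # K[l]
        x = x + K @ (y - H @ x)             # X_hat[l]
        P = P - K @ H @ P                   # Sigma[l]
    return x, P
```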

38 Example: Sequential MMSE Estimation of a Static Parameter

[Plots of the components of the true and estimated state $X$ and the squared error versus time. $\sigma_v^2 = 0.1$ and $H = I$ in this example.]

39 Example: Sequential MMSE Estimation of a Static Parameter

[Plot of mean squared error versus time for the estimated state. $\sigma_v^2 = 0.1$ and $H = I$; averaged over 5000 runs.]

40 Extension 1: Non-White Input Sequence

Suppose the input sequence to our discrete-time dynamical system model was zero-mean and wide-sense stationary, but not white, i.e.
$$\mathrm{cov}\{U[k], U[l]\} = Q[k-l].$$
This causes the state of the system to no longer be a Markov sequence, and a lot of our earlier analysis is not going to be valid anymore. What can we do? The basic idea is to write an equivalent discrete-time dynamical system model with a white input signal and an augmented state. Then the usual Kalman filter can be applied.

41 Non-White Input Sequence Example

Example: Suppose we have our one-dimensional motion model again, but this time the Gaussian acceleration input has autocorrelation
$$Q[k-l] = \mathrm{cov}\{a[k], a[l]\} = \sigma_a^2 \alpha^{|k-l|}$$
for $\alpha \in (0, 1)$. Note that this covariance can be achieved by the difference equation
$$a[l+1] = \alpha a[l] + W[l]$$
where $W[l]$ is a zero-mean stationary white Gaussian sequence with $\mathrm{var}\{W[l]\} = \sigma_w^2 = (1 - \alpha^2)\sigma_a^2$. The new system has input $U[l] = W[l]$ and augmented state $X[l] = [x[l], v[l], a[l]]^\top$. What do the matrices $F$ and $G$ look like?
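One consistent answer, sketched in NumPy under the assumption that the acceleration enters the velocity update exactly as in the original model; the numerical values of $T$, $\alpha$, and $\sigma_a^2$ are illustrative.

```python
import numpy as np

T, alpha, var_a = 0.1, 0.9, 1.0                 # illustrative values
var_w = (1.0 - alpha**2) * var_a                # variance of the white input W[l]

# Augmented state X = [x, v, a]', white input U[l] = W[l]:
#   x[l+1] = x[l] + T v[l],  v[l+1] = v[l] + T a[l],  a[l+1] = alpha a[l] + W[l]
F = np.array([[1.0, T,   0.0],
              [0.0, 1.0, T  ],
              [0.0, 0.0, alpha]])
G = np.array([[0.0],
              [0.0],
              [1.0]])
Q = np.array([[var_w]])                         # cov{U[l]}
```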

42 Extension 2: Deterministic Input Component

Sometimes the state update equation for the discrete-time dynamical model is specified as
$$X[l+1] = F[l]X[l] + G[l]U[l] + s[l]$$
where $s[l]$ is a known sequence. How does this affect the Kalman filter recursion? Intuitively, the presence of a known sequence in the state update equation shouldn't degrade the performance of the MMSE estimator. It is not too difficult to show that the only change to the Kalman filter recursion is in the prediction step:
$$\hat{X}[l+1|l] = F[l]\hat{X}[l|l] + s[l]$$
The error covariance matrix of the prediction is unchanged. Why? Hence the expressions for the Kalman gain, the state estimate, and the error covariance matrix of the estimate remain the same. The details are left as an exercise (Poor, Chapter V.2).
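A tiny sketch of the modified prediction step (function name ours; every other step of the recursion is unchanged):

```python
import numpy as np

def predict_with_known_input(F, G, Q, x_est, P_est, s_l):
    """Prediction step when the state update contains a known sequence s[l]."""
    x_pred = F @ x_est + s_l                    # X_hat[l+1|l] = F[l] X_hat[l|l] + s[l]
    P_pred = F @ P_est @ F.T + G @ Q @ G.T      # Sigma[l+1|l] is unchanged
    return x_pred, P_pred
```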

43 Extension 3: Correlated Input and Measurement Noise

Suppose now that the current input to the dynamical system and the current measurement noise are correlated, i.e.
$$\mathrm{cov}\{U[k], V[l]\} = \begin{cases} C & k = l \\ 0 & \text{otherwise.} \end{cases}$$
Again, a lot of our earlier analysis is not going to be valid anymore. What can we do? The basic idea here is to transform the dynamical system into an equivalent dynamical system with uncorrelated input and measurement noise sequences. After deriving the new white input sequence ($\tilde{U}[k]$, uncorrelated with $V[k]$ and with covariance $Q - CR^{-1}C^\top$), you will end up with a new state update equation
$$X[l+1] = (F - CR^{-1}H)X[l] + G\tilde{U}[k] + CR^{-1}s[k]$$
where $CR^{-1}s[k]$ is a known input (see Extension 2). The details are left as an exercise (Poor, Chapter V.4).

44 Types of State Prediction

Fixed interval
Fixed point
Fixed lead

45 Fixed Interval Prediction

Given observations $Y[0], \ldots, Y[L-1]$, we want to predict the states $X[L+j]$ for $j = 0, 1, \ldots$ in a computationally efficient manner. Earlier, we derived a general expression for MMSE prediction of the state $X[n+1]$ given observations $Y[0], \ldots, Y[l]$:
$$\hat{X}[n+1|l] = F_n^l \hat{X}[l|l] \qquad (1)$$
where
$$F_n^t := \begin{cases} F[n]F[n-1]\cdots F[t] & t \le n \\ I & t > n. \end{cases}$$
This result can be written recursively for our problem as
$$\hat{X}[L+j|L-1] = F_{L+j-1}^{L-1}\hat{X}[L-1|L-1] = F[L+j-1]\hat{X}[L+j-1|L-1].$$
The associated error covariance matrix recursion can be computed similarly using our earlier results.

46 Fixed Point Prediction

Given observations $Y[0], \ldots, Y[l]$, we want to predict the state $X[L]$ in a computationally efficient manner for $l = 0, \ldots, L-1$. We can use the standard Kalman recursion to rewrite (1) as
$$\hat{X}[L|l] = F_{L-1}^l \hat{X}[l|l]$$
$$= F_{L-1}^l\left\{\hat{X}[l|l-1] + K[l]\tilde{Y}[l|l-1]\right\}$$
$$= F_{L-1}^l\left\{F[l-1]\hat{X}[l-1|l-1] + K[l]\tilde{Y}[l|l-1]\right\}$$
$$= F_{L-1}^{l-1}\hat{X}[l-1|l-1] + F_{L-1}^l K[l]\tilde{Y}[l|l-1]$$
$$= \hat{X}[L|l-1] + F_{L-1}^l K[l]\tilde{Y}[l|l-1]$$
which gives a recursion for $\hat{X}[L|l]$ in $l$. Note that it is assumed that $K[l]$ and the innovations $\tilde{Y}[l|l-1]$ were computed as part of the standard Kalman filter recursion. A similar procedure can be followed to derive the recursion for the ECM.

47 Fixed Lead Prediction

Given observations $Y[0], \ldots, Y[l]$, we want to estimate the state $X[l+L]$ in a computationally efficient manner for $l = 0, 1, \ldots$. For a fixed lead of $L$ samples, the recursion can be derived as
$$\hat{X}[l+L|l] = F[l+L-1]\hat{X}[l+L-1|l-1] + F_{l+L-1}^l K[l]\tilde{Y}[l|l-1].$$
The error covariance matrix recursion can be computed by propagating the ECM from time $l+1$ to time $l+L$ using the standard Kalman filter ECM recursion
$$\Sigma[l+k|l] = F[l+k-1]\Sigma[l+k-1|l]F^\top[l+k-1] + G[l+k-1]Q[l+k-1]G^\top[l+k-1]$$
starting at $k = 1$ and recursing up to $k = L$.

48 Types of State Smoothing

Fixed interval
Fixed point
Fixed lag

49 Fixed Interval Smoothing

Given observations $Y[0], \ldots, Y[L-1]$, we want to estimate (smooth) the state $X[l]$ for $l < L-1$ in a computationally efficient manner. We assume that the standard Kalman estimates $\hat{X}[l|l]$ and predictions $\hat{X}[l|l-1]$ and the associated error covariance matrices for $l = 0, 1, \ldots, L-1$ have already been computed, but now we want to go back and smooth our earlier estimates in light of the full set of observations $Y[0], \ldots, Y[L-1]$.

The (backward) recursion then is to specify $\hat{X}[l|L-1]$ in terms of $\hat{X}[l+1|L-1]$ for $l = L-2, L-3, \ldots, 0$. We will also need a backward recursion for the error covariance matrix. The main idea behind the derivation:
1. Set up the joint pdf of the states $X[l]$ and $X[l+1]$, conditioned on the observations $Y[0], \ldots, Y[L-1]$. This conditional pdf will, of course, be Gaussian.
2. Write an expression for the conditional mean at time $l$ in terms of the conditional mean at time $l+1$ (backward recursion).

50 Fixed Interval Smoothing: The Main Results

The backward state recursion can be derived as
$$\hat{X}[l|L-1] = \hat{X}[l|l] + C[l]\underbrace{\left\{\hat{X}[l+1|L-1] - \hat{X}[l+1|l]\right\}}_{\text{smoothing residual}}$$
where $C[l] = \Sigma[l|l]F^\top[l]\Sigma^{-1}[l+1|l]$ is the smoother gain. The backward recursion for the smoothed state error covariance matrix can also be derived as
$$\Sigma[l|L-1] = \Sigma[l|l] + C[l]\left\{\Sigma[l+1|L-1] - \Sigma[l+1|l]\right\}C^\top[l].$$
The backward recursions for other types of smoothing can also be derived similarly.
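A sketch of this backward pass (often called Rauch-Tung-Striebel smoothing); the function name and list-based bookkeeping are ours. The inputs are the forward Kalman quantities for $l = 0, \ldots, L-1$.

```python
import numpy as np

def fixed_interval_smooth(x_est, P_est, x_pred, P_pred, F_list):
    """Backward smoothing pass.
    x_est[l]  = X_hat[l|l],   P_est[l]  = Sigma[l|l],
    x_pred[l] = X_hat[l+1|l], P_pred[l] = Sigma[l+1|l], F_list[l] = F[l]."""
    L = len(x_est)
    x_sm, P_sm = [None] * L, [None] * L
    x_sm[-1], P_sm[-1] = x_est[-1], P_est[-1]      # smoothing starts from l = L-1
    for l in range(L - 2, -1, -1):
        C = P_est[l] @ F_list[l].T @ np.linalg.inv(P_pred[l])   # smoother gain C[l]
        x_sm[l] = x_est[l] + C @ (x_sm[l + 1] - x_pred[l])
        P_sm[l] = P_est[l] + C @ (P_sm[l + 1] - P_pred[l]) @ C.T
    return x_sm, P_sm
```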

51 Conclusions

The discrete-time Kalman-Bucy filter is often called the "workhorse of estimation". It is computationally efficient, with no loss of optimality in the linear Gaussian case.

We will see that the discrete-time Kalman-Bucy filter is also optimum in dynamic (or static) MMSE parameter estimation problems among the class of linear MMSE estimators even if the noise is non-Gaussian.

Even more extensions:
Continuous-time Kalman-Bucy filter.
Extended Kalman filter (nonlinear but differentiable state update, input, and output functions).
Unscented Kalman filter (also allows for nonlinear functions; better convergence properties than the EKF in some cases).
etc.
