A Crash Course on Kalman Filtering


1 A Crash Course on Kalman Filtering
Dan Simon, Cleveland State University, Fall

2 Outline
Linear Systems
Probability
State Means and Covariances
Least Squares Estimation
The Kalman Filter
Unknown Input Estimation
The Extended Kalman Filter

3 Linear Systems
KVL: $u = v_c + L\,di_L/dt$
KCL: $i_c + i_{r1} = i_{r2} + i_L$
$i_c = C\,dv_c/dt$, $\quad i_{r1} = v_c/R$, $\quad i_{r2} = (u - v_c)/R$
$\Rightarrow dv_c/dt = -2v_c/(RC) + i_L/C + u/(RC)$
Define $x^T = [\, v_c \;\; i_L \,]$:
$\dot{x} = \begin{bmatrix} -2/(RC) & 1/C \\ -1/L & 0 \end{bmatrix} x + \begin{bmatrix} 1/(RC) \\ 1/L \end{bmatrix} u = Ax + Bu$
What is the output? Suppose $y = v_{r2} = L\,di_L/dt = \begin{bmatrix} -1 & 0 \end{bmatrix} x + u = Cx + Du$

4 Linear Systems
Let $R = C = L = 1$ and $u(t)$ = step function. How can we simulate the system in Matlab?
Control System Toolbox
Simulink
m-file

5 Linear Systems: Continuous-Time Simulation
Control System Toolbox:
    R = 1; L = 1; C = 1;
    A = [-2/R/C, 1/C ; -1/L, 0];
    B = [1/R/C ; 1/L];
    C = [-1, 0];      % C is reused here as the output matrix
    D = 1;
    sys = ss(A, B, C, D);
    step(sys)
Simulink: LinearRLC1Model.slx
m-file: LinearRLC1.m
What time step should we use?

6 Linear Systems: Discretization
Continuous time: $\dot{x} = Ax + Bu$, $\quad y = Cx + Du$
Discrete time: $x_{k+1} = Fx_k + Gu_k$, $\quad y_k = Cx_k + Du_k$
$F = \exp(A\,\Delta t)$
$G = (F - I)A^{-1}B$
where $\Delta t$ is the integration step size and $I$ is the identity matrix.

7 Linear Systems: Discrete-Time Simulation
Control System Toolbox:
    dt = 0.1;
    F = expm(A*dt);
    G = (F - eye(2)) / A * B;
    sysdiscrete = ss(F, G, C, D, dt);
    step(sysdiscrete)
Simulink: LinearRLC1DiscreteModel.slx
m-file: LinearRLC1Discrete.m

8 Outline
Linear Systems
Probability
State Means and Covariances
Least Squares Estimation
The Kalman Filter
Unknown Input Estimation
The Extended Kalman Filter

9 Cumulative Distribution Function
$X$ = random variable
CDF: $F_X(x) = P(X \le x)$
$F_X(x) \in [0, 1]$
$F_X(-\infty) = 0$, $\quad F_X(\infty) = 1$
$F_X(a) \le F_X(b)$ if $a \le b$
$P(a < X \le b) = F_X(b) - F_X(a)$

10 Probability Density Function
PDF: $f_X(x) = dF_X(x)/dx$
$F_X(x) = \int_{-\infty}^{x} f_X(z)\,dz$
$f_X(x) \ge 0$, $\quad \int_{-\infty}^{\infty} f_X(x)\,dx = 1$
$P(a < X \le b) = \int_a^b f_X(x)\,dx$
Expected value: $E[g(X)] = \int_{-\infty}^{\infty} g(x) f_X(x)\,dx$
$E(X) = \bar{x} = \mu_x$ = mean
$E[(X - \bar{x})^2] = \sigma_x^2$ = variance, $\quad \sigma_x$ = standard deviation

11 Probability
Random numbers in Matlab: rand and randn
Random number seed
How can we create a random vector with given covariance R? (See the sketch below.)
Probability Density Functions:
Uniform Distribution
Gaussian, or Normal, Distribution
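One standard construction (a sketch I am adding, not from the slides; the covariance values are made up): factor $R$ with a Cholesky decomposition, since if $z \sim N(0, I)$ and $x = Sz$ with $SS^T = R$, then $E[xx^T] = SS^T = R$.

    R = [2, 0.5 ; 0.5, 1];    % desired covariance (example values)
    S = chol(R, 'lower');     % lower-triangular factor, S*S' = R
    x = S * randn(2, 1e5);    % each column is one zero-mean random vector
    cov(x')                   % sample covariance; should be close to R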

12 Multiple Random Variables
CDF: $F_{XY}(x, y) = P(X \le x, Y \le y)$
PDF: $f_{XY}(x, y) = \partial^2 F_{XY}(x, y) / \partial x \partial y$
Independence: $P(X \le x, Y \le y) = P(X \le x)P(Y \le y)$ for all $x, y$
Covariance: $C_{XY} = E[(X - \bar{X})(Y - \bar{Y})] = E(XY) - \bar{X}\bar{Y}$
Correlation: $R_{XY} = E(XY)$

13 Random Vectors
$X = [\, X_1 \;\; X_2 \,]^T$
CDF: $F_X(x) = P(X_1 \le x_1, X_2 \le x_2)$
PDF: $f_X(x) = \partial^2 F_X(x) / \partial x_1 \partial x_2$
Autocorrelation: $R_X = E[XX^T] > 0$
Autocovariance: $C_X = E[(X - \bar{X})(X - \bar{X})^T] > 0$
Gaussian RV: $f_X(x) = \dfrac{1}{(2\pi)^{n/2} |C_X|^{1/2}} \exp\left[ -\dfrac{1}{2}(x - \bar{x})^T C_X^{-1} (x - \bar{x}) \right]$
If $Y = AX + b$, then $Y \sim N(A\bar{x} + b,\; A C_X A^T)$

14 Stochastic Processes
A stochastic process $X(t)$ is an RV that varies with time.
If $X(t_1)$ and $X(t_2)$ are independent for all $t_1 \ne t_2$, then $X(t)$ is white.
Otherwise, $X(t)$ is colored.
Examples:
The high temperature on a given day
The closing price of the stock market
Measurement noise in a voltmeter
The amount of sleep you get each night

15 Outline
Linear Systems
Probability
State Means and Covariances
Least Squares Estimation
The Kalman Filter
Unknown Input Estimation
The Extended Kalman Filter

16 State Means and Covariances
$x_k = F_{k-1} x_{k-1} + G_{k-1} u_{k-1} + w_{k-1}$, $\quad w_k \sim (0, Q_k)$
$\bar{x}_k = E(x_k) = F_{k-1} \bar{x}_{k-1} + G_{k-1} u_{k-1}$
$(x_k - \bar{x}_k)(\cdot)^T = (F_{k-1} x_{k-1} + G_{k-1} u_{k-1} + w_{k-1} - \bar{x}_k)(\cdot)^T$
$= [F_{k-1}(x_{k-1} - \bar{x}_{k-1}) + w_{k-1}][\cdot]^T$
$= F_{k-1}(x_{k-1} - \bar{x}_{k-1})(x_{k-1} - \bar{x}_{k-1})^T F_{k-1}^T + w_{k-1} w_{k-1}^T + F_{k-1}(x_{k-1} - \bar{x}_{k-1}) w_{k-1}^T + w_{k-1}(x_{k-1} - \bar{x}_{k-1})^T F_{k-1}^T$
$P_k = E[(x_k - \bar{x}_k)(\cdot)^T] = F_{k-1} P_{k-1} F_{k-1}^T + Q_{k-1}$
This is the discrete-time Lyapunov equation, or Stein equation.
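As a quick illustration (my sketch, with made-up F, G, and Q values, not from the slides), the mean and covariance recursions can be propagated directly:

    F = [1, 0.1 ; 0, 1]; G = [0.005 ; 0.1]; Q = 0.01*eye(2);
    xbar = [0 ; 0]; P = eye(2); u = 1;     % initial mean and covariance
    for k = 1 : 100
        xbar = F*xbar + G*u;               % mean update
        P = F*P*F' + Q;                    % Lyapunov (Stein) recursion
    end

For a stable F, the covariance recursion converges to the steady-state solution of $P = FPF^T + Q$.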

17 Outline
Linear Systems
Probability
State Means and Covariances
Least Squares Estimation
The Kalman Filter
Unknown Input Estimation
The Extended Kalman Filter

18 Least Squares Estimation
Suppose $x$ is a constant vector.
Vector measurement at time $k$: $y_k = H_k x + v_k$, $\quad v_k \sim (0, R_k)$
Estimate: $\hat{x}_k = \hat{x}_{k-1} + K_k(y_k - H_k \hat{x}_{k-1})$
This is a recursive estimator. (re·cur·sive: adjective, meaning recursive)
Our goal: find the best estimator gain $K_k$

19 Least Squares Estimation
What is the mean of the estimation error?
$E(\epsilon_{x,k}) = E(x - \hat{x}_k)$
$= E[x - \hat{x}_{k-1} - K_k(y_k - H_k \hat{x}_{k-1})]$
$= E[\epsilon_{x,k-1} - K_k(H_k x + v_k - H_k \hat{x}_{k-1})]$
$= E[\epsilon_{x,k-1} - K_k H_k (x - \hat{x}_{k-1}) - K_k v_k]$
$= (I - K_k H_k) E(\epsilon_{x,k-1}) - K_k E(v_k)$
$E(\epsilon_{x,k}) = 0$ if $E(v_k) = 0$ and $E(\epsilon_{x,k-1}) = 0$, regardless of $K_k$
Unbiased estimator

20 Least Squares Estimation
Objective function:
$J_k = E[(x_1 - \hat{x}_1)^2] + \cdots + E[(x_n - \hat{x}_n)^2]$
$= E(\epsilon_{x_1,k}^2 + \cdots + \epsilon_{x_n,k}^2)$
$= E(\epsilon_{x,k}^T \epsilon_{x,k})$
$= E[\mathrm{Tr}(\epsilon_{x,k} \epsilon_{x,k}^T)]$
$= \mathrm{Tr}\, P_k$

21 Least Squares Estimation
$P_k = E(\epsilon_{x,k} \epsilon_{x,k}^T)$
$= E\{[(I - K_k H_k)\epsilon_{x,k-1} - K_k v_k][\cdot]^T\}$
$= (I - K_k H_k) E(\epsilon_{x,k-1} \epsilon_{x,k-1}^T)(I - K_k H_k)^T - K_k E(v_k \epsilon_{x,k-1}^T)(I - K_k H_k)^T - (I - K_k H_k) E(\epsilon_{x,k-1} v_k^T) K_k^T + K_k E(v_k v_k^T) K_k^T$
$= (I - K_k H_k) P_{k-1} (I - K_k H_k)^T + K_k R_k K_k^T$

22 Least Squares Estimation
Recall that $\partial\,\mathrm{Tr}(ABA^T)/\partial A = 2AB$ if $B$ is symmetric.
$\partial J_k / \partial K_k = 2(I - K_k H_k) P_{k-1} (-H_k^T) + 2 K_k R_k = 0$
$K_k R_k = (I - K_k H_k) P_{k-1} H_k^T$
$K_k (R_k + H_k P_{k-1} H_k^T) = P_{k-1} H_k^T$
$K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1}$

23 Recursive least squares estimation of a constant
1. Initialization: $\hat{x}_0 = E(x)$, $\quad P_0 = E[(x - \hat{x}_0)(x - \hat{x}_0)^T]$
If no knowledge about $x$ is available before measurements are taken, then $P_0 = \infty I$.
If perfect knowledge about $x$ is available before measurements are taken, then $P_0 = 0$.
2. For $k = 1, 2, \ldots$, perform the following.
(a) Obtain measurement $y_k$: $y_k = H_k x + v_k$, where $v_k \sim (0, R_k)$ and $E(v_i v_k^T) = R_k \delta_{k-i}$ (white noise)
(b) Measurement update of estimate:
$K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1}$
$\hat{x}_k = \hat{x}_{k-1} + K_k (y_k - H_k \hat{x}_{k-1})$
$P_k = (I - K_k H_k) P_{k-1} (I - K_k H_k)^T + K_k R_k K_k^T$
A Matlab sketch of this recursion follows below.
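This is my illustration with made-up values for $H_k$, $R_k$, and the true $x$; the course file RLS.m may differ in its details:

    n = 2; x = [1 ; -2];               % true constant vector to estimate
    xhat = zeros(n, 1);                % xhat_0
    P = 1e6 * eye(n);                  % large P0: little prior knowledge
    R = 0.1;                           % measurement noise variance
    for k = 1 : 100
        H = randn(1, n);               % measurement matrix at time k
        y = H*x + sqrt(R)*randn;       % scalar noisy measurement
        K = P*H' / (H*P*H' + R);       % estimator gain
        xhat = xhat + K*(y - H*xhat);  % measurement update of estimate
        P = (eye(n) - K*H)*P*(eye(n) - K*H)' + K*R*K';  % Joseph form
    end
    xhat                               % should approach the true x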

24 Alternate Estimator Equations
$K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1} = P_k H_k^T R_k^{-1}$
$P_k = (I - K_k H_k) P_{k-1} (I - K_k H_k)^T + K_k R_k K_k^T$
$\quad = (P_{k-1}^{-1} + H_k^T R_k^{-1} H_k)^{-1}$
$\quad = (I - K_k H_k) P_{k-1}$ (valid only for optimal $K_k$)
Example: RLS.m

25 Outline
Linear Systems
Probability
State Means and Covariances
Least Squares Estimation
The Kalman Filter
Unknown Input Estimation
The Extended Kalman Filter

26 The Kalman filter
$x_k = F_{k-1} x_{k-1} + G_{k-1} u_{k-1} + w_{k-1}$
$y_k = H_k x_k + v_k$
$w_k \sim (0, Q_k)$, $\quad v_k \sim (0, R_k)$
$E[w_k w_j^T] = Q_k \delta_{k-j}$, $\quad E[v_k v_j^T] = R_k \delta_{k-j}$, $\quad E[v_k w_j^T] = 0$
$\hat{x}_k^+ = E[x_k \mid y_1, y_2, \ldots, y_k]$ = a posteriori estimate
$P_k^+ = E[(x_k - \hat{x}_k^+)(x_k - \hat{x}_k^+)^T]$
$\hat{x}_k^- = E[x_k \mid y_1, y_2, \ldots, y_{k-1}]$ = a priori estimate
$P_k^- = E[(x_k - \hat{x}_k^-)(x_k - \hat{x}_k^-)^T]$
$\hat{x}_{k|k+n} = E[x_k \mid y_1, y_2, \ldots, y_k, \ldots, y_{k+n}]$ = smoothed estimate
$\hat{x}_{k|k-M} = E[x_k \mid y_1, y_2, \ldots, y_{k-M}]$ = predicted estimate

27 Time Update Equations
Initialization: $\hat{x}_0^+ = E(x_0)$
$\hat{x}_1^- = F_0 \hat{x}_0^+ + G_0 u_0$
$P_1^- = F_0 P_0^+ F_0^T + Q_0$
Generalize:
$\hat{x}_k^- = F_{k-1} \hat{x}_{k-1}^+ + G_{k-1} u_{k-1}$
$P_k^- = F_{k-1} P_{k-1}^+ F_{k-1}^T + Q_{k-1}$
These are the Kalman filter time update equations.

28 Measurement Update Equations
Recall the RLS estimate of a constant $x$:
$K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1}$
$\hat{x}_k = \hat{x}_{k-1} + K_k (y_k - H_k \hat{x}_{k-1})$
$P_k = (I - K_k H_k) P_{k-1} (I - K_k H_k)^T + K_k R_k K_k^T$
$\hat{x}_{k-1}, P_{k-1}$ = estimate and covariance before measurement $y_k$
$\hat{x}_k, P_k$ = estimate and covariance after measurement $y_k$
Least squares estimator → Kalman filter:
$\hat{x}_{k-1}$ = estimate before $y_k$ → $\hat{x}_k^-$ = a priori estimate
$P_{k-1}$ = covariance before $y_k$ → $P_k^-$ = a priori covariance
$\hat{x}_k$ = estimate after $y_k$ → $\hat{x}_k^+$ = a posteriori estimate
$P_k$ = covariance after $y_k$ → $P_k^+$ = a posteriori covariance

29 Measurement Update Equations
Recursive Least Squares:
$K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1}$
$\hat{x}_k = \hat{x}_{k-1} + K_k (y_k - H_k \hat{x}_{k-1})$
$P_k = (I - K_k H_k) P_{k-1} (I - K_k H_k)^T + K_k R_k K_k^T$
Kalman Filter:
$K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}$
$\hat{x}_k^+ = \hat{x}_k^- + K_k (y_k - H_k \hat{x}_k^-)$
$P_k^+ = (I - K_k H_k) P_k^- (I - K_k H_k)^T + K_k R_k K_k^T$
These are the Kalman filter measurement update equations.

30 Kalman Filter Equations
1. State equations:
$x_k = F_{k-1} x_{k-1} + G_{k-1} u_{k-1} + w_{k-1}$
$y_k = H_k x_k + v_k$
$E(w_k w_j^T) = Q_k \delta_{k-j}$, $\quad E(v_k v_j^T) = R_k \delta_{k-j}$, $\quad E(w_k v_j^T) = 0$
2. Initialization: $\hat{x}_0^+ = E(x_0)$, $\quad P_0^+ = E[(x_0 - \hat{x}_0^+)(x_0 - \hat{x}_0^+)^T]$
3. For each time step $k = 1, 2, \ldots$:
$P_k^- = F_{k-1} P_{k-1}^+ F_{k-1}^T + Q_{k-1}$
$K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1} = P_k^+ H_k^T R_k^{-1}$
$\hat{x}_k^- = F_{k-1} \hat{x}_{k-1}^+ + G_{k-1} u_{k-1}$ = a priori state estimate
$\hat{x}_k^+ = \hat{x}_k^- + K_k (y_k - H_k \hat{x}_k^-)$ = a posteriori state estimate
$P_k^+ = (I - K_k H_k) P_k^- (I - K_k H_k)^T + K_k R_k K_k^T = [(P_k^-)^{-1} + H_k^T R_k^{-1} H_k]^{-1} = (I - K_k H_k) P_k^-$
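A one-step Matlab sketch of the loop above (my illustration; the system matrices and the measurement value are made up):

    F = [1, 0.1 ; 0, 1]; G = [0 ; 0.1]; H = [1, 0];
    Q = 0.01*eye(2); R = 0.5;
    xhat = [0 ; 0]; P = eye(2);        % xhat_0^+ and P_0^+
    u = 1; y = 0.3;                    % one input and one measurement
    P = F*P*F' + Q;                    % a priori covariance P_k^-
    xhat = F*xhat + G*u;               % a priori estimate xhat_k^-
    K = P*H' / (H*P*H' + R);           % Kalman gain K_k
    xhat = xhat + K*(y - H*xhat);      % a posteriori estimate xhat_k^+
    P = (eye(2) - K*H)*P*(eye(2) - K*H)' + K*R*K';  % a posteriori P_k^+

In a real filter these five update lines run once per time step, inside a loop over the measurements.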

31 Kalman Filter Properties
Define estimation error $\tilde{x}_k = x_k - \hat{x}_k$
Problem: $\min E[\tilde{x}_k^T S_k \tilde{x}_k]$, where $S_k > 0$
If $\{w_k\}$ and $\{v_k\}$ are Gaussian, zero-mean, uncorrelated, and white, then the Kalman filter solves the problem.
If $\{w_k\}$ and $\{v_k\}$ are zero-mean, uncorrelated, and white, then the Kalman filter is the best linear solution to the problem.
If $\{w_k\}$ and $\{v_k\}$ are correlated or colored, then the Kalman filter can be easily modified to solve the problem.
For nonlinear systems, the Kalman filter can be modified to approximate the solution to the problem.

32 Kalman Filter Example: DiscreteKFEx1.m
$\begin{bmatrix} \dot{r} \\ \dot{v} \\ \dot{a} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} r \\ v \\ a \end{bmatrix} + w$, i.e., $\dot{x} = Ax + w$
$x_{k+1} = F x_k + w_k$, $\quad w_k \sim (0, Q_k)$
$F = \exp(AT) = \begin{bmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix}$
$\hat{x}_k^- = F \hat{x}_{k-1}^+$
$P_k^- = F P_{k-1}^+ F^T + Q_{k-1}$
$y_k = H_k x_k + v_k = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x_k + v_k$ (position measurement)
$v_k \sim (0, R_k)$, $\quad R_k = \sigma^2$
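The form of F can be checked numerically (my sketch; the value of T is arbitrary):

    T = 0.1;
    A = [0, 1, 0 ; 0, 0, 1 ; 0, 0, 0];
    F = expm(A*T)    % returns [1 T T^2/2 ; 0 1 T ; 0 0 1]

Because A is nilpotent ($A^3 = 0$), the matrix exponential series terminates and gives the closed form shown above.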

33 Kalman Filter Divergence
Kalman filter theory is based on several assumptions. How to improve filter performance in the real world:
Increase arithmetic precision
Square root filtering
Use a fading-memory Kalman filter
Use fictitious process noise
Use a more robust filter (e.g., H-infinity)

34 Modeling Errors
True System:
$x_{1,k+1} = x_{1,k} + x_{2,k}$
$x_{2,k+1} = x_{2,k}$
$y_k = x_{1,k} + v_k$, $\quad v_k \sim (0, 1)$
Incorrect Model:
$x_{1,k+1} = x_{1,k}$
$y_k = x_{1,k} + v_k$
$w_k \sim (0, Q)$, $\quad Q = 0$
$v_k \sim (0, 1)$

35 Modeling Errors
Figure: Kalman filter divergence due to mismodeling (true state vs. estimated state over time)

36 Fictitious Process Noise
Figure: Kalman filter improvement due to fictitious process noise (true state and estimates xhat for Q = 1, 0.1, 0.01, and 0, over time)

37 Fictitious Process Noise
Figure: Kalman gain over time for various values of process noise (Q = 1, 0.1, 0.01, 0)

38 The Continuous-Time Kalman Filter
$\dot{x} = Ax + Bu + w$, $\quad w \sim (0, Q_c)$
$y = Cx + v$, $\quad v \sim (0, R_c)$
Discretize:
$x_k = F x_{k-1} + G u_{k-1} + w_{k-1}$
$y_k = H x_k + v_k$
$F = \exp(AT) \approx I + AT$ for small $T$
$G = (\exp(AT) - I)A^{-1}B \approx BT$ for small $T$
$H = C$
$w_k \sim (0, Q)$, $\quad Q = Q_c T$
$v_k \sim (0, R)$, $\quad R = R_c / T$
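A small conversion sketch (my illustration with made-up matrices; note that the G formula requires an invertible A):

    A = [-1, 0.5 ; -0.2, -2]; B = [1 ; 0]; C = [1, 0];
    Qc = 0.1*eye(2); Rc = 0.04; T = 0.01;
    F = expm(A*T);               % approx. I + A*T for small T
    G = (F - eye(2)) / A * B;    % approx. B*T for small T
    H = C;
    Q = Qc*T;                    % discrete process noise covariance
    R = Rc/T;                    % discrete measurement noise covariance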

39 The Continuous-Time Kalman Filter
Recall the discrete-time Kalman gain:
$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$
$\quad = P_k^- C^T (C P_k^- C^T + R_c/T)^{-1}$
$\quad = P_k^- C^T (C P_k^- C^T T + R_c)^{-1} T$
$\lim_{T \to 0} \dfrac{K_k}{T} = P_k^- C^T R_c^{-1}$, i.e., $K_k \approx P_k^- C^T R_c^{-1} T$ for small $T$

40 The Continuous-Time Kalman Filter
Recall the discrete-time estimation error covariance equations:
$P_k^+ = (I - K_k H) P_k^-$
$P_{k+1}^- = F P_k^+ F^T + Q$
$\quad = (I + AT) P_k^+ (I + AT)^T + Q_c T$, for small $T$
$\quad = P_k^+ + (A P_k^+ + P_k^+ A^T + Q_c) T + A P_k^+ A^T T^2$
$\quad = (I - K_k C) P_k^- + A P_k^+ A^T T^2 + [A(I - K_k C) P_k^- + (I - K_k C) P_k^- A^T + Q_c] T$
$\dfrac{P_{k+1}^- - P_k^-}{T} = -\dfrac{K_k}{T} C P_k^- + A P_k^+ A^T T + (A P_k^- - A K_k C P_k^- + P_k^- A^T - K_k C P_k^- A^T + Q_c)$
$\dot{P} = \lim_{T \to 0} \dfrac{P_{k+1}^- - P_k^-}{T} = -P C^T R_c^{-1} C P + A P + P A^T + Q_c$

41 The Continuous-Time Kalman Filter
Recall the discrete-time state estimate equations:
$\hat{x}_k^- = F \hat{x}_{k-1}^+ + G u_{k-1}$
$\hat{x}_k^+ = \hat{x}_k^- + K_k (y_k - H \hat{x}_k^-)$
$\quad = F \hat{x}_{k-1}^+ + G u_{k-1} + K_k (y_k - H F \hat{x}_{k-1}^+ - H G u_{k-1})$
$\quad \approx (I + AT)\hat{x}_{k-1}^+ + BT u_{k-1} + K_k [y_k - C(I + AT)\hat{x}_{k-1}^+ - CBT u_{k-1}]$, for small $T$
$\quad = \hat{x}_{k-1}^+ + AT \hat{x}_{k-1}^+ + BT u_{k-1} + P C^T R_c^{-1} T (y_k - C \hat{x}_{k-1}^+ - CAT \hat{x}_{k-1}^+ - CBT u_{k-1})$
$\dot{\hat{x}} = \lim_{T \to 0} \dfrac{\hat{x}_k^+ - \hat{x}_{k-1}^+}{T} = A\hat{x} + Bu + P C^T R_c^{-1} (y - C\hat{x})$
$\dot{\hat{x}} = A\hat{x} + Bu + K(y - C\hat{x})$, $\quad K = P C^T R_c^{-1}$

42 The Continuous-Time Kalman Filter
Continuous-time system dynamics and measurement:
$\dot{x} = Ax + Bu + w$, $\quad w \sim (0, Q_c)$
$y = Cx + v$, $\quad v \sim (0, R_c)$
Continuous-time Kalman filter equations:
$\hat{x}(0) = E[x(0)]$
$P(0) = E[(x(0) - \hat{x}(0))(x(0) - \hat{x}(0))^T]$
$K = P C^T R_c^{-1}$
$\dot{\hat{x}} = A\hat{x} + Bu + K(y - C\hat{x})$
$\dot{P} = -P C^T R_c^{-1} C P + A P + P A^T + Q_c$
What if $y$ includes the input also? That is, $y = Cx + Du$?
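As an illustration (my sketch, with made-up system matrices), the $\hat{x}$ and $P$ differential equations can be integrated with simple Euler steps:

    A = [-1, 0.5 ; -0.2, -2]; B = [1 ; 0]; C = [1, 0];
    Qc = 0.1*eye(2); Rc = 0.04;
    dt = 1e-3; x = [1 ; -1]; xhat = [0 ; 0]; P = eye(2); u = 1;
    for k = 1 : 5000
        w = sqrtm(Qc/dt) * randn(2, 1);   % approximate continuous noise
        v = sqrt(Rc/dt) * randn;
        x = x + (A*x + B*u + w)*dt;       % true system
        y = C*x + v;                      % measurement
        K = P*C' / Rc;                    % K = P C' Rc^-1
        xhat = xhat + (A*xhat + B*u + K*(y - C*xhat))*dt;
        P = P + (A*P + P*A' + Qc - P*(C'/Rc)*C*P)*dt;  % Riccati ODE
    end

The noise samples are scaled by 1/sqrt(dt) so that the discretized noise sequences approximate continuous-time white noise with intensities Qc and Rc.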

43 Example: Regenerative Knee Prosthesis

44 System Equations

45 Input Torque M
D. Winter, Biomechanics and Motor Control of Human Movement, 4th Edition, Wiley, 2009, Appendix A
bcs.wiley.com/he-bcs/books?action=resource&bcsid=5453&itemid= &resourceid=

46 State Equations
$x_1 = \phi_k$, $\quad x_2 = \dot{\phi}_k$, $\quad x_3 = i_A$, $\quad x_4 = v_C$
$J_T = J_G + n^2 J_M$
$\dot{x} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -K/J_T & 0 & na/J_T & 0 \\ 0 & -an/L_A & -R_A/L_A & u/L_A \\ 0 & 0 & -u/C & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/J_T \\ 0 \\ 0 \end{bmatrix} M_K + w$
$y = [\;\;] x + v$
Matlab program: RegenerationKalman.m

47 Outline
Linear Systems
Probability
State Means and Covariances
Least Squares Estimation
The Kalman Filter
Unknown Input Estimation
The Extended Kalman Filter

48 Unknown Input Estimation
Continuous-time system dynamics and measurement:
$\dot{x} = Ax + Bu + f + w$
$y = Cx + v$
$w \sim (0, Q_c)$, $\quad v \sim (0, R_c)$
Consider $f$ as a state: $z = [\, x^T \;\; f^T \,]^T$
$\dot{z} = \begin{bmatrix} A & I \\ 0 & 0 \end{bmatrix} z + \begin{bmatrix} B \\ 0 \end{bmatrix} u + \begin{bmatrix} w \\ \tilde{w} \end{bmatrix}$
$y = \begin{bmatrix} C & 0 \end{bmatrix} z + v$
$\begin{bmatrix} w \\ \tilde{w} \end{bmatrix} \sim (0, Q)$, $\quad Q = \mathrm{diag}(Q_c, \tilde{Q})$
$v \sim (0, R)$, $\quad R = R_c$
$\tilde{w}$ is fictitious process noise, and $\tilde{Q}$ is a tuning parameter.
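The augmentation is mechanical to set up in Matlab (my sketch; the matrices and dimensions are made up):

    n = 2; m = 1;                           % state and input dimensions
    A = [-1, 0.5 ; -0.2, -2]; B = [1 ; 0]; C = [1, 0];
    Qc = 0.01*eye(n); Qtilde = eye(n);      % Qtilde is the tuning parameter
    Az = [A, eye(n) ; zeros(n), zeros(n)];  % augmented dynamics [A I; 0 0]
    Bz = [B ; zeros(n, m)];
    Cz = [C, zeros(1, n)];
    Qz = blkdiag(Qc, Qtilde);               % augmented process noise cov.

Running a Kalman filter on (Az, Bz, Cz) then produces estimates of both x and the unknown input f.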

49 Unknown Input Estimation: Rowing Machine
$\theta$ = position, $\quad \omega$ = velocity, $\quad q$ = capacitor charge
$k$ = spring constant, $\quad J$ = inertia, $\quad a$ = motor constant
$R$ = resistance, $\quad u$ = power converter ratio, $\quad C$ = capacitance
$r$ = radius, $\quad \phi$ = friction

50 Unknown Input Estimation: Rowing Machine
System model:
$\dot{\theta} = \omega$
$\dot{\omega} = -\dfrac{k}{J}\theta - \dfrac{a^2}{RJ}\omega + \dfrac{au}{RCJ}q + \dfrac{r}{J}[F - \phi(\theta, \omega)]$
$\dot{q} = \dfrac{au}{R}\omega - \dfrac{u^2}{RC}q$, $\quad \phi(\cdot, \cdot) = 0.12\,\mathrm{sign}(\omega)$
State space model, assuming $\omega > 0$, with $x = [\, \theta \;\; \omega \;\; q \;\; F \,]^T$:
$\dot{x} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -k/J & -a^2/RJ & au/RCJ & r/J \\ 0 & au/R & -u^2/RC & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} x + w$
$w \sim (0, Q)$, $\quad Q = \mathrm{diag}(q_1, q_2, q_3, q_4)$
$y = Cx + v = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} x + v$
$v \sim (0, R)$, $\quad R = \mathrm{diag}(0.01^2, 0.01^2, (0.01C)^2)$ $\quad (q = CV)$

51 Unknown Input Estimation: Rowing Machine
Q = diag([0.01^2, , 1^2, 1^2]): not responsive enough

52 Unknown Input Estimation: Rowing Machine
Q = diag([0.01^2, , 1^2, ]): too responsive

53 Unknown Input Estimation: Rowing Machine
Q = diag([0.01^2, , 1^2, ]): just about right

54 Unknown Input Estimation: Rowing Machine
How can we improve our results?
We have modeled $F$ as a noisy constant: $\dot{F} = w$
Instead we can model $F$ as a ramp:
$\dot{F} = F_v + w_1$
$\dot{F}_v = w_2$
This increases the number of states by 1 but gives the Kalman filter more flexibility to estimate a value for $F$ that matches the measurements.
RMS force estimation error decreases from 0.8 N to 0.4 N.

55 Outline
Linear Systems
Probability
State Means and Covariances
Least Squares Estimation
The Kalman Filter
Unknown Input Estimation
The Extended Kalman Filter

56 Nonlinear Kalman Filtering
Nonlinear system:
$\dot{x} = f(x, u, w, t)$, $\quad w \sim (0, Q)$
$y = h(x, v, t)$, $\quad v \sim (0, R)$
Linearization:
$\dot{x} \approx f(x_0, u_0, w_0, t) + \left.\dfrac{\partial f}{\partial x}\right|_0 (x - x_0) + \left.\dfrac{\partial f}{\partial u}\right|_0 (u - u_0) + \left.\dfrac{\partial f}{\partial w}\right|_0 (w - w_0) = f(x_0, u_0, w_0, t) + A\,\Delta x + B\,\Delta u + L\,\Delta w$
$y \approx h(x_0, v_0, t) + \left.\dfrac{\partial h}{\partial x}\right|_0 (x - x_0) + \left.\dfrac{\partial h}{\partial v}\right|_0 (v - v_0) = h(x_0, v_0, t) + C\,\Delta x + M\,\Delta v$

57 Nonlinear Kalman Filtering
$\dot{x}_0 = f(x_0, u_0, w_0, t)$, $\quad y_0 = h(x_0, v_0, t)$
$\Delta x = x - x_0$, $\quad \Delta y = y - y_0$
$\Delta\dot{x} = A\,\Delta x + Lw = A\,\Delta x + \tilde{w}$, $\quad \tilde{w} \sim (0, \tilde{Q})$, $\tilde{Q} = LQL^T$
$\Delta y = C\,\Delta x + Mv = C\,\Delta x + \tilde{v}$, $\quad \tilde{v} \sim (0, \tilde{R})$, $\tilde{R} = MRM^T$
We have a linear system with state $\Delta x$ and measurement $\Delta y$.

58 The Linearized Kalman Filter
System equations:
$\dot{x} = f(x, u, w, t)$, $\quad w \sim (0, Q)$
$y = h(x, v, t)$, $\quad v \sim (0, R)$
Nominal trajectory:
$\dot{x}_0 = f(x_0, u_0, 0, t)$, $\quad y_0 = h(x_0, 0, t)$
Compute partial derivative matrices:
$A = \partial f/\partial x|_0$, $\quad L = \partial f/\partial w|_0$, $\quad C = \partial h/\partial x|_0$, $\quad M = \partial h/\partial v|_0$
Compute $\tilde{Q} = LQL^T$, $\quad \tilde{R} = MRM^T$, $\quad \Delta y = y - y_0$
Kalman filter equations:
$\Delta\hat{x}(0) = 0$, $\quad P(0) = E[(\Delta x(0) - \Delta\hat{x}(0))(\Delta x(0) - \Delta\hat{x}(0))^T]$
$\Delta\dot{\hat{x}} = A\,\Delta\hat{x} + K(\Delta y - C\,\Delta\hat{x})$, $\quad K = PC^T \tilde{R}^{-1}$
$\dot{P} = AP + PA^T + \tilde{Q} - PC^T \tilde{R}^{-1} CP$
$\hat{x} = x_0 + \Delta\hat{x}$

59 The Extended Kalman Filter
Combine the $\dot{x}_0$ and $\Delta\dot{\hat{x}}$ equations:
$\dot{x}_0 + \Delta\dot{\hat{x}} = f(x_0, u_0, w_0, t) + A\,\Delta\hat{x} + K[y - y_0 - C(\hat{x} - x_0)]$
Choose $x_0(t) = \hat{x}(t)$, so $\Delta\hat{x}(t) = 0$ and $\Delta\dot{\hat{x}}(t) = 0$
Then the nominal measurement becomes
$y_0 = h(x_0, v_0, t) = h(\hat{x}, v_0, t)$
and the first equation above becomes
$\dot{\hat{x}} = f(\hat{x}, u, w_0, t) + K[y - h(\hat{x}, v_0, t)]$

60 The Extended Kalman Filter
System equations:
$\dot{x} = f(x, u, w, t)$, $\quad w \sim (0, Q)$
$y = h(x, v, t)$, $\quad v \sim (0, R)$
Compute partial derivative matrices:
$A = \partial f/\partial x|_{\hat{x}}$, $\quad L = \partial f/\partial w|_{\hat{x}}$, $\quad C = \partial h/\partial x|_{\hat{x}}$, $\quad M = \partial h/\partial v|_{\hat{x}}$
Compute $\tilde{Q} = LQL^T$, $\quad \tilde{R} = MRM^T$
Kalman filter equations:
$\hat{x}(0) = E[x(0)]$, $\quad P(0) = E[(x(0) - \hat{x}(0))(x(0) - \hat{x}(0))^T]$
$\dot{\hat{x}} = f(\hat{x}, u, 0, t) + K[y - h(\hat{x}, 0, t)]$, $\quad K = PC^T \tilde{R}^{-1}$
$\dot{P} = AP + PA^T + \tilde{Q} - PC^T \tilde{R}^{-1} CP$
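A toy continuous-time EKF sketch (entirely my illustration: a made-up pendulum model with L = I and M = I, so that Q-tilde = Q and R-tilde = R):

    f = @(x) [x(2) ; -9.81*sin(x(1))];       % x = [angle ; rate]
    h = @(x) x(1);                           % measure the angle only
    Afun = @(x) [0, 1 ; -9.81*cos(x(1)), 0]; % A = df/dx at xhat
    Cmat = [1, 0];                           % C = dh/dx (constant here)
    Q = 0.01*eye(2); R = 0.02; dt = 1e-3;
    x = [0.5 ; 0]; xhat = [0 ; 0]; P = eye(2);
    for k = 1 : 5000
        x = x + f(x)*dt;                     % true system (noise-free)
        y = h(x) + sqrt(R/dt)*randn;         % noisy angle measurement
        A = Afun(xhat);                      % relinearize about xhat
        K = P*Cmat' / R;                     % K = P C' R^-1
        xhat = xhat + (f(xhat) + K*(y - h(xhat)))*dt;
        P = P + (A*P + P*A' + Q - P*(Cmat'/R)*Cmat*P)*dt;
    end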

61 Robot State Estimation
Robot dynamics: $u = M\ddot{q} + C\dot{q} + g + R + F$
$u$ = control forces/torques, $\quad q$ = joint coordinates
$M(q)$ = mass matrix, $\quad C(q, \dot{q})$ = Coriolis matrix
$g(q)$ = gravity vector, $\quad R(\dot{q})$ = friction vector
$F(q)$ = external forces/torques
State space model:
$\ddot{q} = M^{-1}(u - C\dot{q} - g - R - F)$
$x = [\, q_1 \;\; q_2 \;\; q_3 \;\; \dot{q}_1 \;\; \dot{q}_2 \;\; \dot{q}_3 \,]^T = [\, x_1^T \;\; x_2^T \,]^T$
$\dot{x} = \begin{bmatrix} x_2 \\ M^{-1}(u - C\dot{q} - g - R - F) \end{bmatrix} = f(x, u, w, t)$
$y = q_3 = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix} x + v = h(x, v, t)$
The detailed model is available at:
dynamicsystems.asmedigitalcollection.asme.org/article.aspx?articleid=

62 Robot State Estimation
System equations:
$\dot{x} = f(x, u, w, t)$, $\quad w \sim (0, Q)$
$y = h(x, v, t)$, $\quad v \sim (0, R)$
Compute partial derivative matrices:
$A = \partial f/\partial x|_{\hat{x}}$, $\quad L = \partial f/\partial w|_{\hat{x}}$, $\quad C = \partial h/\partial x|_{\hat{x}}$, $\quad M = \partial h/\partial v|_{\hat{x}}$
Compute $\tilde{Q} = LQL^T$, $\quad \tilde{R} = MRM^T$
Kalman filter equations:
$\hat{x}(0) = E[x(0)]$, $\quad P(0) = E[(x(0) - \hat{x}(0))(x(0) - \hat{x}(0))^T]$
$\dot{\hat{x}} = f(\hat{x}, u, 0, t) + K[y - h(\hat{x}, 0, t)]$, $\quad K = PC^T \tilde{R}^{-1}$
$\dot{P} = AP + PA^T + \tilde{Q} - PC^T \tilde{R}^{-1} CP$

63 Robot State Estimation: Robot.zip
First we write a simulation for the dynamic system model: simgrf.mdl and statederccfforce.m
Then we write a controller: PBimpedanceCCFfull.m
Then we calculate the A matrix: CalcFMatrix.m and EvaluateFMatrix.m
Then we write a Kalman filter: zhatdot.m
Run the program:
Run setup.m
Run simgrf.mdl
Look at the output plots:
Run plotter.m to see control performance
Open the "Plotting - 1 meas" block to see estimator performance
Open the q1, q1hat scope to see hip position
Open the q2, q2hat scope to see thigh angle
Open the q3meas, q3, q3hat scope to see knee angle

64 Additional Estimation Topics
Nonlinear estimation:
Iterated EKF
Second-order EKF
Unscented Kalman filter
Particle filter
Many others...
Parameter estimation
Smoothing
Adaptive filtering
Robust filtering ($H_\infty$ filtering)
Constrained filtering
