The Kalman Filter ImPr Talk


1 The Kalman Filter. ImPr Talk. Ged Ridgway, Centre for Medical Image Computing, November 2006

2 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman Filter Equations Extensions

5 What is the Kalman Filter for? State estimation in dynamical systems, in particular for discrete-time models with continuous, possibly hidden, state. It is optimal for linear Gaussian systems.

8 Why filter? The KF is a state-space filter: noisy measurements in, cleaned state estimate out. Estimates are updated online with each new measurement. It is equivalent to an (optimally) adaptive low-pass (IIR) filter.

10 Why Kalman? (!) R. E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35-45, 1960. Rudolf Emil Kalman, born in Budapest, Hungary, May 19, 1930.

11 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman Filter Equations Extensions

17 State space examples - dynamical system. Motion of a particle (in 1D) with constant (but noisy) acceleration:

s_k = s_{k-1} + \dot{s}_{k-1}\,\Delta T + \tfrac{1}{2}\ddot{s}_{k-1}(\Delta T)^2 + \epsilon_k
\dot{s}_k = \dot{s}_{k-1} + \ddot{s}_{k-1}\,\Delta T + \dot{\epsilon}_k
\ddot{s}_k = \ddot{s}_{k-1} + \ddot{\epsilon}_k

or, in matrix form,

\begin{bmatrix} s_k \\ \dot{s}_k \\ \ddot{s}_k \end{bmatrix} =
\begin{bmatrix} 1 & \Delta T & (\Delta T)^2/2 \\ 0 & 1 & \Delta T \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} s_{k-1} \\ \dot{s}_{k-1} \\ \ddot{s}_{k-1} \end{bmatrix} + w_k

i.e. x_k = F x_{k-1} + w_k, with w_k \sim \mathcal{N}(w_k \mid 0, Q).

20 State space examples - time-series. Autoregressive AR(n) model:

\begin{bmatrix} s_k \\ s_{k-1} \\ \vdots \\ s_{k-(n-1)} \end{bmatrix} =
\begin{bmatrix} a_1 & a_2 & \cdots & a_n \\ 1 & 0 & \cdots & 0 \\ & \ddots & & \vdots \\ 0 & \cdots & 1 & 0 \end{bmatrix}
\begin{bmatrix} s_{k-1} \\ s_{k-2} \\ \vdots \\ s_{k-n} \end{bmatrix} +
\begin{bmatrix} \epsilon_k \\ 0 \\ \vdots \\ 0 \end{bmatrix}

i.e. x_k = F x_{k-1} + w_k, with Q_k = \mathrm{diag}(\sigma^2, 0, \ldots, 0).

22 The state space model
x_k \in \mathbb{R}^n  vector of hidden state variables
z_k \in \mathbb{R}^m  vector of observations
u_k \in \mathbb{R}^p  vector of control inputs
Linear Gaussian:
x_k = F_k x_{k-1} + B_k u_k + w_k
z_k = H_k x_k + v_k    (1)
w_k \sim \mathcal{N}(w_k \mid 0, Q_k), \quad v_k \sim \mathcal{N}(v_k \mid 0, R_k)

23 The state space model

24 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman Filter Equations Extensions

29 How does the KF work? Recursive (online) updating of the current state estimate and its precision (covariance). 1. Using the system model: predict the next state; estimate the new (reduced) precision. 2. Using the new measurement: correct the state estimate; update the (increased) precision.

34 How is it derived? Two equivalent alternatives. 1. Classical: show that the form of the Kalman-gain update is unbiased; derive expressions for the posterior covariance (MSE); analytically find the optimal Kalman gain. 2. Bayesian: treat the previous estimates as the prior; the measurement model gives the likelihood; derive the posterior distribution.

35 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman Filter Equations Extensions

37 Bayesian updating. A simple example:
p(x) = \mathcal{N}(x \mid \hat{x}, \sigma_w^2)    (prior)
z = x + v, \quad v \sim \mathcal{N}(0, \sigma_v^2)
p(z \mid x) = \mathcal{N}(z \mid x, \sigma_v^2)    (likelihood)
p(x \mid z) \propto p(z \mid x)\, p(x)    (from Bayes)

42 Bayesian updating
p(z \mid x)\, p(x) = \mathcal{N}(z \mid x, \sigma_v^2)\, \mathcal{N}(x \mid \hat{x}, \sigma_w^2)
-2\log(p(z \mid x)\, p(x)) = \frac{(z-x)^2}{\sigma_v^2} + \frac{(x-\hat{x})^2}{\sigma_w^2} + \cdots
= x^2 \left( \frac{1}{\sigma_v^2} + \frac{1}{\sigma_w^2} \right) - 2x \left( \frac{z}{\sigma_v^2} + \frac{\hat{x}}{\sigma_w^2} \right) + \cdots
= x^2/a - 2bx + \cdots
= \frac{1}{a}(x - ab)^2 + \cdots

45 Bayesian updating (Information form)
-2\log p(x \mid z) = \frac{1}{a}(x - ab)^2 + \cdots
p(x \mid z) = \mathcal{N}(x \mid ab, a), \quad \text{where } \frac{1}{a} = \frac{1}{\sigma_v^2} + \frac{1}{\sigma_w^2}, \quad b = \frac{z}{\sigma_v^2} + \frac{\hat{x}}{\sigma_w^2}
Note: the prior and measurement information (inverse variances) add, and the posterior mean is the information-weighted mean.

46 Bayesian updating (Information form). Summary:
\frac{1}{a} = \frac{1}{\sigma_v^2} + \frac{1}{\sigma_w^2}
posterior cov = a
posterior mean = a \left( \frac{z}{\sigma_v^2} + \frac{\hat{x}}{\sigma_w^2} \right)    (2)

49 Bayesian updating (Kalman-gain form). Now rearrange, for reasons that might become clear later...
a = \left( \frac{1}{\sigma_v^2} + \frac{1}{\sigma_w^2} \right)^{-1} = \frac{\sigma_w^2 \sigma_v^2}{\sigma_w^2 + \sigma_v^2} = \frac{\sigma_w^2(\sigma_w^2 + \sigma_v^2) - \sigma_w^4}{\sigma_w^2 + \sigma_v^2} = \sigma_w^2 - \frac{\sigma_w^4}{\sigma_v^2 + \sigma_w^2} = (1-k)\sigma_w^2, \quad k = \frac{\sigma_w^2}{\sigma_v^2 + \sigma_w^2}

52 Bayesian updating (Kalman-gain form). Bear with me...
a = (1-k)\sigma_w^2, \quad k = \frac{\sigma_w^2}{\sigma_w^2 + \sigma_v^2}
(1-k) = \frac{\sigma_v^2}{\sigma_w^2 + \sigma_v^2} = k\,\frac{\sigma_v^2}{\sigma_w^2}
ab = (1-k)\sigma_w^2 \left( \frac{z}{\sigma_v^2} + \frac{\hat{x}}{\sigma_w^2} \right) = (1-k)\sigma_w^2 \frac{z}{\sigma_v^2} + (1-k)\hat{x} = kz + \hat{x} - k\hat{x} = \hat{x} + k(z - \hat{x})

53 Bayesian updating (Kalman-gain form). Summary:
k = \frac{\sigma_w^2}{\sigma_w^2 + \sigma_v^2}
posterior cov = (1-k)\sigma_w^2
posterior mean = \hat{x} + k(z - \hat{x})    (3)

54 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman Filter Equations Extensions

58 Derivation outline. The prediction step is straightforward. From the model
x_k = F_k x_{k-1} + B_k u_k + w_k
\hat{x}_{k|k-1} = F_k \hat{x}_{k-1|k-1} + B_k u_k
P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k
This gives us the time-k prior mean and covariance in terms of the time-(k-1) posteriors.

63 Derivation outline. The correction step is algebraically tricky, but conceptually very similar to the simple example of Bayesian updating.
p(x_k \mid \hat{x}_{k|k-1}, z_k) \propto p(z_k \mid x_k, \hat{x}_{k|k-1})\, p(x_k \mid \hat{x}_{k|k-1})
= p(z_k \mid x_k)\, p(x_k \mid \hat{x}_{k|k-1})
\propto \mathcal{N}(z_k \mid H_k x_k, R_k)\, \mathcal{N}(x_k \mid \hat{x}_{k|k-1}, P_{k|k-1})    (4)
p(x_k \mid \hat{x}_{k|k-1}, z_k) = \mathcal{N}(x_k \mid \hat{x}_{k|k}, P_{k|k})    (5)

64 Derivation outline The mean and covariance of the time-k posterior in (5) can be derived in information form by completing the square of the quadratic form in x that arises from (4).

66 Derivation outline. Rearranging into Kalman-gain form then requires some fiddly algebra and the following two matrix inversion identities:
(A + BCD)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}    (6)
(A + BCD)^{-1} B C = A^{-1} B (C^{-1} + D A^{-1} B)^{-1}    (7)

68 Results (information form)
\hat{x}_{k|k-1} = F_k \hat{x}_{k-1|k-1} + B_k u_k
P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k
P_{k|k}^{-1} = P_{k|k-1}^{-1} + H_k^T R_k^{-1} H_k
\hat{x}_{k|k} = P_{k|k} (P_{k|k-1}^{-1} \hat{x}_{k|k-1} + H_k^T R_k^{-1} z_k)    (8)
(Compare (2) from before.)

70 Results (Kalman-gain form)
\hat{x}_{k|k-1} = F_k \hat{x}_{k-1|k-1} + B_k u_k
P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k
K = P_{k|k-1} H_k^T (R_k + H_k P_{k|k-1} H_k^T)^{-1}
P_{k|k} = (I - K H_k) P_{k|k-1}
\hat{x}_{k|k} = \hat{x}_{k|k-1} + K (z_k - H_k \hat{x}_{k|k-1})    (9)
(Compare (3), and see kalman update.m.)

71 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman Filter Equations Extensions

75 Prediction & Smoothing. The model allows us to predict ahead of the measurements: at each iteration we maintain the optimal estimate of the future state given the data we have so far. We can also estimate the state at a point before the current measurement, known as fixed-lag smoothing. If we first collect all the data and want to process it offline ("fixed-interval smoothing"), then a closely related algorithm can be derived. The Rauch-Tung-Striebel smoother (or Kalman smoother) uses a forward-backward algorithm to achieve optimal offline smoothing with two passes of a Kalman-like recursion (see kalman smoother.m).

78 Nonlinearity. For non-linear state space models
x_k = f_k(x_{k-1}, u_k) + w_k
z_k = h_k(x_k) + v_k
either repeatedly linearise about the current point (EKF), or non-linearly propagate (deterministic) samples and re-estimate the Gaussian mean and covariance (UKF).

80 Non-Gaussianity. If the process or measurement noise isn't normally distributed, then for the correction step
p(x_k \mid \hat{x}_{k|k-1}, z_k) \propto p(z_k \mid x_k)\, p(x_k \mid \hat{x}_{k|k-1})
we can represent the distribution with samples using Monte Carlo methods. This is known as Sequential Monte Carlo (SMC) or particle filtering, and is a very powerful technique. See, for example, the Condensation algorithm in computer vision.


More information

Short tutorial on data assimilation

Short tutorial on data assimilation Mitglied der Helmholtz-Gemeinschaft Short tutorial on data assimilation 23 June 2015 Wolfgang Kurtz & Harrie-Jan Hendricks Franssen Institute of Bio- and Geosciences IBG-3 (Agrosphere), Forschungszentrum

More information

Probing the covariance matrix

Probing the covariance matrix Probing the covariance matrix Kenneth M. Hanson Los Alamos National Laboratory (ret.) BIE Users Group Meeting, September 24, 2013 This presentation available at http://kmh-lanl.hansonhub.com/ LA-UR-06-5241

More information

On a Data Assimilation Method coupling Kalman Filtering, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model

On a Data Assimilation Method coupling Kalman Filtering, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model On a Data Assimilation Method coupling, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model 2016 SIAM Conference on Uncertainty Quantification Basile Marchand 1, Ludovic

More information

Forecasting and data assimilation

Forecasting and data assimilation Supported by the National Science Foundation DMS Forecasting and data assimilation Outline Numerical models Kalman Filter Ensembles Douglas Nychka, Thomas Bengtsson, Chris Snyder Geophysical Statistics

More information

Optimal State Estimation for Boolean Dynamical Systems using a Boolean Kalman Smoother

Optimal State Estimation for Boolean Dynamical Systems using a Boolean Kalman Smoother Optimal State Estimation for Boolean Dynamical Systems using a Boolean Kalman Smoother Mahdi Imani and Ulisses Braga-Neto Department of Electrical and Computer Engineering Texas A&M University College

More information

Particle Filters. Outline

Particle Filters. Outline Particle Filters M. Sami Fadali Professor of EE University of Nevada Outline Monte Carlo integration. Particle filter. Importance sampling. Degeneracy Resampling Example. 1 2 Monte Carlo Integration Numerical

More information

Least Squares Estimation Namrata Vaswani,

Least Squares Estimation Namrata Vaswani, Least Squares Estimation Namrata Vaswani, namrata@iastate.edu Least Squares Estimation 1 Recall: Geometric Intuition for Least Squares Minimize J(x) = y Hx 2 Solution satisfies: H T H ˆx = H T y, i.e.

More information

Introduction to Probabilistic Graphical Models: Exercises

Introduction to Probabilistic Graphical Models: Exercises Introduction to Probabilistic Graphical Models: Exercises Cédric Archambeau Xerox Research Centre Europe cedric.archambeau@xrce.xerox.com Pascal Bootcamp Marseille, France, July 2010 Exercise 1: basics

More information

Gaussian Process Approximations of Stochastic Differential Equations

Gaussian Process Approximations of Stochastic Differential Equations Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Dan Cawford Manfred Opper John Shawe-Taylor May, 2006 1 Introduction Some of the most complex models routinely run

More information

Machine Learning for OR & FE

Machine Learning for OR & FE Machine Learning for OR & FE Hidden Markov Models Martin Haugh Department of Industrial Engineering and Operations Research Columbia University Email: martin.b.haugh@gmail.com Additional References: David

More information

A Time-Varying Threshold STAR Model of Unemployment

A Time-Varying Threshold STAR Model of Unemployment A Time-Varying Threshold STAR Model of Unemloyment michael dueker a michael owyang b martin sola c,d a Russell Investments b Federal Reserve Bank of St. Louis c Deartamento de Economia, Universidad Torcuato

More information

An introduction to Sequential Monte Carlo

An introduction to Sequential Monte Carlo An introduction to Sequential Monte Carlo Thang Bui Jes Frellsen Department of Engineering University of Cambridge Research and Communication Club 6 February 2014 1 Sequential Monte Carlo (SMC) methods

More information

The Ensemble Kalman Filter:

The Ensemble Kalman Filter: p.1 The Ensemble Kalman Filter: Theoretical formulation and practical implementation Geir Evensen Norsk Hydro Research Centre, Bergen, Norway Based on Evensen, Ocean Dynamics, Vol 5, No p. The Ensemble

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Probabilistic Graphical Models Brown University CSCI 2950-P, Spring 2013 Prof. Erik Sudderth Lecture 13: Learning in Gaussian Graphical Models, Non-Gaussian Inference, Monte Carlo Methods Some figures

More information

ECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering

ECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering ECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering Lecturer: Nikolay Atanasov: natanasov@ucsd.edu Teaching Assistants: Siwei Guo: s9guo@eng.ucsd.edu Anwesan Pal:

More information

Graphical models for statistical inference and data assimilation

Graphical models for statistical inference and data assimilation Physica D ( ) www.elsevier.com/locate/physd Graphical models for statistical inference and data assimilation Alexander T. Ihler a,, Sergey Kirshner b, Michael Ghil c,d, Andrew W. Robertson e, Padhraic

More information

Least Squares and Kalman Filtering Questions: me,

Least Squares and Kalman Filtering Questions:  me, Least Squares and Kalman Filtering Questions: Email me, namrata@ece.gatech.edu Least Squares and Kalman Filtering 1 Recall: Weighted Least Squares y = Hx + e Minimize Solution: J(x) = (y Hx) T W (y Hx)

More information

Graphical Models for Statistical Inference and Data Assimilation

Graphical Models for Statistical Inference and Data Assimilation Graphical Models for Statistical Inference and Data Assimilation Alexander T. Ihler a Sergey Kirshner a Michael Ghil b,c Andrew W. Robertson d Padhraic Smyth a a Donald Bren School of Information and Computer

More information

State Estimation Based on Nested Particle Filters

State Estimation Based on Nested Particle Filters Cleveland State University EngagedScholarship@CSU ETD Archive 2013 State Estimation Based on Nested Particle Filters Swathi Srinivasan Cleveland State University How does access to this work benefit you?

More information

A Theoretical Overview on Kalman Filtering

A Theoretical Overview on Kalman Filtering A Theoretical Overview on Kalman Filtering Constantinos Mavroeidis Vanier College Presented to professors: IVANOV T. IVAN STAHN CHRISTIAN Email: cmavroeidis@gmail.com June 6, 208 Abstract Kalman filtering

More information

Prediction of ESTSP Competition Time Series by Unscented Kalman Filter and RTS Smoother

Prediction of ESTSP Competition Time Series by Unscented Kalman Filter and RTS Smoother Prediction of ESTSP Competition Time Series by Unscented Kalman Filter and RTS Smoother Simo Särkkä, Aki Vehtari and Jouko Lampinen Helsinki University of Technology Department of Electrical and Communications

More information

X t = a t + r t, (7.1)

X t = a t + r t, (7.1) Chapter 7 State Space Models 71 Introduction State Space models, developed over the past 10 20 years, are alternative models for time series They include both the ARIMA models of Chapters 3 6 and the Classical

More information

Condensed Table of Contents for Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control by J. C.

Condensed Table of Contents for Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control by J. C. Condensed Table of Contents for Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control by J. C. Spall John Wiley and Sons, Inc., 2003 Preface... xiii 1. Stochastic Search

More information

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Simo Särkkä E-mail: simo.sarkka@hut.fi Aki Vehtari E-mail: aki.vehtari@hut.fi Jouko Lampinen E-mail: jouko.lampinen@hut.fi Abstract

More information

Lessons in Estimation Theory for Signal Processing, Communications, and Control

Lessons in Estimation Theory for Signal Processing, Communications, and Control Lessons in Estimation Theory for Signal Processing, Communications, and Control Jerry M. Mendel Department of Electrical Engineering University of Southern California Los Angeles, California PRENTICE HALL

More information

Particle lter for mobile robot tracking and localisation

Particle lter for mobile robot tracking and localisation Particle lter for mobile robot tracking and localisation Tinne De Laet K.U.Leuven, Dept. Werktuigkunde 19 oktober 2005 Particle lter 1 Overview Goal Introduction Particle lter Simulations Particle lter

More information

Kalman Filter. Man-Wai MAK

Kalman Filter. Man-Wai MAK Kalman Filter Man-Wai MAK Dept. of Electronic and Information Engineering, The Hong Kong Polytechnic University enmwmak@polyu.edu.hk http://www.eie.polyu.edu.hk/ mwmak References: S. Gannot and A. Yeredor,

More information

Part 1: Expectation Propagation

Part 1: Expectation Propagation Chalmers Machine Learning Summer School Approximate message passing and biomedicine Part 1: Expectation Propagation Tom Heskes Machine Learning Group, Institute for Computing and Information Sciences Radboud

More information

Imprecise Filtering for Spacecraft Navigation

Imprecise Filtering for Spacecraft Navigation Imprecise Filtering for Spacecraft Navigation Tathagata Basu Cristian Greco Thomas Krak Durham University Strathclyde University Ghent University Filtering for Spacecraft Navigation The General Problem

More information

Graphical Models for Statistical Inference and Data Assimilation

Graphical Models for Statistical Inference and Data Assimilation Graphical Models for Statistical Inference and Data Assimilation Alexander T. Ihler a Sergey Kirshner b Michael Ghil c,d Andrew W. Robertson e Padhraic Smyth a a Donald Bren School of Information and Computer

More information

Markov chain Monte Carlo methods for visual tracking

Markov chain Monte Carlo methods for visual tracking Markov chain Monte Carlo methods for visual tracking Ray Luo rluo@cory.eecs.berkeley.edu Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA 94720

More information

Probabilistic Fundamentals in Robotics. DAUIN Politecnico di Torino July 2010

Probabilistic Fundamentals in Robotics. DAUIN Politecnico di Torino July 2010 Probabilistic Fundamentals in Robotics Gaussian Filters Basilio Bona DAUIN Politecnico di Torino July 2010 Course Outline Basic mathematical framework Probabilistic models of mobile robots Mobile robot

More information

Volume 30, Issue 3. A note on Kalman filter approach to solution of rational expectations models

Volume 30, Issue 3. A note on Kalman filter approach to solution of rational expectations models Volume 30, Issue 3 A note on Kalman filter approach to solution of rational expectations models Marco Maria Sorge BGSE, University of Bonn Abstract In this note, a class of nonlinear dynamic models under

More information

Gaussian Process Approximations of Stochastic Differential Equations

Gaussian Process Approximations of Stochastic Differential Equations Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Centre for Computational Statistics and Machine Learning University College London c.archambeau@cs.ucl.ac.uk CSML

More information

Temporal probability models. Chapter 15, Sections 1 5 1

Temporal probability models. Chapter 15, Sections 1 5 1 Temporal probability models Chapter 15, Sections 1 5 Chapter 15, Sections 1 5 1 Outline Time and uncertainty Inference: filtering, prediction, smoothing Hidden Markov models Kalman filters (a brief mention)

More information

CPSC 540: Machine Learning

CPSC 540: Machine Learning CPSC 540: Machine Learning MCMC and Non-Parametric Bayes Mark Schmidt University of British Columbia Winter 2016 Admin I went through project proposals: Some of you got a message on Piazza. No news is

More information

A new unscented Kalman filter with higher order moment-matching

A new unscented Kalman filter with higher order moment-matching A new unscented Kalman filter with higher order moment-matching KSENIA PONOMAREVA, PARESH DATE AND ZIDONG WANG Department of Mathematical Sciences, Brunel University, Uxbridge, UB8 3PH, UK. Abstract This

More information

Bayesian tracking of multiple point targets using expectation maximization

Bayesian tracking of multiple point targets using expectation maximization 450 400 350 300 p y (m) 250 200 150 100 50 0 Measurement data True tracks Track estimates 50 100 150 200 250 300 350 400 450 500 p x (m) Bayesian tracking of multiple point targets using expectation maximization

More information

A new Hierarchical Bayes approach to ensemble-variational data assimilation

A new Hierarchical Bayes approach to ensemble-variational data assimilation A new Hierarchical Bayes approach to ensemble-variational data assimilation Michael Tsyrulnikov and Alexander Rakitko HydroMetCenter of Russia College Park, 20 Oct 2014 Michael Tsyrulnikov and Alexander

More information

MMSE-Based Filtering for Linear and Nonlinear Systems in the Presence of Non-Gaussian System and Measurement Noise

MMSE-Based Filtering for Linear and Nonlinear Systems in the Presence of Non-Gaussian System and Measurement Noise MMSE-Based Filtering for Linear and Nonlinear Systems in the Presence of Non-Gaussian System and Measurement Noise I. Bilik 1 and J. Tabrikian 2 1 Dept. of Electrical and Computer Engineering, University

More information

ENGR352 Problem Set 02

ENGR352 Problem Set 02 engr352/engr352p02 September 13, 2018) ENGR352 Problem Set 02 Transfer function of an estimator 1. Using Eq. (1.1.4-27) from the text, find the correct value of r ss (the result given in the text is incorrect).

More information

Expectation Propagation in Dynamical Systems

Expectation Propagation in Dynamical Systems Expectation Propagation in Dynamical Systems Marc Peter Deisenroth Joint Work with Shakir Mohamed (UBC) August 10, 2012 Marc Deisenroth (TU Darmstadt) EP in Dynamical Systems 1 Motivation Figure : Complex

More information

Statistics Homework #4

Statistics Homework #4 Statistics 910 1 Homework #4 Chapter 6, Shumway and Stoffer These are outlines of the solutions. If you would like to fill in other details, please come see me during office hours. 6.1 State-space representation

More information

State Space Gaussian Processes with Non-Gaussian Likelihoods

State Space Gaussian Processes with Non-Gaussian Likelihoods State Space Gaussian Processes with Non-Gaussian Likelihoods Hannes Nickisch 1 Arno Solin 2 Alexander Grigorievskiy 2,3 1 Philips Research, 2 Aalto University, 3 Silo.AI ICML2018 July 13, 2018 Outline

More information