Feedback Particle Filter and its Application to Coupled Oscillators Presentation at University of Maryland, College Park, MD


Feedback Particle Filter and its Application to Coupled Oscillators
Prashant Mehta
Dept. of Mechanical Science and Engineering and the Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
Presentation at University of Maryland, College Park, MD. May 1, 2015

Bayesian Inference/Filtering: Mathematics of prediction, Bayes' rule

Signal (hidden): X, with X ~ P(X) (prior, known)
Observation: Y (known)
Observation model: P(Y|X) (known)
Problem: What is X?
Solution, Bayes' rule:
    P(X|Y) ∝ P(Y|X) P(X)
where P(X|Y) is the posterior and P(X) the prior.

This talk is about implementing Bayes' rule in dynamic, nonlinear, non-Gaussian settings!
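In the static, discrete case the rule above is one line of arithmetic. A minimal numerical sketch (binary state and observation; all probability values are illustrative choices):

```python
# Discrete Bayes rule: P(X|Y) is proportional to P(Y|X) P(X).
# Hidden state X in {0, 1} with a known prior; observation model P(Y|X).

prior = {0: 0.5, 1: 0.5}                       # P(X), assumed known
likelihood = {0: {0: 0.9, 1: 0.1},             # P(Y|X=0)
              1: {0: 0.2, 1: 0.8}}             # P(Y|X=1)

def bayes_update(prior, likelihood, y):
    """Return the posterior P(X | Y = y) by normalizing P(y|x) P(x)."""
    unnorm = {x: likelihood[x][y] * prior[x] for x in prior}
    z = sum(unnorm.values())                   # normalization constant P(Y = y)
    return {x: w / z for x, w in unnorm.items()}

# Observing Y = 1 shifts posterior mass toward X = 1
posterior = bayes_update(prior, likelihood, y=1)
```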

Applications: Target state estimation

Applications: Bayesian model of sensory signal processing

Nonlinear Filtering: Mathematical Problem

Signal model:       dX_t = a(X_t) dt + dB_t,   X_0 ~ p_0(·)
Observation model:  dZ_t = h(X_t) dt + dW_t
Problem: What is X_t, given the observations up to time t, denoted Z_t?
Answer in terms of the posterior: P(X_t | Z_t) =: p*(x, t).

The posterior is an information state:
    P(X_t ∈ A | Z_t) = ∫_A p*(x, t) dx
    E(X_t | Z_t) = ∫_R x p*(x, t) dx

A. Bain and D. Crisan, Fundamentals of Stochastic Filtering. Springer, 2009.
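Once the posterior density is available, both conditional probabilities and conditional expectations are plain integrals against it. A small numerical illustration (a Gaussian stand-in for p* on a grid; the grid and the event A are arbitrary choices):

```python
import numpy as np

# The posterior p*(x, t) as an information state: given density values on a
# grid, conditional probabilities and expectations are Riemann sums.
grid = np.linspace(-6, 6, 2401)
dx = grid[1] - grid[0]
p_star = np.exp(-grid**2 / 2)      # Gaussian stand-in for p*(x, t)
p_star /= p_star.sum() * dx        # normalize the density

# P(X_t in A | Z_t) for the illustrative event A = [0, 1]
mask = (grid >= 0) & (grid <= 1)
prob_A = p_star[mask].sum() * dx

# E(X_t | Z_t)
mean = (grid * p_star).sum() * dx
```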

Kalman filter: Solution in linear Gaussian settings

    dX_t = α X_t dt + dB_t      (1)
    dZ_t = γ X_t dt + dW_t      (2)

Kalman filter: p* = N(X̂_t, Σ_t)

    dX̂_t = α X̂_t dt + K (dZ_t − γ X̂_t dt)
                       [Update]

Observation:  dZ_t = γ X_t dt + dW_t
Prediction:   dẐ_t = γ X̂_t dt
Innov. error: dI_t = dZ_t − dẐ_t = dZ_t − γ X̂_t dt
Control:      dU_t = K dI_t
Gain:         K is the Kalman gain

R. E. Kalman. A new approach to linear filtering and prediction problems. J. Basic Eng. (1960); R. E. Kalman and R. S. Bucy. New results in linear filtering and prediction theory. J. Basic Eng. (1961).

Kalman filter

    dX̂_t = α X̂_t dt + K (dZ_t − γ X̂_t dt)
            [Prediction]   [Update]

This illustrates the key features of feedback control:
1. Use the error to obtain the control (dU_t = K dI_t)
2. Negative-gain feedback serves to reduce the error: K = (γ/σ_W²) Σ_t, where γ/σ_W² is the SNR

Simple enough to be included in the first undergraduate course on control!
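The filter above is straightforward to simulate. A minimal Euler-Maruyama sketch of the model (1)-(2) together with the Kalman-Bucy filter and its Riccati equation; the parameter values are illustrative and the noise intensities are taken to be one:

```python
import numpy as np

# Euler-Maruyama simulation of dX = alpha*X dt + dB, dZ = gamma*X dt + dW,
# with the Kalman-Bucy filter dXhat = alpha*Xhat dt + K (dZ - gamma*Xhat dt),
# gain K = gamma*Sigma (sigma_W = 1), and the Riccati equation for Sigma.
rng = np.random.default_rng(0)
alpha, gamma = -0.5, 1.0          # illustrative: stable signal, unit noises
dt, T = 1e-3, 10.0

X, Xhat, Sigma = 1.0, 0.0, 1.0    # true state, estimate, error covariance
for _ in range(int(T / dt)):
    dB = np.sqrt(dt) * rng.standard_normal()
    dW = np.sqrt(dt) * rng.standard_normal()
    dZ = gamma * X * dt + dW                            # observation increment
    X += alpha * X * dt + dB                            # signal (1)
    K = gamma * Sigma                                   # Kalman gain
    Xhat += alpha * Xhat * dt + K * (dZ - gamma * Xhat * dt)      # update
    Sigma += (2 * alpha * Sigma + 1 - gamma**2 * Sigma**2) * dt   # Riccati
```

For these parameters the Riccati equation converges to the steady-state covariance solving 2αΣ + 1 − γ²Σ² = 0.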

Pretty Formulae in Mathematics: More often than not, these are simply stated

Euler's identity:    e^{iπ} = −1
Euler's formula:     v − e + f = 2
Pythagoras' theorem: x² + y² = z²

Kenneth Chang. What Makes an Equation Beautiful? The New York Times, October 24, 2004.

Filtering Problem, Nonlinear Model: Kushner-Stratonovich PDE

Signal & Observations:
    dX_t = a(X_t) dt + dB_t    (1)
    dZ_t = h(X_t) dt + dW_t    (2)

The posterior distribution p* is a solution of a stochastic PDE:

    dp* = L†(p*) dt + (1/σ_W²) (h − ĥ)(dZ_t − ĥ dt) p*

where
    ĥ = E[h(X_t) | Z_t] = ∫ h(x) p*(x, t) dx
    L†(p*) = −∂(p* a(x))/∂x + (1/2) ∂²p*/∂x²

No closed-form solution in general. Closure problem.

R. L. Stratonovich. Conditional Markov Processes. Theory Probab. Appl. (1960); H. J. Kushner. On the differential equations satisfied by conditional probability densities of Markov processes. SIAM J. Control (1964).
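The Kushner-Stratonovich equation can be integrated numerically on a grid. Below is a rough finite-difference sketch for an illustrative scalar model (a(x) = −x, h(x) = x, unit noise intensities), with Euler time stepping and renormalization at each step; this is a numerical illustration only, not a production solver:

```python
import numpy as np

# Finite-difference integration of the Kushner-Stratonovich SPDE for an
# illustrative model: a(x) = -x, h(x) = x, unit noise intensities.
rng = np.random.default_rng(3)
x = np.linspace(-6, 6, 241)
dx, dt, steps = x[1] - x[0], 1e-4, 2000

a, h = -x, x                       # drift and observation values on the grid
p = np.exp(-x**2 / 2)
p /= p.sum() * dx                  # initial density p_0 = N(0, 1)
X = 1.0                            # true state generating the observations

for _ in range(steps):
    dZ = X * dt + np.sqrt(dt) * rng.standard_normal()   # observation increment
    X += -X * dt + np.sqrt(dt) * rng.standard_normal()  # signal
    hhat = (h * p).sum() * dx
    # L-dagger(p) = -d/dx(a p) + (1/2) d^2 p / dx^2
    Ldag = (-np.gradient(a * p, dx)
            + 0.5 * np.gradient(np.gradient(p, dx), dx))
    p = p + Ldag * dt + (h - hhat) * (dZ - hhat * dt) * p
    p = np.clip(p, 0, None)        # guard against small negative values
    p /= p.sum() * dx              # renormalize

posterior_mean = (x * p).sum() * dx
```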

Particle Filter: An algorithm to solve the nonlinear filtering problem

Approximate the posterior in terms of particles:
    p*(x, t) ≈ (1/N) Σ_{i=1}^N δ_{X_t^i}(x)

Algorithm outline:
1. Initialization at time 0: X_0^i ~ p_0(·)
2. At each discrete time step:
   Importance sampling (Bayes update step)
   Resampling (for variance reduction)
     e.g. dZ_t = X_t dt + small noise

Innovation error, feedback? And most importantly, is this pretty?

J. E. Handschin and D. Q. Mayne. Monte-Carlo techniques to estimate conditional expectation in nonlinear filtering. Int. J. Control (1969); N. Gordon, D. Salmond, A. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc. F Radar Signal Process. (1993); J. Xiong. Particle approximation to filtering problems in continuous time. The Oxford Handbook of Nonlinear Filtering (2011); A. Budhiraja, L. Chen, C. Lee. A survey of numerical methods for nonlinear filtering problems. Physica D (2007).
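A minimal bootstrap particle filter along the lines of the outline above, for an illustrative scalar model (drift a(x) = −x, observation function h(x) = x; step sizes and particle counts are arbitrary choices):

```python
import numpy as np

# Bootstrap particle filter for dX = a(X)dt + dB, dZ = h(X)dt + dW,
# discretized with step dt: propagate, importance-sample, resample.
rng = np.random.default_rng(1)
a = lambda x: -x            # illustrative drift
h = lambda x: x             # illustrative observation function
dt, N, steps = 0.01, 500, 200

X = 1.0                                  # true (hidden) state
particles = rng.standard_normal(N)       # X_0^i ~ p_0 = N(0, 1)
for _ in range(steps):
    X += a(X) * dt + np.sqrt(dt) * rng.standard_normal()
    dZ = h(X) * dt + np.sqrt(dt) * rng.standard_normal()
    # propagate particles through the signal model (proposal = prior dynamics)
    particles += a(particles) * dt + np.sqrt(dt) * rng.standard_normal(N)
    # importance weights: likelihood of the observation increment dZ ~ N(h dt, dt)
    logw = -0.5 * (dZ - h(particles) * dt) ** 2 / dt
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # multinomial resampling (variance reduction against weight degeneracy)
    particles = particles[rng.choice(N, size=N, p=w)]

estimate = particles.mean()              # approximates E[X_t | Z_t]
```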

Feedback Particle Filter: A control-oriented approach

Signal & Observations:
    dX_t = a(X_t) dt + dB_t    (1)
    dZ_t = h(X_t) dt + dW_t    (2)

Controlled system (N particles):
    dX_t^i = a(X_t^i) dt + dB_t^i + dU_t^i,   i = 1, ..., N    (3)
where dU_t^i is the mean-field control and {B_t^i}_{i=1}^N are independent standard Wiener processes.

Variational approach:
1. Gradient flow construction: the nonlinear filter is shown to be a gradient flow (steepest descent)
2. Optimal transport: derivation of the feedback particle filter

Motivation: work of Huang, Caines and Malhamé on mean-field games (IEEE TAC 2007).
Related approaches: D. Crisan and J. Xiong. Approximate McKean-Vlasov representations for a class of SPDEs. Stochastics (2009); S. K. Mitter and N. J. Newton. A variational approach to nonlinear estimation. SIAM J. Control Optim. (2003); F. Daum and J. Huang. Generalized particle flow for nonlinear filters. Proc. SPIE (2010); S. Reich. A dynamical systems framework for intermittent data assimilation. BIT Numer. Math. (2011).

Update Step: How does the feedback particle filter implement Bayes' rule?

              Feedback particle filter                   Kalman filter
Observation:  dZ_t = h(X_t) dt + dW_t                    dZ_t = γ X_t dt + dW_t
Prediction:   dẐ_t^i = ½ (h(X_t^i) + ĥ) dt,              dẐ_t = γ X̂_t dt
              ĥ = (1/N) Σ_{i=1}^N h(X_t^i)
Innov. error: dI_t^i = dZ_t − dẐ_t^i                     dI_t = dZ_t − dẐ_t
              = dZ_t − ½ (h(X_t^i) + ĥ) dt               = dZ_t − γ X̂_t dt
Control:      dU_t^i = K(X_t^i) dI_t^i                   dU_t = K dI_t
Gain:         K is a solution of a linear BVP            K is the Kalman gain

Main Result: FPF is an exact algorithm (in the mean-field limit, N → ∞).

Yang, Mehta and Meyn. Feedback Particle Filter. IEEE TAC (2013).

Variance Reduction: Filtering for a linear model

Mean-square error:
    (1/T) ∫_0^T ((Σ_t^(N) − Σ_t) / Σ_t)² dt

[Figure: log-log plot of MSE versus N (number of particles), comparing the bootstrap particle filter (BPF) with the feedback particle filter (FPF).]

The Next Few Slides: Results 1 and 2

1. Gradient flow construction: the nonlinear filter is shown to be a gradient flow
2. Optimal transport: derivation of the feedback particle filter

Poisson's equation is central to both 1 and 2.

Details appear in: Laugesen, Mehta, Meyn and Raginsky. Poisson's equation in nonlinear filtering. SIAM J. Control Optim. (2015). Also see: Yang, Mehta and Meyn. Feedback particle filter. IEEE Trans. Automat. Control (2013); Yang, Laugesen, Mehta and Meyn. Multivariable feedback particle filter. Automatica (to appear).


Poisson's Equation: Review of various mathematical forms

Strong form:
    −Δφ(x) = h(x) − c   on domain Ω   (Δ = ∇² is the Laplacian)

Weak form:
    ∫ ∇φ · ∇ψ dx = ∫ (h − c) ψ dx   for all test functions ψ ∈ H¹

Generalization:
    ∫ ∇φ · ∇ψ p(x) dx = ∫ (h − c) ψ p(x) dx   ∀ ψ ∈ H¹
or:
    E_p[∇φ · ∇ψ] = E_p[(h − ĥ) ψ]   ∀ ψ ∈ H¹,   where ĥ = E_p[h] = ∫ h(x) p(x) dx

Strong form:
    −∇ · (p(x) ∇φ(x)) = (h(x) − ĥ) p(x)

Poisson's Equation in Physics: This equation is fundamental to many fields!

Electric potential:       −(1/4π) ∇²φ = ρ     (charge density)
Gravitational potential:  (1/4πG) ∇²φ = ρ     (mass density)
Temperature:              −κ ∇²φ = q          (heat-flux density)

Walter Strauss. Partial Differential Equations. Wiley (1992).

Gradient Flow: An elementary example

Time-stepping procedure:
    x(t + Δt) = argmin_y  ½ |y − x(t)|² + Δt h(y)

Calculus 101:
    x(t + Δt) = x(t) − Δt ∇h(x(t + Δt))

Continuous limit:
    dx/dt = −∇h(x)
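The time-stepping procedure can be carried out exactly whenever the proximal step has a closed form. An illustrative sketch with the assumed choice h(y) = y²/2, for which the minimizer of the step is y = x/(1 + Δt), i.e. implicit Euler for dx/dt = −x:

```python
# Time-stepping (proximal / implicit Euler) for the gradient flow
# dx/dt = -h'(x), with the illustrative choice h(y) = y^2 / 2. The minimizer
# of (1/2)(y - x)^2 + dt*h(y) is then y = x / (1 + dt) in closed form.

def prox_step(x, dt):
    """One step of x(t+dt) = argmin_y (1/2)(y - x)^2 + dt*h(y), h(y) = y^2/2."""
    return x / (1.0 + dt)

x, dt, T = 1.0, 1e-3, 1.0
for _ in range(int(T / dt)):
    x = prox_step(x, dt)

# The continuous limit is dx/dt = -x, so x(T) should be close to exp(-T)
```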

Gradient Flow: An elementary example

Time-stepping procedure:
    x(t + Δt) = argmin_y  ½ |y − x(t)|²  +  Δt h(y)
                 [metric]                   [min]

Calculus of variations:
    ⟨x(t + Δt), ψ⟩ = ⟨x(t), ψ⟩ − Δt ⟨∇h(x(t + Δt)), ψ⟩

Continuous limit:
    ⟨x(t), ψ⟩ = ⟨x(0), ψ⟩ − ∫_0^t ⟨∇h(x(s)), ψ⟩ ds

Gradient Flow Interpretation of the Heat Equation: 1998 paper of Jordan, Kinderlehrer and Otto

Time-stepping procedure:
    p_{t+Δt} = argmin_{ρ ∈ P}  ½ W₂²(ρ, p_t) + Δt ∫ ρ(x) ln ρ(x) dx

E-L equation: ...
Continuous limit:
    ∂p/∂t = Δp

Jordan, Kinderlehrer and Otto. The variational formulation of the Fokker-Planck equation. SIAM J. Math. Anal., 29 (1998).
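The continuous limit ∂p/∂t = Δp can be checked with a simple explicit finite-difference scheme: for a Gaussian initial density the variance should grow linearly, var(t) = var(0) + 2t. A sketch (grid and step sizes are illustrative):

```python
import numpy as np

# Explicit finite-difference integration of the heat equation dp/dt = p_xx,
# the continuous limit of the JKO time-stepping scheme.
x = np.linspace(-10, 10, 401)
dx, dt, T = x[1] - x[0], 1e-4, 0.5     # dt/dx^2 = 0.04, stable for explicit Euler

p = np.exp(-x**2 / 2)
p /= p.sum() * dx                      # initial density N(0, 1)
for _ in range(int(T / dt)):
    lap = np.zeros_like(p)
    lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2   # discrete Laplacian
    p = p + dt * lap

variance = (x**2 * p).sum() * dx - ((x * p).sum() * dx) ** 2
# Expect variance close to 1 + 2*T = 2.0
```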

Gradient Flow for the Nonlinear Filter: Construction via a time-stepping procedure

Signal & Observations:
    dX_t = 0,   dZ_t = h(X_t) dt + dW_t

Time-stepping procedure:
    p_{t+Δt} = argmin_{ρ ∈ P}  D(ρ ∥ p_t) + (Δt/2) ∫ ρ(x) (Y_t − h(x))² dx,

where Y_t := (Z_{t+Δt} − Z_t)/Δt.

Laugesen, Mehta, Meyn and Raginsky. Poisson's Equation in Nonlinear Filtering. SIAM J. Control Optim. (2015).

Gradient Flow Interpretation of the Nonlinear Filter: Construction via a time-stepping procedure

Time-stepping procedure:
    p_{t+Δt} = argmin_{ρ ∈ P}  D(ρ ∥ p_t) + (Δt/2) ∫ ρ(x) (Y_t − h(x))² dx

E-L equation: ...
Continuous limit:
    dp* = (h − ĥ)(dZ_t − ĥ dt) p*

S. K. Mitter and N. J. Newton. A variational approach to nonlinear estimation. SIAM J. Control Optim. (2003).

Gradient Flow Interpretation of the Nonlinear Filter: Poisson's equation?

E-L equation:
    E_{p_{t+Δt}}[ψ] = E_{p_t}[ψ] + E_{p_{t+Δt}}[(ΔZ_t − h Δt) ∇h · ∇ς]

Poisson's equation:
    −∇ · (p_t(x) ∇ς(x)) = (ψ(x) − ψ̂_t) p_t(x)

Assumption: Spectral gap. For some λ₀ > 0, and for all functions ψ ∈ H¹ with E_{p₀*}[ψ] = 0,
    ∫ |ψ(x)|² p₀*(x) dx ≤ (1/λ₀) ∫ |∇ψ(x)|² p₀*(x) dx.    [PI(λ₀)]

Gradient Flow Interpretation of the Nonlinear Filter, Result 1: Derivation of the nonlinear filter

Result 1 (under certain technical conditions): The density p* is a weak solution of the nonlinear filter with prior p₀*. That is, for any test function ψ ∈ C_c(R^d),

    ⟨ψ, p*_t⟩ = ⟨ψ, p*_0⟩ + ∫_0^t ⟨(h − ĥ_s)(dZ_s − ĥ_s ds) ψ, p*_s⟩,

where ⟨ψ, p*_t⟩ := ∫ ψ(x) p*(x, t) dx.

Optimal Transport, Result 2: Derivation of the feedback particle filter

Optimization problem:
    J^(N)(s) := min_s ( I_{t_n}(s_{t_n} # (p*_{t_n})) − (Δt_n/2) Y²_{t_n} ),

where
    I_t(ρ) := D(ρ ∥ p*_t) + (Δt/2) ∫ ρ(x) (Y_t − h(x))² dx

s_t #: optimal transport (pushforward of the density by the map s_t)
s_t:   dX^i_t = u(X^i_t, t) dt + K(X^i_t, t) dZ_t

80-82 Feedback Particle Filter Algorithm summary

Signal: dX_t = a(X_t) dt + dB_t
Observations: dZ_t = h(X_t) dt + dW_t

Problem: Approximate the posterior distribution p(x, t).

FPF Algo.: dX^i_t = a(X^i_t) dt + dB^i_t + K(X^i_t, t) ( dZ_t − (1/2)(h(X^i_t) + ĥ_t) dt )

(the gain term K(X^i_t, t)(·) is the FPF control)

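The FPF update on the slide above is straightforward to discretize. Below is a minimal sketch of one Euler-Maruyama step for a scalar state; the drift `a`, observation function `h`, gain function `K`, and particle array are user-supplied placeholders, and the gain is treated as a known function here (its computation is the subject of the boundary value problem discussed next).

```python
import numpy as np

def fpf_step(particles, dz, a, h, K, dt, rng):
    """One Euler-Maruyama step of the feedback particle filter:

        dX^i = a(X^i) dt + dB^i + K(X^i) (dZ - (h(X^i) + h_hat)/2 dt)

    `particles` is a 1-D array of scalar particle states; `a`, `h`, `K`
    are vectorized callables supplied by the user.
    """
    h_vals = h(particles)
    h_hat = h_vals.mean()                   # particle estimate of E[h(X_t) | Z]
    dB = rng.normal(0.0, np.sqrt(dt), size=particles.shape)
    innovation = dz - 0.5 * (h_vals + h_hat) * dt
    return particles + a(particles) * dt + dB + K(particles) * innovation
```

Note that, unlike a conventional particle filter, there are no importance weights and no resampling: every particle is moved by feedback control.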
83-88 Boundary Value Problem Euler-Lagrange equation for the variational problem

Multi-dimensional boundary value problem, solved at each time-step:

Gain Fn.: K = ∇φ,   where ∇·(p ∇φ) = −(h − ĥ) p

Linear case: (figure)
Nonlinear case: (figure)

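Solving this boundary value problem exactly at every time-step is expensive. A common shortcut in the FPF literature (not shown on this slide) is the constant-gain approximation: averaging the exact gain over the particles and using integration by parts against the E-L equation gives a closed-form particle estimate. A sketch for a scalar state:

```python
import numpy as np

def constant_gain(particles, h):
    """Constant-gain approximation of the Poisson-equation gain.

    Instead of solving div(p grad(phi)) = -(h - h_hat) p at each step,
    average the exact gain over the particles; integration by parts
    gives the estimate K ~ (1/N) sum_i (h(X^i) - h_hat) X^i for a
    scalar state. For linear h and Gaussian particles this recovers
    the Kalman gain (unit observation noise).
    """
    h_vals = h(particles)
    h_hat = h_vals.mean()
    return np.mean((h_vals - h_hat) * particles)
```

The approximation trades accuracy for speed: the gain becomes a single number shared by all particles, which is exact in the linear-Gaussian case and a projection onto constant functions otherwise.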
89-91 Summary Kalman Filter vs. Feedback Particle Filter

Kalman Filter
  Innovation Error: dI_t = dZ_t − h(X̂_t) dt
  Gain Function: K = Kalman gain

Feedback Particle Filter
  Innovation Error: dI^i_t = dZ_t − (1/2)(h(X^i_t) + ĥ_t) dt
  Gain Function: K is the solution of a linear BVP

Yang, Mehta and Meyn. Feedback Particle Filter. IEEE TAC (2013).

92-95 Coupled Oscillators Kuramoto model

dθ^i_t = ( ω_i + (κ/N) Σ_{j=1}^N sin(θ^j_t − θ^i_t) ) dt + σ dξ^i_t,   i = 1, ..., N

ω_i taken from a distribution g(ω) over [1 − γ, 1 + γ]
γ measures the heterogeneity of the population
κ measures the strength of coupling

(figure: phase diagram in the (γ, κ) plane, with regions of synchrony and incoherence)

Y. Kuramoto. Self-entrainment of a population of coupled nonlinear oscillators (1975); Strogatz and Mirollo. Stability of incoherence in a population of coupled oscillators. J. Stat. Phys. (1991); N. Kopell and G. B. Ermentrout. Symmetry and phase-locking in chains of weakly coupled oscillators. Commun. Pure Appl. Math. (1986)

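The noisy Kuramoto dynamics above are easy to simulate directly. The sketch below uses an Euler-Maruyama discretization with frequencies drawn uniformly from [1 − γ, 1 + γ]; all parameter values are illustrative, not taken from the slide.

```python
import numpy as np

def simulate_kuramoto(n=50, kappa=1.2, gamma=0.1, sigma=0.1,
                      dt=0.01, steps=2000, seed=0):
    """Euler-Maruyama simulation of the noisy Kuramoto model:

        dtheta^i = (omega_i + (kappa/n) sum_j sin(theta^j - theta^i)) dt
                   + sigma dxi^i

    Returns the final phases and the order parameter
    r = |mean(exp(i theta))| (r ~ 1: synchrony, r ~ 0: incoherence).
    """
    rng = np.random.default_rng(seed)
    omega = rng.uniform(1 - gamma, 1 + gamma, n)    # heterogeneous frequencies
    theta = rng.uniform(0.0, 2 * np.pi, n)
    for _ in range(steps):
        # pairwise coupling: M[i, j] = theta_j - theta_i, summed over j
        coupling = (kappa / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += (omega + coupling) * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
    r = np.abs(np.exp(1j * theta).mean())
    return theta, r
```

Sweeping κ and γ with this routine reproduces the synchrony/incoherence boundary sketched in the slide's phase diagram.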
96-99 Hodgkin-Huxley type Neuron model Normal form reduction

C dV/dt = −g_T m_∞^2(V) h (V − E_T) − g_h r (V − E_h) − ...
dh/dt = (h_∞(V) − h) / τ_h(V)
dr/dt = (r_∞(V) − r) / τ_r(V)

(figures: voltage trace of a neural spike train over time; limit cycle in the (v, h, r) state space)

Normal form reduction: θ̇_i = ω_i + u_i Φ(θ_i)

J. Guckenheimer. Isochrons and phaseless sets. J. Math. Biol. (1975); Brown, Moehlis and Holmes. On the phase reduction and response dynamics of neural oscillator populations. Neural Computation (2004); E. M. Izhikevich. Dynamical Systems in Neuroscience. Chapter 10. The MIT Press (2006).

100-107 Gait Cycle Signal model

Stance phase / Swing phase (figure: phases of the gait cycle)

Model (Noisy oscillator): dθ_t = ω_0 dt + noise,   where ω_0 is the natural frequency

108 Simulation Results Estimation of the gait cycle

(movie: noisy measurements, feedback particle filter dynamics, and the resulting gait-cycle estimate)

Tilton, Hsiao-Wecksler and Mehta. Filtering with rhythms: Application to estimation of gait cycle. American Control Conference (2012).

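For the gait problem the state is an angle, so the FPF runs on the circle. The sketch below assumes, purely for illustration, an observation model dZ = cos(θ) dt + dW; for this h and a uniform particle density the circular Poisson equation has the closed-form solution K(θ) = −sin(θ), which is used here as a fixed gain approximation.

```python
import numpy as np

def fpf_phase_step(theta, dz, omega0, sigma, dt, rng):
    """One feedback particle filter step on the circle.

    Signal model (from the slide): dtheta = omega0 dt + sigma dB.
    Assumed observation model:     dZ = cos(theta) dt + dW.
    For h = cos and a uniform density, phi'' = -(h - h_hat) gives
    phi = cos(theta), so the gain K = phi' = -sin(theta); it is used
    here as a fixed approximation.
    """
    h_vals = np.cos(theta)
    h_hat = h_vals.mean()
    gain = -np.sin(theta)
    dB = rng.normal(0.0, np.sqrt(dt), size=theta.shape)
    theta = theta + omega0 * dt + sigma * dB \
        + gain * (dz - 0.5 * (h_vals + h_hat) * dt)
    return np.mod(theta, 2.0 * np.pi)
```

Run in a loop against synthetic increments dZ, an initially uniform particle population concentrates around the true phase and then tracks it, which is the behavior shown in the gait-cycle movie.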
109-113 Geometric Control Locomotion Systems

3-body system — shape variables: x_1, x_2; group variable: ψ

Dynamics: ẍ = f(x, ẋ, τ),   τ: torque input
Reconstruction equation: ψ̇ = a_1(x) ẋ_1 + a_2(x) ẋ_2

P. S. Krishnaprasad. Geometric phases and optimal reconfiguration for multibody systems. UM Tech. Report (1990); P. S. Krishnaprasad. Motion control and coupled oscillators. Procs. of Symp. on Motion, Control & Geometry. Natl. Acad. Sciences (1995); R. Brockett. Pattern generation and the control of nonlinear systems. IEEE TAC (2003); S. Kelly and R. Murray. Geometric phases and robotic locomotion. J. Robotic Systems (1995); R. Murray and S. Sastry. Nonholonomic motion planning: Steering using sinusoids. IEEE TAC (1993); J. Blair and T. Iwasaki. Optimal gaits for mechanical rectifier systems. IEEE TAC (2011).

114-117 Control of Locomotion Gaits 2-body System

ψ̇ = f(x) ẋ = f(θ, u)

min_{u_[0,T]} E[ (ψ(T) − ψ(0)) + (1/(2ε)) ∫_0^T |u(t)|^2 dt ],

where ψ(T) − ψ(0) is the geometric phase.

A. Taghvaei, S. Hutchinson and P. G. Mehta. A coupled-oscillators-based control architecture for locomotory gaits. IEEE CDC (2014).

118 2-body System, Simulation Result

(movie: observation y(t) with particles and true phase x(t); joint angle q_1(t) under open-loop vs. closed-loop control)

A. Taghvaei, S. Hutchinson and P. G. Mehta. A coupled-oscillators-based control architecture for locomotory gaits. IEEE CDC (2014).

119 Acknowledgements (students' names in red on the slide)

1. Feedback particle filter: Tao Yang, Rick Laugesen, Sean Meyn, Max Raginsky
2. Coupled oscillators for estimation: Adam Tilton, Shane Ghiotto, Liz Hsiao-Wecksler
3. Coupled oscillators for control: Amirhossein Taghvaei, Seth Hutchinson

Research supported by NSF.


More information

Sequential Monte Carlo Samplers for Applications in High Dimensions

Sequential Monte Carlo Samplers for Applications in High Dimensions Sequential Monte Carlo Samplers for Applications in High Dimensions Alexandros Beskos National University of Singapore KAUST, 26th February 2014 Joint work with: Dan Crisan, Ajay Jasra, Nik Kantas, Alex

More information

Level-Set Minimization of Potential Controlled Hadwiger Valuations for Molecular Solvation

Level-Set Minimization of Potential Controlled Hadwiger Valuations for Molecular Solvation Level-Set Minimization of Potential Controlled Hadwiger Valuations for Molecular Solvation Bo Li Dept. of Math & NSF Center for Theoretical Biological Physics UC San Diego, USA Collaborators: Li-Tien Cheng

More information

Bayesian Inverse problem, Data assimilation and Localization

Bayesian Inverse problem, Data assimilation and Localization Bayesian Inverse problem, Data assimilation and Localization Xin T Tong National University of Singapore ICIP, Singapore 2018 X.Tong Localization 1 / 37 Content What is Bayesian inverse problem? What is

More information

Data assimilation in high dimensions

Data assimilation in high dimensions Data assimilation in high dimensions David Kelly Kody Law Andy Majda Andrew Stuart Xin Tong Courant Institute New York University New York NY www.dtbkelly.com February 3, 2016 DPMMS, University of Cambridge

More information

Lecture 12: Detailed balance and Eigenfunction methods

Lecture 12: Detailed balance and Eigenfunction methods Miranda Holmes-Cerfon Applied Stochastic Analysis, Spring 2015 Lecture 12: Detailed balance and Eigenfunction methods Readings Recommended: Pavliotis [2014] 4.5-4.7 (eigenfunction methods and reversibility),

More information

Stochastic Gradient Descent in Continuous Time

Stochastic Gradient Descent in Continuous Time Stochastic Gradient Descent in Continuous Time Justin Sirignano University of Illinois at Urbana Champaign with Konstantinos Spiliopoulos (Boston University) 1 / 27 We consider a diffusion X t X = R m

More information

Stat 451 Lecture Notes Numerical Integration

Stat 451 Lecture Notes Numerical Integration Stat 451 Lecture Notes 03 12 Numerical Integration Ryan Martin UIC www.math.uic.edu/~rgmartin 1 Based on Chapter 5 in Givens & Hoeting, and Chapters 4 & 18 of Lange 2 Updated: February 11, 2016 1 / 29

More information

Closed-Loop Impulse Control of Oscillating Systems

Closed-Loop Impulse Control of Oscillating Systems Closed-Loop Impulse Control of Oscillating Systems A. N. Daryin and A. B. Kurzhanski Moscow State (Lomonosov) University Faculty of Computational Mathematics and Cybernetics Periodic Control Systems, 2007

More information

A Comparison of the EKF, SPKF, and the Bayes Filter for Landmark-Based Localization

A Comparison of the EKF, SPKF, and the Bayes Filter for Landmark-Based Localization A Comparison of the EKF, SPKF, and the Bayes Filter for Landmark-Based Localization and Timothy D. Barfoot CRV 2 Outline Background Objective Experimental Setup Results Discussion Conclusion 2 Outline

More information

Gaussian processes for inference in stochastic differential equations

Gaussian processes for inference in stochastic differential equations Gaussian processes for inference in stochastic differential equations Manfred Opper, AI group, TU Berlin November 6, 2017 Manfred Opper, AI group, TU Berlin (TU Berlin) inference in SDE November 6, 2017

More information

Spatial Statistics with Image Analysis. Outline. A Statistical Approach. Johan Lindström 1. Lund October 6, 2016

Spatial Statistics with Image Analysis. Outline. A Statistical Approach. Johan Lindström 1. Lund October 6, 2016 Spatial Statistics Spatial Examples More Spatial Statistics with Image Analysis Johan Lindström 1 1 Mathematical Statistics Centre for Mathematical Sciences Lund University Lund October 6, 2016 Johan Lindström

More information

Feedback Particle Filter with Data-Driven Gain-Function Approximation

Feedback Particle Filter with Data-Driven Gain-Function Approximation MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Feedback Particle Filter with Data-Driven Gain-Function Approximation Berntorp, K.; Grover, P. TR2018-034 February 2018 Abstract This paper

More information

Miscellaneous. Regarding reading materials. Again, ask questions (if you have) and ask them earlier

Miscellaneous. Regarding reading materials. Again, ask questions (if you have) and ask them earlier Miscellaneous Regarding reading materials Reading materials will be provided as needed If no assigned reading, it means I think the material from class is sufficient Should be enough for you to do your

More information

The Game of Twenty Questions with noisy answers. Applications to Fast face detection, micro-surgical tool tracking and electron microscopy

The Game of Twenty Questions with noisy answers. Applications to Fast face detection, micro-surgical tool tracking and electron microscopy The Game of Twenty Questions with noisy answers. Applications to Fast face detection, micro-surgical tool tracking and electron microscopy Graduate Summer School: Computer Vision July 22 - August 9, 2013

More information

ML Estimation of Process Noise Variance in Dynamic Systems

ML Estimation of Process Noise Variance in Dynamic Systems ML Estimation of Process Noise Variance in Dynamic Systems Patrik Axelsson, Umut Orguner, Fredrik Gustafsson and Mikael Norrlöf {axelsson,umut,fredrik,mino} @isy.liu.se Division of Automatic Control Department

More information

Imprecise Filtering for Spacecraft Navigation

Imprecise Filtering for Spacecraft Navigation Imprecise Filtering for Spacecraft Navigation Tathagata Basu Cristian Greco Thomas Krak Durham University Strathclyde University Ghent University Filtering for Spacecraft Navigation The General Problem

More information

Auxiliary Particle Methods

Auxiliary Particle Methods Auxiliary Particle Methods Perspectives & Applications Adam M. Johansen 1 adam.johansen@bristol.ac.uk Oxford University Man Institute 29th May 2008 1 Collaborators include: Arnaud Doucet, Nick Whiteley

More information

Latent state estimation using control theory

Latent state estimation using control theory Latent state estimation using control theory Bert Kappen SNN Donders Institute, Radboud University, Nijmegen Gatsby Unit, UCL London August 3, 7 with Hans Christian Ruiz Bert Kappen Smoothing problem Given

More information

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 8: Importance Sampling

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 8: Importance Sampling Winter 2019 Math 106 Topics in Applied Mathematics Data-driven Uncertainty Quantification Yoonsang Lee (yoonsang.lee@dartmouth.edu) Lecture 8: Importance Sampling 8.1 Importance Sampling Importance sampling

More information

Nonlinear Diffusion. Journal Club Presentation. Xiaowei Zhou

Nonlinear Diffusion. Journal Club Presentation. Xiaowei Zhou 1 / 41 Journal Club Presentation Xiaowei Zhou Department of Electronic and Computer Engineering The Hong Kong University of Science and Technology 2009-12-11 2 / 41 Outline 1 Motivation Diffusion process

More information

Smoothers: Types and Benchmarks

Smoothers: Types and Benchmarks Smoothers: Types and Benchmarks Patrick N. Raanes Oxford University, NERSC 8th International EnKF Workshop May 27, 2013 Chris Farmer, Irene Moroz Laurent Bertino NERSC Geir Evensen Abstract Talk builds

More information

A conjecture on sustained oscillations for a closed-loop heat equation

A conjecture on sustained oscillations for a closed-loop heat equation A conjecture on sustained oscillations for a closed-loop heat equation C.I. Byrnes, D.S. Gilliam Abstract The conjecture in this paper represents an initial step aimed toward understanding and shaping

More information

Sequential Monte Carlo Methods (for DSGE Models)

Sequential Monte Carlo Methods (for DSGE Models) Sequential Monte Carlo Methods (for DSGE Models) Frank Schorfheide University of Pennsylvania, PIER, CEPR, and NBER October 23, 2017 Some References These lectures use material from our joint work: Tempered

More information

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Dongbin Xiu Department of Mathematics, Purdue University Support: AFOSR FA955-8-1-353 (Computational Math) SF CAREER DMS-64535

More information

What do we know about EnKF?

What do we know about EnKF? What do we know about EnKF? David Kelly Kody Law Andrew Stuart Andrew Majda Xin Tong Courant Institute New York University New York, NY April 10, 2015 CAOS seminar, Courant. David Kelly (NYU) EnKF April

More information

Geometric projection of stochastic differential equations

Geometric projection of stochastic differential equations Geometric projection of stochastic differential equations John Armstrong (King s College London) Damiano Brigo (Imperial) August 9, 2018 Idea: Projection Idea: Projection Projection gives a method of systematically

More information

2012 NCTS Workshop on Dynamical Systems

2012 NCTS Workshop on Dynamical Systems Barbara Gentz gentz@math.uni-bielefeld.de http://www.math.uni-bielefeld.de/ gentz 2012 NCTS Workshop on Dynamical Systems National Center for Theoretical Sciences, National Tsing-Hua University Hsinchu,

More information

Sampling the posterior: An approach to non-gaussian data assimilation

Sampling the posterior: An approach to non-gaussian data assimilation Physica D 230 (2007) 50 64 www.elsevier.com/locate/physd Sampling the posterior: An approach to non-gaussian data assimilation A. Apte a, M. Hairer b, A.M. Stuart b,, J. Voss b a Department of Mathematics,

More information

Inverse problems and uncertainty quantification in remote sensing

Inverse problems and uncertainty quantification in remote sensing 1 / 38 Inverse problems and uncertainty quantification in remote sensing Johanna Tamminen Finnish Meterological Institute johanna.tamminen@fmi.fi ESA Earth Observation Summer School on Earth System Monitoring

More information

Stochastic differential equations in neuroscience

Stochastic differential equations in neuroscience Stochastic differential equations in neuroscience Nils Berglund MAPMO, Orléans (CNRS, UMR 6628) http://www.univ-orleans.fr/mapmo/membres/berglund/ Barbara Gentz, Universität Bielefeld Damien Landon, MAPMO-Orléans

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation

More information

A random perturbation approach to some stochastic approximation algorithms in optimization.

A random perturbation approach to some stochastic approximation algorithms in optimization. A random perturbation approach to some stochastic approximation algorithms in optimization. Wenqing Hu. 1 (Presentation based on joint works with Chris Junchi Li 2, Weijie Su 3, Haoyi Xiong 4.) 1. Department

More information

The Theory of Weakly Coupled Oscillators

The Theory of Weakly Coupled Oscillators The Theory of Weakly Coupled Oscillators Michael A. Schwemmer and Timothy J. Lewis Department of Mathematics, One Shields Ave, University of California Davis, CA 95616 1 1 Introduction 2 3 4 5 6 7 8 9

More information

Neuroscience applications: isochrons and isostables. Alexandre Mauroy (joint work with I. Mezic)

Neuroscience applications: isochrons and isostables. Alexandre Mauroy (joint work with I. Mezic) Neuroscience applications: isochrons and isostables Alexandre Mauroy (joint work with I. Mezic) Outline Isochrons and phase reduction of neurons Koopman operator and isochrons Isostables of excitable systems

More information

An Brief Overview of Particle Filtering

An Brief Overview of Particle Filtering 1 An Brief Overview of Particle Filtering Adam M. Johansen a.m.johansen@warwick.ac.uk www2.warwick.ac.uk/fac/sci/statistics/staff/academic/johansen/talks/ May 11th, 2010 Warwick University Centre for Systems

More information

Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm

Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm Qiang Liu and Dilin Wang NIPS 2016 Discussion by Yunchen Pu March 17, 2017 March 17, 2017 1 / 8 Introduction Let x R d

More information

Fundamentals of Data Assimilation

Fundamentals of Data Assimilation National Center for Atmospheric Research, Boulder, CO USA GSI Data Assimilation Tutorial - June 28-30, 2010 Acknowledgments and References WRFDA Overview (WRF Tutorial Lectures, H. Huang and D. Barker)

More information

The Kalman Filter ImPr Talk

The Kalman Filter ImPr Talk The Kalman Filter ImPr Talk Ged Ridgway Centre for Medical Image Computing November, 2006 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman

More information

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 1: Introduction

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 1: Introduction Winter 2019 Math 106 Topics in Applied Mathematics Data-driven Uncertainty Quantification Yoonsang Lee (yoonsang.lee@dartmouth.edu) Lecture 1: Introduction 19 Winter M106 Class: MWF 12:50-1:55 pm @ 200

More information

The Hierarchical Particle Filter

The Hierarchical Particle Filter and Arnaud Doucet http://go.warwick.ac.uk/amjohansen/talks MCMSki V Lenzerheide 7th January 2016 Context & Outline Filtering in State-Space Models: SIR Particle Filters [GSS93] Block-Sampling Particle

More information

A new iterated filtering algorithm

A new iterated filtering algorithm A new iterated filtering algorithm Edward Ionides University of Michigan, Ann Arbor ionides@umich.edu Statistics and Nonlinear Dynamics in Biology and Medicine Thursday July 31, 2014 Overview 1 Introduction

More information

Monte Carlo Approximation of Monte Carlo Filters

Monte Carlo Approximation of Monte Carlo Filters Monte Carlo Approximation of Monte Carlo Filters Adam M. Johansen et al. Collaborators Include: Arnaud Doucet, Axel Finke, Anthony Lee, Nick Whiteley 7th January 2014 Context & Outline Filtering in State-Space

More information

Alexei F. Cheviakov. University of Saskatchewan, Saskatoon, Canada. INPL seminar June 09, 2011

Alexei F. Cheviakov. University of Saskatchewan, Saskatoon, Canada. INPL seminar June 09, 2011 Direct Method of Construction of Conservation Laws for Nonlinear Differential Equations, its Relation with Noether s Theorem, Applications, and Symbolic Software Alexei F. Cheviakov University of Saskatchewan,

More information

GSHMC: An efficient Markov chain Monte Carlo sampling method. Sebastian Reich in collaboration with Elena Akhmatskaya (Fujitsu Laboratories Europe)

GSHMC: An efficient Markov chain Monte Carlo sampling method. Sebastian Reich in collaboration with Elena Akhmatskaya (Fujitsu Laboratories Europe) GSHMC: An efficient Markov chain Monte Carlo sampling method Sebastian Reich in collaboration with Elena Akhmatskaya (Fujitsu Laboratories Europe) 1. Motivation In the first lecture, we started from a

More information

State Estimation using Moving Horizon Estimation and Particle Filtering

State Estimation using Moving Horizon Estimation and Particle Filtering State Estimation using Moving Horizon Estimation and Particle Filtering James B. Rawlings Department of Chemical and Biological Engineering UW Math Probability Seminar Spring 2009 Rawlings MHE & PF 1 /

More information

Practical Bayesian Optimization of Machine Learning. Learning Algorithms

Practical Bayesian Optimization of Machine Learning. Learning Algorithms Practical Bayesian Optimization of Machine Learning Algorithms CS 294 University of California, Berkeley Tuesday, April 20, 2016 Motivation Machine Learning Algorithms (MLA s) have hyperparameters that

More information

Stochastic Processes. M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno

Stochastic Processes. M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno Stochastic Processes M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno 1 Outline Stochastic (random) processes. Autocorrelation. Crosscorrelation. Spectral density function.

More information