Particle Filters in High Dimensions


1 Particle Filters in High Dimensions
Dan Crisan, Imperial College London
Workshop "Simulation and probability: recent trends"
The Henri Lebesgue Center for Mathematics, 5-8 June 2018, Rennes
Dan Crisan (Imperial College London) Particle Filters in High Dimensions 7-8 June / 55

2 Part 1: Theoretical Considerations
- Stochastic filtering
- Particle filters / sequential Monte Carlo methods
- Convergence result
- Final remarks

References:
- DC, Particle Filters: A Theoretical Perspective, in Sequential Monte Carlo Methods in Practice.
- DC, A. Doucet, A survey of convergence results on particle filtering methods for practitioners, IEEE Transactions on Signal Processing.
- A. Doucet, A. M. Johansen, A tutorial on particle filtering and smoothing: fifteen years later, in The Oxford Handbook of Nonlinear Filtering.
- P. Del Moral, Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications, Springer.
- A. Bain, DC, Fundamentals of Stochastic Filtering, Springer.
- DC, B. Rozovskii (eds.), The Oxford Handbook of Nonlinear Filtering, Oxford University Press.

3 What is stochastic filtering?
Stochastic filtering: the process of using partial observations and a stochastic model to make inferences about an evolving dynamical system.
- $X$: the signal process (the hidden component)
- $Y$: the observation process (the data)

4 What is stochastic filtering?
The filtering problem: find the conditional distribution of the signal $X_t$ given $\mathcal{Y}_t = \sigma(Y_s,\ s \in [0,t])$, i.e.
$$\pi_t(A) = P(X_t \in A \mid \mathcal{Y}_t), \quad t \ge 0,\ A \in \mathcal{B}(\mathbb{R}^d).$$
Discrete framework: $\{X_t, Y_t\}_{t \ge 0}$ is a Markov process.
The signal process $\{X_t\}_{t \ge 0}$ is a Markov chain, $X_0 \sim \pi_0(dx_0)$,
$$P(X_t \in dx_t \mid X_{t-1} = x_{t-1}) = K_t(x_{t-1}, dx_t) = f_t(x_t \mid x_{t-1})\,dx_t.$$
Example: $X_t = b(X_{t-1}) + \sigma(X_{t-1})\,B_t$, with $B_t \sim N(0,1)$ i.i.d.
The observation process satisfies
$$P(Y_t \in dy_t \mid X_{[0,t]} = x_{[0,t]},\ Y_{[0,t-1]} = y_{[0,t-1]}) = P(Y_t \in dy_t \mid X_t = x_t) = g_t(y_t \mid x_t)\,dy_t,$$
where $X_{[0,t]} = (X_0, \dots, X_t)$ and $x_{[0,t]} = (x_0, \dots, x_t)$.
Example: $Y_t = h(X_t) + V_t$, with $V_t \sim N(0,1)$ i.i.d.

5 What is stochastic filtering?
Notation:
- posterior measure: the conditional distribution of the signal $X_t$ given $\mathcal{Y}_t$,
  $\pi_t(A) = P(X_t \in A \mid \mathcal{Y}_t)$, $t \ge 0$, $A \in \mathcal{B}(\mathbb{R}^d)$.
- predictive measure: the conditional distribution of the signal $X_t$ given $\mathcal{Y}_{t-1}$,
  $p_t(A) = P(X_t \in A \mid \mathcal{Y}_{t-1})$, $t \ge 0$, $A \in \mathcal{B}(\mathbb{R}^d)$.
- If $\mu$ is a measure and $f$ is a function, then $\mu(f) \triangleq \int f(x)\,\mu(dx)$.
- If $f$ is a function and $k$ is a kernel, then $kf(x) \triangleq \int f(y)\,k(x, dy)$.
- If $\mu$ is a measure and $k$ is a kernel, then $\mu k(A) \triangleq \int \mu(dx)\,k(x, A)$.
Bayes recursion:
- Prediction: $p_t = \pi_{t-1} K_t$
- Updating: $\pi_t = g_t \star p_t$
In other words, $\dfrac{d\pi_t}{dp_t} = C_t^{-1} g_t$, where $C_t = \int_{\mathbb{R}^d} g_t(y_t, x_t)\,p_t(dx_t)$.
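The prediction-updating recursion above can be checked numerically. A minimal illustrative sketch (not from the slides) on a finite state space, where the kernel $K_t$ is a transition matrix and $g_t$ a likelihood vector:

```python
import numpy as np

def bayes_recursion(pi_prev, K, g_t):
    """One step of the filtering recursion on a finite state space.

    pi_prev : posterior pi_{t-1} as a probability vector
    K       : transition matrix, K[i, j] = P(X_t = j | X_{t-1} = i)
    g_t     : likelihood vector, g_t[j] = g_t(y_t | x_j)
    """
    p_t = pi_prev @ K          # prediction: p_t = pi_{t-1} K_t
    unnorm = g_t * p_t         # updating: multiply by the likelihood
    C_t = unnorm.sum()         # normalising constant C_t
    return unnorm / C_t, p_t   # posterior pi_t and predictive p_t
```

Running one step with a two-state chain shows the posterior shifting mass toward the state favoured by the likelihood, while both outputs remain probability vectors.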

6 Particle filters
1. Class of approximations: particle filters / sequential Monte Carlo (SMC) methods,
$$\pi_t \approx \pi_t^N = \sum_{j=1}^N a_j(t)\,\delta_{v_j(t)},$$
where $a_j(t)$ are the weights and $v_j(t) = (v_j^1(t), \dots, v_j^d(t))$ the positions of the particles.
2. The law of evolution of the approximation:
$$\pi_{t-1}^N \xrightarrow{\ \text{mutation}\ } p_t^N \xrightarrow{\ \text{selection}\ } \pi_t^N.$$
3. The measure of the approximation error:
$$\sup_{\varphi \in B(\mathbb{R}^d)} E\left[\,\left|\pi_t^N(\varphi) - \pi_t(\varphi)\right|\,\right].$$

7 The classical/standard/bootstrap/garden-variety particle filter
$\pi^n = \{\pi^n(t),\ t \ge 0\}$ is the occupation measure of a system of weighted particles:
$$\pi^n(0) = \sum_{i=1}^n \frac{1}{n}\,\delta_{x_i^n}, \qquad \pi^n(t) = \sum_{i=1}^n \bar a_i^n(t)\,\delta_{V_i^n(t)}.$$
References: DC, Particle Filters: A Theoretical Perspective, in Sequential Monte Carlo Methods in Practice; P. Del Moral, Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications, Springer.

8 The Filtering Problem. Framework: discrete/continuous time
1. Initialisation [$t = 0$]. For $i = 1, \dots, N$, sample $x_0^{(i)}$ from $\pi_0$:
$$\pi_0^N = \frac{1}{N} \sum_{i=1}^N \delta_{x_0^{(i)}}.$$
Iteration [$t-1$ to $t$]. Let $x_{t-1}^{(i)}$, $i = 1, \dots, N$, be the positions of the particles at time $t-1$, so that
$$\pi_{t-1}^N = \frac{1}{N} \sum_{i=1}^N \delta_{x_{t-1}^{(i)}}.$$
Step 1. For $i = 1, \dots, N$, sample $\bar x_t^{(i)}$ from $f_t(x_t \mid x_{t-1}^{(i)})\,dx_t$:
$$p_t^N = \frac{1}{N} \sum_{i=1}^N \delta_{\bar x_t^{(i)}}.$$

9 Framework: discrete/continuous time
Compute the normalised weights
$$\bar a_t^{(i)} = g_t(\bar x_t^{(i)}) \Big/ \sum_{j=1}^N g_t(\bar x_t^{(j)}),$$
and set
$$\bar\pi_t^N = \sum_{i=1}^N \bar a_t^{(i)}\,\delta_{\bar x_t^{(i)}} = g_t \star p_t^N.$$
Step 2. Replace each particle by $\xi_t^{(i)}$ offspring such that $\sum_{i=1}^N \xi_t^{(i)} = N$. [Sample with replacement $N$ times from $\{\bar x_t^{(i)}\}$.] Denote the positions of the new particles by $x_t^{(i)}$, $i = 1, \dots, N$:
$$\pi_t^N = \frac{1}{N} \sum_{i=1}^N \delta_{x_t^{(i)}}.$$
Further details in: A. Bain, DC, Fundamentals of Stochastic Filtering, Stochastic Modelling and Applied Probability, Vol. 60, Springer.
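Steps 1-2 above condense into a short sketch. This is an illustrative implementation, not the code behind the experiments; `propagate` and `likelihood` are placeholders for the model-specific kernel $f_t$ and likelihood $g_t$:

```python
import numpy as np

def bootstrap_pf_step(x_prev, y_t, propagate, likelihood, rng):
    """One iteration of the bootstrap particle filter.

    x_prev     : (N, d) array of particle positions approximating pi_{t-1}
    propagate  : function sampling from the transition kernel f_t(. | x)
    likelihood : function returning g_t(y_t | x) for an array of particles
    """
    N = x_prev.shape[0]
    # Step 1 (mutation): sample xbar_t^(i) from f_t(. | x_{t-1}^(i))  -> p_t^N
    x_pred = propagate(x_prev, rng)
    # normalised weights abar_t^(i) = g_t(xbar^(i)) / sum_j g_t(xbar^(j))
    w = likelihood(y_t, x_pred)
    w = w / w.sum()
    # Step 2 (selection): sample with replacement N times (SIR)  -> pi_t^N
    idx = rng.choice(N, size=N, p=w)
    return x_pred[idx]
```

For a toy scalar model one would pass, e.g., a random-walk `propagate` and a Gaussian `likelihood`; after one step the resampled cloud concentrates on particles consistent with the new observation.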

10 Framework: discrete/continuous time
Theorem. $\pi^N$ converges to $\pi$. Moreover,
$$\sup_{t \in [0,T]} \sup_{\|\varphi\|_\infty \le 1} E^Y\left[\,\left|\pi_t^N(\varphi) - \pi_t(\varphi)\right|\,\right] \le \frac{c_T}{\sqrt N},$$
and $\sqrt N\,(\pi^N - \pi)$ converges to a measure-valued process $\bar u = \{\bar u_t,\ t \ge 0\}$.

11 The Filtering Problem. Framework: discrete/continuous time
Notation:
$$\mathrm{Error}(\pi, T, N) = \sup_{t \in [0,T]} \sup_{\|\varphi\|_\infty \le 1} E^Y\left[\,\left|\pi_t^N(\varphi) - \pi_t(\varphi)\right|\,\right],$$
$$\mathrm{Error}(p, T, N) = \sup_{t \in [0,T]} \sup_{\|\varphi\|_\infty \le 1} E^Y\left[\,\left|p_t^N(\varphi) - p_t(\varphi)\right|\,\right].$$
Theorem. For all $T > 0$ there exists $c_T$ such that
$$\mathrm{Error}(\pi, T, N) \le \frac{c_T}{\sqrt N}, \qquad \mathrm{Error}(p, T, N) \le \frac{c_T}{\sqrt N}$$
if and only if $\mathrm{Error}(\pi, 0, N) \le \frac{c_0}{\sqrt N}$ and, for all $T > 0$, there exists $c_T$ such that
$$\sup_{t \in [0,T]} \sup_{\|\varphi\|_\infty \le 1} E^Y\left[\,\left|p_t^N(\varphi) - \pi_{t-1}^N K_t(\varphi)\right|\,\right] \le \frac{c_T}{\sqrt N},$$
$$\sup_{t \in [0,T]} \sup_{\|\varphi\|_\infty \le 1} E^Y\left[\,\left|\pi_t^N(\varphi) - \bar\pi_t^N(\varphi)\right|\,\right] \le \frac{c_T}{\sqrt N}.$$

12 Framework: discrete/continuous time
Proof. Immediate from the following two inequalities:
$$\left|p_t^N \varphi - \pi_{t-1}^N K_t \varphi\right| \le \left|p_t^N \varphi - p_t \varphi\right| + \left|\pi_{t-1}(K_t \varphi) - \pi_{t-1}^N(K_t \varphi)\right|,$$
$$\left|\pi_t^N \varphi - \bar\pi_t^N \varphi\right| \le \left|\pi_t^N \varphi - \pi_t \varphi\right| + \left|\pi_t \varphi - \bar\pi_t^N \varphi\right|,$$
where we used the fact that $p_t = \pi_{t-1} K_t$.
Induction. The case $t = 0$ is assumed. The induction step is obtained as follows. Since $p_t = \pi_{t-1} K_t$, by the triangle inequality
$$\left|p_t^N \varphi - p_t \varphi\right| \le \left|p_t^N \varphi - \pi_{t-1}^N K_t \varphi\right| + \left|\pi_{t-1}^N K_t \varphi - \pi_{t-1} K_t \varphi\right|.$$
Also
$$\bar\pi_t^N \varphi - \pi_t \varphi = \frac{p_t^N(\varphi g_t)}{p_t^N g_t} - \frac{p_t(\varphi g_t)}{p_t g_t} = \frac{p_t^N(\varphi g_t)}{p_t^N g_t\; p_t g_t}\left(p_t g_t - p_t^N g_t\right) + \left(\frac{p_t^N(\varphi g_t)}{p_t g_t} - \frac{p_t(\varphi g_t)}{p_t g_t}\right),$$
and, since $|p_t^N(\varphi g_t)| \le \|\varphi\|_\infty\, p_t^N g_t$,
$$\left|\bar\pi_t^N \varphi - \pi_t \varphi\right| \le \frac{\|\varphi\|_\infty}{p_t g_t}\left|p_t^N g_t - p_t g_t\right| + \frac{1}{p_t g_t}\left|p_t^N(\varphi g_t) - p_t(\varphi g_t)\right|.$$

13 Framework: discrete/continuous time
Remarks:
- Particle filters are recursive algorithms: the approximation for $\pi_t$ and the new observation $Y_{t+1}$ are the only information used to obtain the approximation for $\pi_{t+1}$. In other words, the information gained from $Y_1, \dots, Y_t$ is embedded in the current approximation.
- The generic SMC method involves sampling from the prior distribution of the signal and then using a weighted bootstrap technique (or equivalent) with weights defined by the likelihood of the most recent observation.
- Step 2 can be done by means of sampling with replacement (SIR algorithm), stratified sampling, Bernoulli sampling, the Carpenter-Clifford-Fearnhead-Whitley genetic algorithm, or the Crisan-Lyons TBBA algorithm. All these methods satisfy the convergence requirement.
- If $d$ is small to moderate, then the standard particle filter can perform very well in the time parameter $t$. Under certain conditions, the Monte Carlo error of the estimate of the filter can be uniform with respect to the time parameter.

14 The Filtering Problem. Framework: discrete/continuous time
Remarks:
- The function $x_k \mapsto g(x_k, y_k)$ can convey a lot of information about the hidden state, especially so in high dimensions. If this is the case, using the prior transition kernel $f(x_{k-1}, x_k)$ as proposal will be ineffective. It is then known that the standard particle filter will typically perform poorly in this context, often requiring $N = O(\kappa^d)$ particles.
Figure: Computational cost (wall-clock time per time step, in seconds) to achieve a predetermined RMSE, versus model dimension, for the standard particle filter (PF) and the space-time particle filter (STPF).

15 Application to high-dimensional problems
- Why is the high-dimensional filtering problem hard?
- A running example
- Using particle filters to solve high-dimensional filtering problems
- Final remarks
Research partially supported by EPSRC grant EP/N023781/1. Numerical work done by Wei Pan (Imperial College London).
References:
- A. Beskos, DC, A. Jasra, K. Kamatani, Y. Zhou, A stable particle filter for a class of high-dimensional state-space models, Adv. in Appl. Probab. 49 (2017).
- A. Beskos, DC, A. Jasra, On the stability of sequential Monte Carlo methods in high dimensions, Ann. Appl. Probab. 24 (2014).
- C. J. Cotter, DC, D. D. Holm, W. Pan, I. Shevchenko, Numerically modelling stochastic Lie transport in fluid dynamics.
- C. J. Cotter, DC, D. D. Holm, W. Pan, I. Shevchenko, Sequential Monte Carlo for Stochastic Advection by Lie Transport (SALT): a case study for the damped and forced incompressible 2D stochastic Euler equation, in preparation.

16 Why is the high-dimensional problem hard?
Consider
- $\Pi_0 = N(0, 1)$ (mean 0 and variance 1),
- $\Pi_1 = N(1, 1)$ (mean 1 and variance 1),
- $\Pi_d = N(d, 1)$ (mean $d$ and variance 1).
Then, for $X \sim N(0,1)$,
$$d(\Pi_0, \Pi_1)_{TV} = 2\,P\left[\,|X| \le 1/2\,\right], \qquad d(\Pi_0, \Pi_d)_{TV} = 2\,P\left[\,|X| \le d/2\,\right].$$
- As $d$ increases, the two measures get further and further apart, becoming singular with respect to each other.
- As $d$ increases, it becomes increasingly harder to use standard importance sampling to construct a sample from $\Pi_d$ by using a proposal from $\Pi_0$: weighting it using $\frac{d\Pi_d}{d\Pi_0}$ and (possibly) resampling from it.

17 Why is the high-dimensional problem hard?
Consider
- $\Pi_0 = N((0, \dots, 0), I_d)$ (mean $(0, \dots, 0)$ and covariance matrix $I_d$),
- $\Pi_d = N((1, \dots, 1), I_d)$ (mean $(1, \dots, 1)$ and covariance matrix $I_d$).
Then
$$d(\Pi_0, \Pi_d)_{TV} = 2\,P\left[\,|X| \le \sqrt d/2\,\right], \qquad X \sim N(0,1).$$
- As $d$ increases, the two measures get further and further apart, becoming singular with respect to each other exponentially fast.
- It becomes increasingly harder to use standard importance sampling to construct a sample from $\Pi_d$ by using a proposal from $\Pi_0$. Moving from $\Pi_0$ to $\Pi_d$ is equivalent to moving from a standard normal distribution $N(0,1)$ to a normal distribution $N(\sqrt d, 1)$: the total variation distance between $N(0,1)$ and $N(\sqrt d, 1)$ is the same as that between $\Pi_0$ and $\Pi_d$.
Add-on techniques:
- Tempering*
- Jittering*
- Model reduction (high to low resolution)*
- Sequential DA in space*
- Nudging
- Optimal transport (prior to posterior)
- Hybrid models
- Hamiltonian Monte Carlo
- Informed priors
- Localization
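The collapse can be checked numerically. For a sample $x \sim \Pi_0 = N(0, I_d)$ targeting $\Pi_d = N((1,\dots,1), I_d)$, the log-weight is $\log \frac{d\Pi_d}{d\Pi_0}(x) = \sum_{k=1}^d (x_k - 1/2)$, whose variance grows linearly in $d$, so the effective sample size (ESS) of importance sampling degenerates. A small illustrative experiment (not from the slides):

```python
import numpy as np

def importance_ess(d, N=1000, seed=0):
    """Effective sample size of importance sampling from Pi_0 = N(0, I_d)
    when targeting Pi_d = N((1,...,1), I_d)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((N, d))   # proposal draws from Pi_0
    logw = (x - 0.5).sum(axis=1)      # log dPi_d/dPi_0 (x)
    w = np.exp(logw - logw.max())     # numerically stabilised weights
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)       # ESS = 1 / sum of squared weights
```

With $N = 1000$ proposal draws, the ESS stays a sizeable fraction of $N$ for $d = 1$ but collapses to a handful of effective samples by $d \approx 100$.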

18 The Filtering Problem. What is DA?
State estimation in Numerical Weather Prediction; Data Assimilation at the UK Met Office.
- A set of methodologies that combines past knowledge of a system, in the form of a numerical model, with new information about that system, in the form of observations.
- Designed to improve forecasting, reduce model uncertainties and adjust model parameters.
- A term used mainly in the computational geoscience community; a major component of Numerical Weather Prediction.
- Variational DA: combines the model and the data through the optimisation of a given criterion (minimisation of a so-called cost function).
- Ensemble-based DA: uses a set of model trajectories/possible scenarios that are intermittently updated according to data and are used to infer the past, current or future position of a system.
Hurricane Irma forecast: a. ECMWF, b. USA Global Forecast.

19 A stochastic transport model
Consider a two-dimensional incompressible fluid flow $u$ defined on the 2D torus $\Omega = [0, L_x] \times [0, L_y]$, modelled by the two-dimensional Euler equations with forcing and damping. Let $q = \hat z \cdot \operatorname{curl} u$ denote the vorticity of $u$, where $\hat z$ denotes the z-axis. For a scalar field $g : \Omega \to \mathbb{R}$, we write $\nabla^\perp g = (-\partial_y g, \partial_x g)^T$. Let $\psi : \Omega \times [0, \infty) \to \mathbb{R}$ denote the stream function. Then
$$\partial_t q + (u \cdot \nabla) q = Q - r q, \qquad u = \nabla^\perp \psi, \qquad \Delta \psi = q.$$
- $Q$ is the forcing term, given by $Q = 0.1 \sin(8\pi x)$.
- $r$ is a positive constant, the large-scale dissipation time scale.
- We consider the slip flow boundary condition $\psi|_{\partial\Omega} = 0$.
- Evolution of Lagrangian fluid parcels: $\frac{dx_t}{dt} = u(x_t, t)$.

20 A stochastic transport model
Domain: $[0,1]^2$.
PDE system:
$$\partial_t \omega + u \cdot \nabla \omega = Q - r\omega, \qquad u = \nabla^\perp \psi, \qquad \Delta\psi = \omega.$$
SPDE system:
$$dq + \bar u \cdot \nabla q\,dt + \sum_i \xi_i \cdot \nabla q \circ dW_t^i = (Q - rq)\,dt, \qquad \bar u = \nabla^\perp \bar\psi, \qquad \Delta\bar\psi = q.$$
$Q = 0.1 \sin(8\pi x)$; boundary conditions $\psi|_{\partial\Omega} = 0$ and $\bar\psi|_{\partial\Omega} = 0$.
Grid resolution: 512x512 (PDE), 64x64 (SPDE). Spin-up: 40 ett. (ett: eddy turnover time, $L/u_L \approx 2.5$ time units.)
Numerical scheme: a mixed continuous and discontinuous Galerkin finite element scheme plus an optimal third-order strong-stability-preserving Runge-Kutta scheme [Bernsen et al 2006, Gottlieb 2005].

21 A stochastic transport model: initial configuration for the vorticity
$$\omega_{\text{spin}} = \sin(8\pi x)\sin(8\pi y) + \cos(6\pi x)\cos(6\pi y) + \cos(10\pi x)\cos(4\pi y) + \sin(2\pi y)\sin(2\pi x), \qquad (1)$$
from which we spin up the system until an energy equilibrium state appears to have been reached. This equilibrium state, denoted by $\omega_{\text{initial}}$, is then chosen as the initial condition.

22 A stochastic transport model
Plot of the numerical PDE solution at the initial time $t_{\text{initial}}$ and its coarse-grained version, obtained via spatial averaging and projection of the fine-grid stream function onto the coarse grid.

23 A stochastic transport model
Plot of the numerical PDE solution at the final time, $t_{\text{initial}}$ plus a number of large eddy turnover times (ett). The coarse-graining is done via spatial averaging and projection of the fine-grid stream function onto the coarse grid.

24 A stochastic transport model
Observations: $u$ is observed on a subgrid of the signal grid (9 x 9 points),
$$Y_t(x) = \begin{cases} u_t^{\text{SPDE}}(x) + \alpha z_x, & z_x \sim N(0,1) & \text{(Experiment 1)} \\ u_t^{\text{PDE}}(x) + \alpha z_x, & z_x \sim N(0,1) & \text{(Experiment 2)} \end{cases}$$
$\alpha$ is calibrated to the standard deviation of the true solution over a coarse grid cell.

25 Initial condition
A good choice of the initial condition is essential for the successful implementation of the filter. In practice it reflects the level of uncertainty in the estimate of the initial position of the dynamical system. We use the initial condition to obtain an ensemble containing particles that are reasonably close to the truth.
Choice for the running example: deformation, physically consistent with the system, Casimirs preserved. We take a nominal value $\omega_{t_0}$ and deform it using the following modified Euler equation:
$$\partial_t \omega + \beta_i\, u(\tau_i) \cdot \nabla \omega = 0, \qquad (2)$$
where $\beta_i \sim N(0, \epsilon)$, $i = 1, \dots, N_p$, are centred Gaussian weights with an a priori variance parameter $\epsilon$, and $\tau_i \sim U(t_{\text{initial}}, t_0)$, $i = 1, \dots, N_p$, are uniform random numbers. Thus each $u(\tau_i)$ corresponds to a PDE solution in the time period $[t_{\text{initial}}, t_0)$.

26 Initial condition
Alternative choices:
- $q + \zeta$, where $\zeta$ is a Gaussian random field: doable but not physical. It only works for $q$ because $q$ is the least smooth of the three fields of interest (the other fields are spatially smooth), and it breaks the SPDE well-posedness theorem (function-space regularity).
Figure: (ux, uy)

27 Initial condition
- Directly perturb $\psi$, by $\psi + \bar\psi$ where $\bar\psi = (I - \kappa\Delta)^{-1}\zeta$: invert the elliptic operator with boundary condition $\bar\psi|_{\partial\Omega} = 0$.
Figure: (ux, uy)

28 Add-on techniques: Model reduction (high to low resolution)*
Model reduction is a valuable methodology that can lead to substantial computational savings: for the current numerical example we perform a state-space order reduction from 512x512 to 64x64. However, if applied erroneously, order reduction can introduce large errors.
Recall the recursion formula for the conditional distribution of the signal:
$$p_t = \pi_{t-1} K_t, \qquad \pi_t = g_t \star p_t,$$
where $\frac{d\pi_t}{dp_t} = C_t^{-1} g_t$ and $C_t = \int_{\mathbb{R}^d} g_t(y_t, x_t)\,p_t(dx_t)$.
Remark. The conditional distribution of the signal is a continuous function of $(\pi_0, g_1, \dots, g_t, K_1, \dots, K_t)$. In other words, if
$$\lim_{\varepsilon \to 0} (\pi_0^\varepsilon, g_1^\varepsilon, \dots, g_t^\varepsilon, K_1^\varepsilon, \dots, K_t^\varepsilon) = (\pi_0, g_1, \dots, g_t, K_1, \dots, K_t)$$
in a suitably chosen topology, and
$$p_t^\varepsilon \triangleq \pi_{t-1}^\varepsilon K_t^\varepsilon, \qquad \pi_t^\varepsilon \triangleq g_t^\varepsilon \star p_t^\varepsilon, \qquad (3)$$
then $\lim_{\varepsilon \to 0} \pi_t^\varepsilon = \pi_t$ and $\lim_{\varepsilon \to 0} p_t^\varepsilon = p_t$ (again, in a suitably chosen topology).

29 Add-on techniques
NB. $\pi_t^\varepsilon$ is no longer the solution of a filtering problem, but simply the solution of the iteration (3). Order reduction can be theoretically justified through the continuity of the conditional distribution of the signal in $(\pi_0, g_1, \dots, g_t, K_1, \dots, K_t)$. This is the case when the order reduction is performed through a coarsening of the grid used by the numerical algorithm that approximates the dynamical system.
Example: we use a stochastic PDE defined on a coarser grid,
$$\partial_t q + (u \cdot \nabla) q + \sum_k (\xi_k \cdot \nabla) q \circ dB_t^k = Q - rq, \qquad u = \nabla^\perp\psi, \qquad \Delta\psi = q.$$
- $\xi_k$ are given divergence-free vector fields.
- $\xi_k$ are computed from the true solution by using an empirical orthogonal functions (EOF) procedure.
- $B_t^k$ are independent scalar Brownian motions.
Evolution of Lagrangian fluid parcels:
$$dx_t = u(x_t, t)\,dt + \sum_i \xi_i(x_t) \circ dW_i(t).$$

30 Methodology to calibrate the noise
The reason for this stochastic parametrization is grounded in solid physical considerations; see D. D. Holm, Variational principles for stochastic fluids, Proc. Roy. Soc. A.
$$dx_t = u_t^f(x_t)\,dt \qquad \longrightarrow \qquad dx_t = u_t^c(x_t)\,dt + \sum_i \xi_i(x_t) \circ dW_i(t)$$
For each $m = 0, 1, \dots, M-1$:
1. Solve $dx_{ij}^f(t)/dt = u_t^f(x_{ij}^f(t))$ with initial condition $x_{ij}^f(m\Delta T) = x_{ij}$.
2. Compute $u_t^c$ by low-pass filtering $u_t^f$ along the trajectory.
3. Compute $x_{ij}^c(t)$ by solving $dx_{ij}^c(t)/dt = u_t^c(x_{ij}^c(t))$ with the same initial condition.
4. Compute the difference $\Delta x_{ij}^m = x_{ij}^f((m+1)\Delta T) - x_{ij}^c((m+1)\Delta T)$, which measures the error between the fine and coarse trajectories.

31 Methodology to calibrate the noise
Having obtained $\Delta x_{ij}^m$, we would like to extract a basis for the noise. This amounts to a Gaussian model of the form
$$\Delta x_{ij}^m = \overline{\Delta x}_{ij} + \sqrt{\delta t}\,\sum_{k=1}^N \xi_{ij}^k\,\Delta W_m^k,$$
where the $\Delta W_m^k$ are i.i.d. normal random variables with mean zero and variance 1. We estimate $\xi$ by minimising
$$\sum_{ijm} E\left[\,\Big|\Delta x_{ij}^m - \overline{\Delta x}_{ij} - \sqrt{\delta t}\sum_{k=1}^N \xi_{ij}^k\,\Delta W_m^k\Big|^2\,\right],$$
where the choice of $N$ can be informed by using empirical orthogonal functions (EOFs).
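The minimisation above is, in effect, a principal component analysis of the centred increments, so the $\xi^k$ can be read off from an SVD. A schematic sketch (the array layout and the 90% truncation are illustrative assumptions, not the authors' code):

```python
import numpy as np

def estimate_eofs(dx, var_frac=0.9):
    """Estimate noise basis vectors xi^k from fine/coarse trajectory
    differences via empirical orthogonal functions (PCA).

    dx : (M, P) array; M time windows of the P-dimensional difference
         Delta x_m between the fine and coarse trajectories
    """
    dx_mean = dx.mean(axis=0)              # the mean drift  Delta x bar
    anomalies = dx - dx_mean               # centred increments
    # SVD of the anomaly matrix: the rows of Vt are the EOFs
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)          # normalised spectrum
    k = int(np.searchsorted(np.cumsum(var), var_frac)) + 1
    return dx_mean, Vt[:k], k              # EOFs capturing var_frac of variance
```

Feeding in increments dominated by a single spatial pattern returns that pattern as the sole EOF; for the running example one would inspect the normalised spectrum to choose the truncation level (50%, 70%, 90%), as on the next slide.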

32 Methodology to calibrate the noise
Number of EOFs:
- decide on a case-by-case basis
- too many will slow down the algorithm
On the left: number of EOFs capturing 90% of the variance vs 50% (no change). On the right: normalised spectrum of the $\Delta x$ covariance operator, showing the number of EOFs required to capture 50%, 70% and 90% of the total variance.

33 Methodology to calibrate the noise
Model reduction UQ pictures: systems 512x512 vs 128x128 vs 64x64. (a) ux (b) uy

34 Methodology to calibrate the noise
(a) psi (b) q

35 Methodology to calibrate the noise
- The aim of the calibration is to capture the statistical properties of the fast fluctuations, rather than the trajectory of the flow.
- The stochastic parameterisation is validated in terms of uncertainty quantification for the SPDE.
- The performance of the DA algorithm relies on the correct modelling of the unresolved scales.
- Evaluating the uncertainty arising from the choice of EOFs is part of the particle filter implementation.

36 Methodology to calibrate the noise
Ensemble distance from the truth (velocity field):
$$d\left(\{\hat q_i,\ i = 1, \dots, N_p\},\ \omega,\ t\right) := \frac{\min_{i \in \{1,\dots,N_p\}} \|\omega(t) - \hat q_i(t)\|_{L^2(D)}}{\|\omega(t)\|_{L^2(D)}}$$

37 Implementation issues

38 Implementation issues
Number of particles:
- decide on a case-by-case basis
- too few will not give a reasonable solution
- too many will slow down the algorithm
Pictures: number of particles 225 (good) vs 500 (no change); 225 (good) vs 25 (less good). 25 seems OK, but we want as many as is computationally feasible to tune the algorithm. (a) psi: 225 vs 500 (b) psi: 225 vs 25

39 SIR fails
Histogram of weights: the classical particle filter fails!
Figure: example log-likelihood histogram, period 1 ett, 100 particles.

40 The Filtering Problem. Framework: Tempering
$\{X_t\}_{t \ge 0}$ Markov chain: $P(X_t \in dx_t \mid X_{t-1} = x_{t-1}) = f_t(x_t \mid x_{t-1})\,dx_t$;
$\{X_t, Y_t\}_{t \ge 0}$: $P(Y_t \in dy_t \mid X_t = x_t) = g_t(y_t \mid x_t)\,dy_t$.
A tempering procedure. For $k = 1$ to $d$:
- reweight the particles using $g_t^{1/d}$ and (possibly) resample from them;
- move the particles using an MCMC step that leaves $g_t^{k/d} f_t \star \pi_{[0,t-1]}$ invariant.
References: Beskos, DC, Jasra, On the stability of SMC methods in high dimensions; Kantas, Beskos, Jasra, Sequential Monte Carlo for inverse problems.

41 Tempering
Initialisation [$t = 0$]: for $i = 1, \dots, N$, sample $q_0^i$ from $\pi_0$.
Iteration $(t_{i-1}, t_i]$: given the ensemble $\{X_n(t_{i-1})\}_{n=1,\dots,N}$,
1. Evolve $X_n(t_{i-1})$ using the SPDE to obtain $X_n(t_i)$.
2. Given $X := \{X_n(t_i)\}_{n=1,\dots,N}$, define the normalised tempered weights
$$\bar\lambda_{n,i}(\phi, X) := \frac{\exp(-\phi\,\Lambda_{n,i})}{\sum_m \exp(-\phi\,\Lambda_{m,i})},$$
where the dependence on $X$ means the $\Lambda_{n,i}$ are computed using $X$. Define the effective sample size
$$\mathrm{ESS}_i(\phi, X) := \Big(\sum_n \bar\lambda_{n,i}(\phi, X)^2\Big)^{-1}.$$
Set $\phi = 1$. While $\mathrm{ESS}_i(\phi, X) < N_{\text{threshold}}$ do:
(a) Find $1 - \phi < \phi' < 1$ such that $\mathrm{ESS}_i(\phi'(1-\phi), X) \ge N_{\text{threshold}}$. Resample according to $\bar\lambda_{n,i}(\phi'(1-\phi), X)$ and apply MCMC if required (i.e. when there are duplicated particles) to obtain a new set of particles $X(\phi')$. Set $\phi = 1 - \phi'$ and $X = X(\phi')$.
(b) If $\mathrm{ESS}_i \ge N_{\text{threshold}}$ then STOP and go to the next filtering step with $\{(X_n(t_i), \bar\lambda_{n,i})\}_{n=1,\dots,N}$.
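The adaptive loop above — shrink the temperature step until the ESS clears the threshold, resample, and repeat until the full likelihood has been introduced — can be sketched generically. This is a simplified illustration, not the slides' algorithm verbatim: the MCMC move between tempering steps is omitted, the step search uses simple halving, and `loglik` plays the role of $-\Lambda$:

```python
import numpy as np

def adaptive_temper(x, loglik, n_thresh, rng, min_step=1e-3):
    """Bridge from a prior sample x to the tempered posterior by
    introducing the likelihood in adaptive steps dphi with sum = 1."""
    phi = 0.0                          # likelihood fraction already applied
    while phi < 1.0:
        ell = loglik(x)
        dphi = 1.0 - phi               # first try to finish in one step
        while True:
            w = np.exp(dphi * (ell - ell.max()))
            w /= w.sum()
            ess = 1.0 / np.sum(w ** 2)
            if ess >= n_thresh or dphi <= min_step:
                break
            dphi *= 0.5                # halve the temperature step
        idx = rng.choice(len(x), size=len(x), p=w)
        x = x[idx]                     # resample (an MCMC move would go here)
        phi += dphi
    return x
```

Because the sketch omits the MCMC mutation, the output is a reweighted subset of the input particles; in the full algorithm the mutation step restores diversity after each resampling.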

42 Tempering: MCMC mutation algorithm
Given the ensemble $\{X_{n,k}(t_i)\}_{n=1,\dots,N}$ corresponding to the $k$-th tempering step with temperature $\phi_k$, and proposal step size $\rho \in [0,1]$, repeat the following steps.
Propose
$$\tilde X_n(t_i) = \mathcal{G}\left(X_n(t_{i-1}),\ \rho\,W(t_{i-1}:t_i;\omega) + \sqrt{1-\rho^2}\,Z(t_{i-1}:t_i;\omega)\right),$$
where $X_n(t_i) = \mathcal{G}(X_n(t_{i-1}), W(t_{i-1}:t_i;\omega))$ and $W \perp Z$.
Accept $\tilde X_n(t_i)$ with probability
$$1 \wedge \frac{\lambda\big(\phi_k, \tilde X_n(t_i)\big)}{\lambda\big(\phi_k, X_n(t_i)\big)},$$
where $\lambda(\phi, x) = \exp(-\phi\,\Lambda(x))$ is the unnormalised weight function.

43 Resampling intervals
- Small resampling intervals lead to an unreasonable increase in the computational effort.
- Large resampling intervals make the algorithm fail.
- The ESS can be used as a criterion for choosing the resampling time.
- Adapted resampling times can be used.
Figure: ESS evolution in time / observation noise.

44 Results
DA solution for DA periods: 1 ETT and 0.2 ETT. (a) ux (b) uy (c) psi (d) q
Figure: DA, observations from the SPDE, period 0.2 ett.

45 Results
(a) ux (b) uy (c) psi (d) q. Figure: DA, observations from the PDE, period 0.2 ett.
(a) ux (b) uy (c) psi (d) q. Figure: DA, observations from the PDE, period 1 ett.

46 Results
Number of tempering steps / average MCMC steps.

47 Sequential DA: Space-Time Particle Filter
Assume that there exists an increasing sequence of sets $\{A_{k,j}\}_{j=1}^{\tau_{k,d}}$, with $A_{k,1} \subset A_{k,2} \subset \cdots \subset A_{k,\tau_{k,d}} = \{1:d\}$, for some integer $0 < \tau_{k,d} \le d$, such that we can factorize
$$g(x_k, y_k)\,f(x_{k-1}, x_k) = \prod_{j=1}^{\tau_{k,d}} \alpha_{k,j}\big(y_k, x_{k-1}, x_k(A_{k,j})\big)$$
for appropriate functions $\alpha_{k,j}(\cdot)$, where $x_k(A) = \{x_k(j) : j \in A\} \in \mathbb{R}^{|A|}$.
Example:
$$X_n(j) = \sum_{i=1}^{j-1} \beta_{d-j+i+1}\big(X_n(i)\big) + \sum_{i=j}^{d} \bar\beta_{i-j+1}\big(X_{n-1}(i)\big) + \epsilon_n^j, \qquad Y_n(j) = X_n(j) + \xi_n^j,$$
where $\epsilon_n(j)$ i.i.d. $N(0, \sigma_x^2)$ and $\xi_n(j)$ i.i.d. $N(0, \sigma_y^2)$, $j \in \{1, \dots, d\}$.
Reference: Beskos, DC, Jasra, Kamatani, Zhou, A Stable Particle Filter in High-Dimensions.

48 Sequential DA
Within a sequential Monte Carlo context, one can think of augmenting the sequence of distributions of increasing dimension $X_{1:k} \mid Y_{1:k}$, $1 \le k \le n$, moving from $\mathbb{R}^{d(k-1)}$ to $\mathbb{R}^{dk}$, with intermediate laws on $\mathbb{R}^{d(k-1)+|A_{k,j}|}$, for $j = 1, \dots, \tau_{k,d}$. This holds when:
- one can obtain a factorization for the prior term $f(x_{k-1}, x_k)$ by marginalising over subsets of co-ordinates;
- the likelihood component $g(x_k, y_k)$ can be factorized when the model assumes a local dependence structure for the observations.

49 Sequential DA
For $j = 1$ to $\tau_d$:
1. Move the particle according to $q_{k+1,j}\big(x_{k+1}(A_{k+1,j}) \mid x_k, x_{k+1}(A_{k+1,j-1})\big)$.
2. Weight the particle using
$$\frac{\alpha_{k+1,j}\big(y_{k+1}, x_k, x_{k+1}(A_{k+1,j})\big)}{q_{k+1,j}\big(x_{k+1}(A_{k+1,j}) \mid x_k, x_{k+1}(A_{k+1,j-1})\big)}$$
and (possibly) resample from it.
Remarks:
- Since particle filters work well with respect to the time parameter (they are sequential), we exploit the model structure to build up a particle filter in space and time. We break the $k$-th time step of the particle filter into $\tau_{k,d}$ space steps and run a system of $N$ independent particle filters for these steps.
- It is necessary that the factorisation allows for a gradual introduction of the full likelihood term $g(x_k, y_k)$ along the $\tau_{k,d}$ steps. For instance, trivial choices like
$$\alpha_{k,j} = \frac{\int f(x_{k-1}, x_k)\,dx_k(j+1:d)}{\int f(x_{k-1}, x_k)\,dx_k(j:d)}, \quad 1 \le j \le d-1, \qquad \alpha_{k,d} = \frac{f(x_{k-1}, x_k)}{\int f(x_{k-1}, x_k)\,dx_k(d)}\; g(x_k, y_k),$$
are ineffective, as they only introduce the complete likelihood term in the last step.

50 Sequential DA: Numerical test
Let $X_n \in \mathbb{R}^d$ be such that $X_0 = 0_d$ (the $d$-dimensional vector of zeros) and
$$X_n(j) = \sum_{i=1}^{j-1} \beta_{d-j+i+1}\,X_n(i) + \sum_{i=j}^{d} \beta_{i-j+1}\,X_{n-1}(i) + \epsilon_n,$$
where $\epsilon_n$ i.i.d. $N(0, \sigma_x^2)$ and $\beta_{1:d}$ are known static parameters. For the observations, we set $Y_n = X_n + \xi_n$, where $\xi_n(j)$ i.i.d. $N(0, \sigma_y^2)$, $j \in \{1, \dots, d\}$.
Comparison between the SIR algorithm and the STPF. Parameters: $\sigma_x^2 = \sigma_y^2 = 1$, $n = 1000$, $d$-dimensional observations, $d \in \{16, 128, 1024\}$. Both filters use the model transitions as the proposal, the likelihood function as the potential, and adaptive resampling. STPF: $N = 100$ and $M_d = d$; SIR algorithm: $N M_d$ particles.

51 Sequential DA
The averages of the estimators for the posterior mean of the first co-ordinate $X_n(1)$, given all data up to time $n$, are illustrated below.
- The SIR algorithm collapses when the dimension becomes moderate or large: no meaningful estimates when $d = 1024$ (the estimates completely lose track of the observations and the analytical mean).
- The STPF performs reasonably well in all three cases.
Figure: Mean of the estimators of $X_n(1)$ across 100 runs, for $d = 16, 128, 1024$ (observation, Kalman filter estimator, standard particle filter, space-time particle filter).

52 Sequential DA
The ESS (calculated over the global filter for the STPF), scaled by the number of particles, for each time step of the two algorithms:
- The standard filter struggles significantly even in the case $d = 16$ and collapses when $d = 128$.
- The performance of the STPF deteriorates (but does not collapse) when the dimension increases, due to the path degeneracy effect. However, even for $d = 1024$, it still retains an acceptable level of ESS.
Figure: Effective sample size plots from a single run, for $d = 16, 128, 1024$.

53 Sequential DA
The variance per time step of the estimators of the posterior mean of the first co-ordinate $X_n(1)$ (given the data up to time $n$) across 100 runs.
Figure: Variance on a logarithmic scale, for $d = 16, 128, 1024$, standard particle filter vs space-time particle filter.

54 Jittering
A procedure employed to reduce sample degeneracy:
- The particles are moved using a suitably chosen kernel.
- The moves are controlled so that the size of the (stochastic) perturbation remains of the same order as the particle filter error ($1/\sqrt N$).
References: DC, J. Miguez, Nested particle filters for online parameter estimation in discrete-time state-space Markov models; DC, J. Miguez, Uniform convergence over time of a nested particle filtering scheme for recursive parameter estimation in state space Markov models.
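A minimal sketch of such a jittering kernel: a Gaussian perturbation of size $O(1/\sqrt N)$, an illustrative choice rather than the kernel used in the cited papers:

```python
import numpy as np

def jitter(x, rng, c=1.0):
    """Jitter N particles with a Gaussian perturbation whose standard
    deviation c/sqrt(N) is of the same order as the particle filter error."""
    N = x.shape[0]
    return x + (c / np.sqrt(N)) * rng.standard_normal(x.shape)
```

Because the perturbation shrinks at the Monte Carlo rate, the jitter breaks ties between duplicated particles without adding bias beyond the filter's own $O(1/\sqrt N)$ error.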

55 Final Remarks
- Particle filters / sequential Monte Carlo methods are theoretically justified algorithms for approximating the state of partially (and noisily) observed dynamical systems.
- Standard particle filters do NOT work in high dimensions.
- Properly calibrated and modified, particle filters can be used to solve high-dimensional problems (see also the work of Peter Jan van Leeuwen, Roland Potthast, Hans Kunsch).
- Important parameters: initial condition, number of particles, number of observations, correction times, observation error, etc.
- One can use the methodology to assess the reliability of an ensemble forecasting system.


More information

An Brief Overview of Particle Filtering

An Brief Overview of Particle Filtering 1 An Brief Overview of Particle Filtering Adam M. Johansen a.m.johansen@warwick.ac.uk www2.warwick.ac.uk/fac/sci/statistics/staff/academic/johansen/talks/ May 11th, 2010 Warwick University Centre for Systems

More information

Ergodicity in data assimilation methods

Ergodicity in data assimilation methods Ergodicity in data assimilation methods David Kelly Andy Majda Xin Tong Courant Institute New York University New York NY www.dtbkelly.com April 15, 2016 ETH Zurich David Kelly (CIMS) Data assimilation

More information

Introduction. log p θ (y k y 1:k 1 ), k=1

Introduction. log p θ (y k y 1:k 1 ), k=1 ESAIM: PROCEEDINGS, September 2007, Vol.19, 115-120 Christophe Andrieu & Dan Crisan, Editors DOI: 10.1051/proc:071915 PARTICLE FILTER-BASED APPROXIMATE MAXIMUM LIKELIHOOD INFERENCE ASYMPTOTICS IN STATE-SPACE

More information

Multilevel Sequential 2 Monte Carlo for Bayesian Inverse Problems

Multilevel Sequential 2 Monte Carlo for Bayesian Inverse Problems Jonas Latz 1 Multilevel Sequential 2 Monte Carlo for Bayesian Inverse Problems Jonas Latz Technische Universität München Fakultät für Mathematik Lehrstuhl für Numerische Mathematik jonas.latz@tum.de November

More information

Data assimilation Schrödinger s perspective

Data assimilation Schrödinger s perspective Data assimilation Schrödinger s perspective Sebastian Reich (www.sfb1294.de) Universität Potsdam/ University of Reading IMS NUS, August 3, 218 Universität Potsdam/ University of Reading 1 Core components

More information

Computer Intensive Methods in Mathematical Statistics

Computer Intensive Methods in Mathematical Statistics Computer Intensive Methods in Mathematical Statistics Department of mathematics johawes@kth.se Lecture 7 Sequential Monte Carlo methods III 7 April 2017 Computer Intensive Methods (1) Plan of today s lecture

More information

Exercises Tutorial at ICASSP 2016 Learning Nonlinear Dynamical Models Using Particle Filters

Exercises Tutorial at ICASSP 2016 Learning Nonlinear Dynamical Models Using Particle Filters Exercises Tutorial at ICASSP 216 Learning Nonlinear Dynamical Models Using Particle Filters Andreas Svensson, Johan Dahlin and Thomas B. Schön March 18, 216 Good luck! 1 [Bootstrap particle filter for

More information

A Note on the Particle Filter with Posterior Gaussian Resampling

A Note on the Particle Filter with Posterior Gaussian Resampling Tellus (6), 8A, 46 46 Copyright C Blackwell Munksgaard, 6 Printed in Singapore. All rights reserved TELLUS A Note on the Particle Filter with Posterior Gaussian Resampling By X. XIONG 1,I.M.NAVON 1,2 and

More information

EnKF-based particle filters

EnKF-based particle filters EnKF-based particle filters Jana de Wiljes, Sebastian Reich, Wilhelm Stannat, Walter Acevedo June 20, 2017 Filtering Problem Signal dx t = f (X t )dt + 2CdW t Observations dy t = h(x t )dt + R 1/2 dv t.

More information

A Note on Auxiliary Particle Filters

A Note on Auxiliary Particle Filters A Note on Auxiliary Particle Filters Adam M. Johansen a,, Arnaud Doucet b a Department of Mathematics, University of Bristol, UK b Departments of Statistics & Computer Science, University of British Columbia,

More information

Approximate Bayesian Computation and Particle Filters

Approximate Bayesian Computation and Particle Filters Approximate Bayesian Computation and Particle Filters Dennis Prangle Reading University 5th February 2014 Introduction Talk is mostly a literature review A few comments on my own ongoing research See Jasra

More information

Convergence of the Ensemble Kalman Filter in Hilbert Space

Convergence of the Ensemble Kalman Filter in Hilbert Space Convergence of the Ensemble Kalman Filter in Hilbert Space Jan Mandel Center for Computational Mathematics Department of Mathematical and Statistical Sciences University of Colorado Denver Parts based

More information

A Backward Particle Interpretation of Feynman-Kac Formulae

A Backward Particle Interpretation of Feynman-Kac Formulae A Backward Particle Interpretation of Feynman-Kac Formulae P. Del Moral Centre INRIA de Bordeaux - Sud Ouest Workshop on Filtering, Cambridge Univ., June 14-15th 2010 Preprints (with hyperlinks), joint

More information

Lagrangian Data Assimilation and Its Application to Geophysical Fluid Flows

Lagrangian Data Assimilation and Its Application to Geophysical Fluid Flows Lagrangian Data Assimilation and Its Application to Geophysical Fluid Flows Laura Slivinski June, 3 Laura Slivinski (Brown University) Lagrangian Data Assimilation June, 3 / 3 Data Assimilation Setup:

More information

Regularization by noise in infinite dimensions

Regularization by noise in infinite dimensions Regularization by noise in infinite dimensions Franco Flandoli, University of Pisa King s College 2017 Franco Flandoli, University of Pisa () Regularization by noise King s College 2017 1 / 33 Plan of

More information

Concentration inequalities for Feynman-Kac particle models. P. Del Moral. INRIA Bordeaux & IMB & CMAP X. Journées MAS 2012, SMAI Clermond-Ferrand

Concentration inequalities for Feynman-Kac particle models. P. Del Moral. INRIA Bordeaux & IMB & CMAP X. Journées MAS 2012, SMAI Clermond-Ferrand Concentration inequalities for Feynman-Kac particle models P. Del Moral INRIA Bordeaux & IMB & CMAP X Journées MAS 2012, SMAI Clermond-Ferrand Some hyper-refs Feynman-Kac formulae, Genealogical & Interacting

More information

Advanced Computational Methods in Statistics: Lecture 5 Sequential Monte Carlo/Particle Filtering

Advanced Computational Methods in Statistics: Lecture 5 Sequential Monte Carlo/Particle Filtering Advanced Computational Methods in Statistics: Lecture 5 Sequential Monte Carlo/Particle Filtering Axel Gandy Department of Mathematics Imperial College London http://www2.imperial.ac.uk/~agandy London

More information

Divide-and-Conquer Sequential Monte Carlo

Divide-and-Conquer Sequential Monte Carlo Divide-and-Conquer Joint work with: John Aston, Alexandre Bouchard-Côté, Brent Kirkpatrick, Fredrik Lindsten, Christian Næsseth, Thomas Schön University of Warwick a.m.johansen@warwick.ac.uk http://go.warwick.ac.uk/amjohansen/talks/

More information

Sequential Monte Carlo Methods for Bayesian Model Selection in Positron Emission Tomography

Sequential Monte Carlo Methods for Bayesian Model Selection in Positron Emission Tomography Methods for Bayesian Model Selection in Positron Emission Tomography Yan Zhou John A.D. Aston and Adam M. Johansen 6th January 2014 Y. Zhou J. A. D. Aston and A. M. Johansen Outline Positron emission tomography

More information

Advanced Monte Carlo integration methods. P. Del Moral (INRIA team ALEA) INRIA & Bordeaux Mathematical Institute & X CMAP

Advanced Monte Carlo integration methods. P. Del Moral (INRIA team ALEA) INRIA & Bordeaux Mathematical Institute & X CMAP Advanced Monte Carlo integration methods P. Del Moral (INRIA team ALEA) INRIA & Bordeaux Mathematical Institute & X CMAP MCQMC 2012, Sydney, Sunday Tutorial 12-th 2012 Some hyper-refs Feynman-Kac formulae,

More information

State-Space Methods for Inferring Spike Trains from Calcium Imaging

State-Space Methods for Inferring Spike Trains from Calcium Imaging State-Space Methods for Inferring Spike Trains from Calcium Imaging Joshua Vogelstein Johns Hopkins April 23, 2009 Joshua Vogelstein (Johns Hopkins) State-Space Calcium Imaging April 23, 2009 1 / 78 Outline

More information

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Dongbin Xiu Department of Mathematics, Purdue University Support: AFOSR FA955-8-1-353 (Computational Math) SF CAREER DMS-64535

More information

The chopthin algorithm for resampling

The chopthin algorithm for resampling The chopthin algorithm for resampling Axel Gandy F. Din-Houn Lau Department of Mathematics, Imperial College London Abstract Resampling is a standard step in particle filters and more generally sequential

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

Methods of Data Assimilation and Comparisons for Lagrangian Data

Methods of Data Assimilation and Comparisons for Lagrangian Data Methods of Data Assimilation and Comparisons for Lagrangian Data Chris Jones, Warwick and UNC-CH Kayo Ide, UCLA Andrew Stuart, Jochen Voss, Warwick Guillaume Vernieres, UNC-CH Amarjit Budiraja, UNC-CH

More information

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA Contents in latter part Linear Dynamical Systems What is different from HMM? Kalman filter Its strength and limitation Particle Filter

More information

An introduction to Sequential Monte Carlo

An introduction to Sequential Monte Carlo An introduction to Sequential Monte Carlo Thang Bui Jes Frellsen Department of Engineering University of Cambridge Research and Communication Club 6 February 2014 1 Sequential Monte Carlo (SMC) methods

More information

Lagrangian Data Assimilation and Manifold Detection for a Point-Vortex Model. David Darmon, AMSC Kayo Ide, AOSC, IPST, CSCAMM, ESSIC

Lagrangian Data Assimilation and Manifold Detection for a Point-Vortex Model. David Darmon, AMSC Kayo Ide, AOSC, IPST, CSCAMM, ESSIC Lagrangian Data Assimilation and Manifold Detection for a Point-Vortex Model David Darmon, AMSC Kayo Ide, AOSC, IPST, CSCAMM, ESSIC Background Data Assimilation Iterative process Forecast Analysis Background

More information

An introduction to particle filters

An introduction to particle filters An introduction to particle filters Andreas Svensson Department of Information Technology Uppsala University June 10, 2014 June 10, 2014, 1 / 16 Andreas Svensson - An introduction to particle filters Outline

More information

Adaptive Population Monte Carlo

Adaptive Population Monte Carlo Adaptive Population Monte Carlo Olivier Cappé Centre Nat. de la Recherche Scientifique & Télécom Paris 46 rue Barrault, 75634 Paris cedex 13, France http://www.tsi.enst.fr/~cappe/ Recent Advances in Monte

More information

Kernel adaptive Sequential Monte Carlo

Kernel adaptive Sequential Monte Carlo Kernel adaptive Sequential Monte Carlo Ingmar Schuster (Paris Dauphine) Heiko Strathmann (University College London) Brooks Paige (Oxford) Dino Sejdinovic (Oxford) December 7, 2015 1 / 36 Section 1 Outline

More information

TSRT14: Sensor Fusion Lecture 8

TSRT14: Sensor Fusion Lecture 8 TSRT14: Sensor Fusion Lecture 8 Particle filter theory Marginalized particle filter Gustaf Hendeby gustaf.hendeby@liu.se TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 1 / 25 Le 8: particle filter theory,

More information

Computer Intensive Methods in Mathematical Statistics

Computer Intensive Methods in Mathematical Statistics Computer Intensive Methods in Mathematical Statistics Department of mathematics johawes@kth.se Lecture 16 Advanced topics in computational statistics 18 May 2017 Computer Intensive Methods (1) Plan of

More information

The Hierarchical Particle Filter

The Hierarchical Particle Filter and Arnaud Doucet http://go.warwick.ac.uk/amjohansen/talks MCMSki V Lenzerheide 7th January 2016 Context & Outline Filtering in State-Space Models: SIR Particle Filters [GSS93] Block-Sampling Particle

More information

Monte Carlo Approximation of Monte Carlo Filters

Monte Carlo Approximation of Monte Carlo Filters Monte Carlo Approximation of Monte Carlo Filters Adam M. Johansen et al. Collaborators Include: Arnaud Doucet, Axel Finke, Anthony Lee, Nick Whiteley 7th January 2014 Context & Outline Filtering in State-Space

More information

Nonparametric Drift Estimation for Stochastic Differential Equations

Nonparametric Drift Estimation for Stochastic Differential Equations Nonparametric Drift Estimation for Stochastic Differential Equations Gareth Roberts 1 Department of Statistics University of Warwick Brazilian Bayesian meeting, March 2010 Joint work with O. Papaspiliopoulos,

More information

Bayesian Inverse problem, Data assimilation and Localization

Bayesian Inverse problem, Data assimilation and Localization Bayesian Inverse problem, Data assimilation and Localization Xin T Tong National University of Singapore ICIP, Singapore 2018 X.Tong Localization 1 / 37 Content What is Bayesian inverse problem? What is

More information

Superparameterization and Dynamic Stochastic Superresolution (DSS) for Filtering Sparse Geophysical Flows

Superparameterization and Dynamic Stochastic Superresolution (DSS) for Filtering Sparse Geophysical Flows Superparameterization and Dynamic Stochastic Superresolution (DSS) for Filtering Sparse Geophysical Flows June 2013 Outline 1 Filtering Filtering: obtaining the best statistical estimation of a nature

More information

L09. PARTICLE FILTERING. NA568 Mobile Robotics: Methods & Algorithms

L09. PARTICLE FILTERING. NA568 Mobile Robotics: Methods & Algorithms L09. PARTICLE FILTERING NA568 Mobile Robotics: Methods & Algorithms Particle Filters Different approach to state estimation Instead of parametric description of state (and uncertainty), use a set of state

More information

A State Space Model for Wind Forecast Correction

A State Space Model for Wind Forecast Correction A State Space Model for Wind Forecast Correction Valrie Monbe, Pierre Ailliot 2, and Anne Cuzol 1 1 Lab-STICC, Université Européenne de Bretagne, France (e-mail: valerie.monbet@univ-ubs.fr, anne.cuzol@univ-ubs.fr)

More information

Data assimilation in high dimensions

Data assimilation in high dimensions Data assimilation in high dimensions David Kelly Courant Institute New York University New York NY www.dtbkelly.com February 12, 2015 Graduate seminar, CIMS David Kelly (CIMS) Data assimilation February

More information

Gaussian Process Approximations of Stochastic Differential Equations

Gaussian Process Approximations of Stochastic Differential Equations Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Dan Cawford Manfred Opper John Shawe-Taylor May, 2006 1 Introduction Some of the most complex models routinely run

More information

Controlled sequential Monte Carlo

Controlled sequential Monte Carlo Controlled sequential Monte Carlo Jeremy Heng, Department of Statistics, Harvard University Joint work with Adrian Bishop (UTS, CSIRO), George Deligiannidis & Arnaud Doucet (Oxford) Bayesian Computation

More information

Smoothers: Types and Benchmarks

Smoothers: Types and Benchmarks Smoothers: Types and Benchmarks Patrick N. Raanes Oxford University, NERSC 8th International EnKF Workshop May 27, 2013 Chris Farmer, Irene Moroz Laurent Bertino NERSC Geir Evensen Abstract Talk builds

More information

Short tutorial on data assimilation

Short tutorial on data assimilation Mitglied der Helmholtz-Gemeinschaft Short tutorial on data assimilation 23 June 2015 Wolfgang Kurtz & Harrie-Jan Hendricks Franssen Institute of Bio- and Geosciences IBG-3 (Agrosphere), Forschungszentrum

More information

(Extended) Kalman Filter

(Extended) Kalman Filter (Extended) Kalman Filter Brian Hunt 7 June 2013 Goals of Data Assimilation (DA) Estimate the state of a system based on both current and all past observations of the system, using a model for the system

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Seminar: Data Assimilation

Seminar: Data Assimilation Seminar: Data Assimilation Jonas Latz, Elisabeth Ullmann Chair of Numerical Mathematics (M2) Technical University of Munich Jonas Latz, Elisabeth Ullmann (TUM) Data Assimilation 1 / 28 Prerequisites Bachelor:

More information

A new iterated filtering algorithm

A new iterated filtering algorithm A new iterated filtering algorithm Edward Ionides University of Michigan, Ann Arbor ionides@umich.edu Statistics and Nonlinear Dynamics in Biology and Medicine Thursday July 31, 2014 Overview 1 Introduction

More information

Data assimilation in high dimensions

Data assimilation in high dimensions Data assimilation in high dimensions David Kelly Kody Law Andy Majda Andrew Stuart Xin Tong Courant Institute New York University New York NY www.dtbkelly.com February 3, 2016 DPMMS, University of Cambridge

More information

SMC 2 : an efficient algorithm for sequential analysis of state-space models

SMC 2 : an efficient algorithm for sequential analysis of state-space models SMC 2 : an efficient algorithm for sequential analysis of state-space models N. CHOPIN 1, P.E. JACOB 2, & O. PAPASPILIOPOULOS 3 1 ENSAE-CREST 2 CREST & Université Paris Dauphine, 3 Universitat Pompeu Fabra

More information

Kernel Sequential Monte Carlo

Kernel Sequential Monte Carlo Kernel Sequential Monte Carlo Ingmar Schuster (Paris Dauphine) Heiko Strathmann (University College London) Brooks Paige (Oxford) Dino Sejdinovic (Oxford) * equal contribution April 25, 2016 1 / 37 Section

More information

Fundamentals of Data Assimila1on

Fundamentals of Data Assimila1on 014 GSI Community Tutorial NCAR Foothills Campus, Boulder, CO July 14-16, 014 Fundamentals of Data Assimila1on Milija Zupanski Cooperative Institute for Research in the Atmosphere Colorado State University

More information

Bayesian Calibration of Simulators with Structured Discretization Uncertainty

Bayesian Calibration of Simulators with Structured Discretization Uncertainty Bayesian Calibration of Simulators with Structured Discretization Uncertainty Oksana A. Chkrebtii Department of Statistics, The Ohio State University Joint work with Matthew T. Pratola (Statistics, The

More information

Model Uncertainty Quantification for Data Assimilation in partially observed Lorenz 96

Model Uncertainty Quantification for Data Assimilation in partially observed Lorenz 96 Model Uncertainty Quantification for Data Assimilation in partially observed Lorenz 96 Sahani Pathiraja, Peter Jan Van Leeuwen Institut für Mathematik Universität Potsdam With thanks: Sebastian Reich,

More information

Sensor Fusion: Particle Filter

Sensor Fusion: Particle Filter Sensor Fusion: Particle Filter By: Gordana Stojceska stojcesk@in.tum.de Outline Motivation Applications Fundamentals Tracking People Advantages and disadvantages Summary June 05 JASS '05, St.Petersburg,

More information

ECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering

ECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering ECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering Lecturer: Nikolay Atanasov: natanasov@ucsd.edu Teaching Assistants: Siwei Guo: s9guo@eng.ucsd.edu Anwesan Pal:

More information

Negative Association, Ordering and Convergence of Resampling Methods

Negative Association, Ordering and Convergence of Resampling Methods Negative Association, Ordering and Convergence of Resampling Methods Nicolas Chopin ENSAE, Paristech (Joint work with Mathieu Gerber and Nick Whiteley, University of Bristol) Resampling schemes: Informal

More information

Learning Static Parameters in Stochastic Processes

Learning Static Parameters in Stochastic Processes Learning Static Parameters in Stochastic Processes Bharath Ramsundar December 14, 2012 1 Introduction Consider a Markovian stochastic process X T evolving (perhaps nonlinearly) over time variable T. We

More information

Introduction to Bayesian methods in inverse problems

Introduction to Bayesian methods in inverse problems Introduction to Bayesian methods in inverse problems Ville Kolehmainen 1 1 Department of Applied Physics, University of Eastern Finland, Kuopio, Finland March 4 2013 Manchester, UK. Contents Introduction

More information

Data assimilation with and without a model

Data assimilation with and without a model Data assimilation with and without a model Tim Sauer George Mason University Parameter estimation and UQ U. Pittsburgh Mar. 5, 2017 Partially supported by NSF Most of this work is due to: Tyrus Berry,

More information

Imprecise Filtering for Spacecraft Navigation

Imprecise Filtering for Spacecraft Navigation Imprecise Filtering for Spacecraft Navigation Tathagata Basu Cristian Greco Thomas Krak Durham University Strathclyde University Ghent University Filtering for Spacecraft Navigation The General Problem

More information

Sequential Monte Carlo Methods (for DSGE Models)

Sequential Monte Carlo Methods (for DSGE Models) Sequential Monte Carlo Methods (for DSGE Models) Frank Schorfheide University of Pennsylvania, PIER, CEPR, and NBER October 23, 2017 Some References These lectures use material from our joint work: Tempered

More information

Sequential Monte Carlo Methods for High-Dimensional Inverse Problems: A case study for the Navier-Stokes equations

Sequential Monte Carlo Methods for High-Dimensional Inverse Problems: A case study for the Navier-Stokes equations SIAM/ASA J. UNCERTAINTY QUANTIFICATION Vol. xx, pp. x c xxxx Society for Industrial and Applied Mathematics x x Sequential Monte Carlo Methods for High-Dimensional Inverse Problems: A case study for the

More information

What do we know about EnKF?

What do we know about EnKF? What do we know about EnKF? David Kelly Kody Law Andrew Stuart Andrew Majda Xin Tong Courant Institute New York University New York, NY April 10, 2015 CAOS seminar, Courant. David Kelly (NYU) EnKF April

More information

Distributed Particle Filters: Stability Results and Graph-based Compression of Weighted Particle Clouds

Distributed Particle Filters: Stability Results and Graph-based Compression of Weighted Particle Clouds Distributed Particle Filters: Stability Results and Graph-based Compression of Weighted Particle Clouds Michael Rabbat Joint with: Syamantak Datta Gupta and Mark Coates (McGill), and Stephane Blouin (DRDC)

More information

Data assimilation with and without a model

Data assimilation with and without a model Data assimilation with and without a model Tyrus Berry George Mason University NJIT Feb. 28, 2017 Postdoc supported by NSF This work is in collaboration with: Tim Sauer, GMU Franz Hamilton, Postdoc, NCSU

More information

Latent state estimation using control theory

Latent state estimation using control theory Latent state estimation using control theory Bert Kappen SNN Donders Institute, Radboud University, Nijmegen Gatsby Unit, UCL London August 3, 7 with Hans Christian Ruiz Bert Kappen Smoothing problem Given

More information

A new Hierarchical Bayes approach to ensemble-variational data assimilation

A new Hierarchical Bayes approach to ensemble-variational data assimilation A new Hierarchical Bayes approach to ensemble-variational data assimilation Michael Tsyrulnikov and Alexander Rakitko HydroMetCenter of Russia College Park, 20 Oct 2014 Michael Tsyrulnikov and Alexander

More information

A nonparametric ensemble transform method for Bayesian inference

A nonparametric ensemble transform method for Bayesian inference A nonparametric ensemble transform method for Bayesian inference Article Published Version Reich, S. (2013) A nonparametric ensemble transform method for Bayesian inference. SIAM Journal on Scientific

More information

Dynamic System Identification using HDMR-Bayesian Technique

Dynamic System Identification using HDMR-Bayesian Technique Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in

More information

Monte Carlo methods for sampling-based Stochastic Optimization

Monte Carlo methods for sampling-based Stochastic Optimization Monte Carlo methods for sampling-based Stochastic Optimization Gersende FORT LTCI CNRS & Telecom ParisTech Paris, France Joint works with B. Jourdain, T. Lelièvre, G. Stoltz from ENPC and E. Kuhn from

More information

A nested sampling particle filter for nonlinear data assimilation

A nested sampling particle filter for nonlinear data assimilation Quarterly Journal of the Royal Meteorological Society Q. J. R. Meteorol. Soc. : 14, July 2 A DOI:.2/qj.224 A nested sampling particle filter for nonlinear data assimilation Ahmed H. Elsheikh a,b *, Ibrahim

More information

Bayesian parameter estimation in predictive engineering

Bayesian parameter estimation in predictive engineering Bayesian parameter estimation in predictive engineering Damon McDougall Institute for Computational Engineering and Sciences, UT Austin 14th August 2014 1/27 Motivation Understand physical phenomena Observations

More information

Sequential Monte Carlo and Particle Filtering. Frank Wood Gatsby, November 2007

Sequential Monte Carlo and Particle Filtering. Frank Wood Gatsby, November 2007 Sequential Monte Carlo and Particle Filtering Frank Wood Gatsby, November 2007 Importance Sampling Recall: Let s say that we want to compute some expectation (integral) E p [f] = p(x)f(x)dx and we remember

More information

A new class of interacting Markov Chain Monte Carlo methods

A new class of interacting Markov Chain Monte Carlo methods A new class of interacting Marov Chain Monte Carlo methods P Del Moral, A Doucet INRIA Bordeaux & UBC Vancouver Worshop on Numerics and Stochastics, Helsini, August 2008 Outline 1 Introduction Stochastic

More information

Lagrangian data assimilation for point vortex systems

Lagrangian data assimilation for point vortex systems JOT J OURNAL OF TURBULENCE http://jot.iop.org/ Lagrangian data assimilation for point vortex systems Kayo Ide 1, Leonid Kuznetsov 2 and Christopher KRTJones 2 1 Department of Atmospheric Sciences and Institute

More information

State Space Models for Wind Forecast Correction

State Space Models for Wind Forecast Correction for Wind Forecast Correction Valérie 1 Pierre Ailliot 2 Anne Cuzol 1 1 Université de Bretagne Sud 2 Université de Brest MAS - 2008/28/08 Outline 1 2 Linear Model : an adaptive bias correction Non Linear

More information

Sequential Monte Carlo samplers for Bayesian DSGE models

Sequential Monte Carlo samplers for Bayesian DSGE models Sequential Monte Carlo samplers for Bayesian DSGE models Drew Creal Department of Econometrics, Vrije Universitiet Amsterdam, NL-8 HV Amsterdam dcreal@feweb.vu.nl August 7 Abstract Bayesian estimation

More information

Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations

Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Department of Biomedical Engineering and Computational Science Aalto University April 28, 2010 Contents 1 Multiple Model

More information

4. DATA ASSIMILATION FUNDAMENTALS

4. DATA ASSIMILATION FUNDAMENTALS 4. DATA ASSIMILATION FUNDAMENTALS... [the atmosphere] "is a chaotic system in which errors introduced into the system can grow with time... As a consequence, data assimilation is a struggle between chaotic

More information

A variational radial basis function approximation for diffusion processes

A variational radial basis function approximation for diffusion processes A variational radial basis function approximation for diffusion processes Michail D. Vrettas, Dan Cornford and Yuan Shen Aston University - Neural Computing Research Group Aston Triangle, Birmingham B4

More information

Tutorial on ABC Algorithms

Tutorial on ABC Algorithms Tutorial on ABC Algorithms Dr Chris Drovandi Queensland University of Technology, Australia c.drovandi@qut.edu.au July 3, 2014 Notation Model parameter θ with prior π(θ) Likelihood is f(ý θ) with observed

More information

LS-N-IPS: AN IMPROVEMENT OF PARTICLE FILTERS BY MEANS OF LOCAL SEARCH

LS-N-IPS: AN IMPROVEMENT OF PARTICLE FILTERS BY MEANS OF LOCAL SEARCH LS--IPS: A IMPROVEMET OF PARTICLE FILTERS BY MEAS OF LOCAL SEARCH Péter Torma Csaba Szepesvári Eötvös Loránd University, Budapest Mindmaker Ltd. H-1121 Budapest, Konkoly Thege Miklós út 29-33. Abstract:

More information

Lecture Particle Filters

Lecture Particle Filters FMS161/MASM18 Financial Statistics November 29, 2010 Monte Carlo filters The filter recursions could only be solved for HMMs and for linear, Gaussian models. Idea: Approximate any model with a HMM. Replace

More information

EnKF and filter divergence

EnKF and filter divergence EnKF and filter divergence David Kelly Andrew Stuart Kody Law Courant Institute New York University New York, NY dtbkelly.com December 12, 2014 Applied and computational mathematics seminar, NIST. David

More information

Particle Filters. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics

Particle Filters. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics Particle Filters Pieter Abbeel UC Berkeley EECS Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics Motivation For continuous spaces: often no analytical formulas for Bayes filter updates

More information

Introduction to Particle Filters for Data Assimilation

Introduction to Particle Filters for Data Assimilation Introduction to Particle Filters for Data Assimilation Mike Dowd Dept of Mathematics & Statistics (and Dept of Oceanography Dalhousie University, Halifax, Canada STATMOS Summer School in Data Assimila5on,

More information

MCMC Sampling for Bayesian Inference using L1-type Priors

MCMC Sampling for Bayesian Inference using L1-type Priors MÜNSTER MCMC Sampling for Bayesian Inference using L1-type Priors (what I do whenever the ill-posedness of EEG/MEG is just not frustrating enough!) AG Imaging Seminar Felix Lucka 26.06.2012 , MÜNSTER Sampling

More information

Gaussian Process Approximations of Stochastic Differential Equations

Gaussian Process Approximations of Stochastic Differential Equations Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Centre for Computational Statistics and Machine Learning University College London c.archambeau@cs.ucl.ac.uk CSML

More information

Bayesian Inference: Principles and Practice 3. Sparse Bayesian Models and the Relevance Vector Machine

Bayesian Inference: Principles and Practice 3. Sparse Bayesian Models and the Relevance Vector Machine Bayesian Inference: Principles and Practice 3. Sparse Bayesian Models and the Relevance Vector Machine Mike Tipping Gaussian prior Marginal prior: single α Independent α Cambridge, UK Lecture 3: Overview

More information

LTCC: Advanced Computational Methods in Statistics

LTCC: Advanced Computational Methods in Statistics LTCC: Advanced Computational Methods in Statistics Advanced Particle Methods & Parameter estimation for HMMs N. Kantas Notes at http://wwwf.imperial.ac.uk/~nkantas/notes4ltcc.pdf Slides at http://wwwf.imperial.ac.uk/~nkantas/slides4.pdf

More information

Nonlinear Filtering. With Polynomial Chaos. Raktim Bhattacharya. Aerospace Engineering, Texas A&M University uq.tamu.edu

Nonlinear Filtering. With Polynomial Chaos. Raktim Bhattacharya. Aerospace Engineering, Texas A&M University uq.tamu.edu Nonlinear Filtering With Polynomial Chaos Raktim Bhattacharya Aerospace Engineering, Texas A&M University uq.tamu.edu Nonlinear Filtering with PC Problem Setup. Dynamics: ẋ = f(x, ) Sensor Model: ỹ = h(x)

More information