
The Diffusion Kernel Filter Applied to Lagrangian Data Assimilation

Paul Krause (1) and Juan M. Restrepo (1,2)

(1) Department of Mathematics, University of Arizona, Tucson, AZ, U.S.A.
(2) Department of Physics, University of Arizona, Tucson, AZ, U.S.A.

November 22, 2008

Corresponding author: restrepo@math.arizona.edu

Abstract

The Diffusion Kernel Filter is a sequential particle-method approach to the data assimilation of time series data and evolutionary models. The method is applicable to nonlinear/non-Gaussian problems. The method is arrived at by a parametrization of small fluctuations of Brownian-driven paths about a deterministic path. Its implementation is relatively straightforward, provided a tangent linear model is available. A by-product of the parametrization is a bound on the sup-norm of the covariance matrix of such fluctuations. This norm can be put to good use, independent of the filter itself: among other things, it can quantify the information entropy of the branches of prediction, and thus can be used to define quantitatively a notion of predictability. Here we use it to quantify the degree of complexity of an estimation problem.

In pure oceanic Lagrangian data assimilation the dynamics and the statistics are nonlinear and non-Gaussian, respectively. Both of these characteristics greatly challenge linearized Gaussian estimation methods, such as extended Kalman filters. The Diffusion Kernel Filter is proposed as an alternative and is evaluated here on a problem that is often used as a testbed for Lagrangian data assimilation: the problem consists of tracking point vortices and passive drifters, using a dynamical model and data, both of which have known error statistics. We find that the Diffusion Kernel Filter captures the first three moments of the optimal history, with a computational cost that is competitive with a particle filter estimation strategy, for moderate dimensions. We also introduce a clustered version of the Diffusion Kernel Filter (cdkf), which we show is significantly more efficient with regard to computational costs, with little to no appreciable deterioration in the estimates of the first three moments. Parallelizing branches of prediction, cdkf can be computationally competitive with EKF, yet capable of handling non-Gaussian problems.

Keywords: data assimilation, filter, Lagrangian data assimilation, particle method, diffusion kernel filter.

PACS: 94.05.Sx, …

1 Introduction

The Diffusion Kernel Filter (DKF) is a particle-based estimation strategy. It can be applied to problems in which an evolutionary model and a time series representing data are to be assimilated; the filter estimates the first few moments of the dynamical history, conditioned on observations. The model and the observations will in general have inherent errors; for example, the error can represent unresolved physical processes. The statistics of the error terms are presumed known. DKF does not assume linearity in the dynamics or Gaussianity in the statistics.

The method was introduced by Krause (2008). It is applicable to small diffusion processes and was derived by reformulating the Ito nonlinear SODE (stochastic ordinary differential equation) problem for diffusion processes as a Liouville SPDE (stochastic partial differential equation) problem, applying Duhamel's principle to this problem, restricting the resulting representation to nonlinear SODE open problems for the flows of branches of prediction, and closing these problems. This was inspired by Chorin et al. (2002), where a similar technique is used to tackle the dimension reduction problem for the dynamics of a nonlinear ODE (ordinary differential equation). The data assimilation problem and a description of DKF appear in Sections 2 and 3, respectively. The algorithmic aspects are covered in Section 4.

The state vector for which an estimator is sought can consist of dynamic quantities (e.g., physical variables) as well as parameters. Hence the dimension of the state variable and that of the dynamic variable may differ. The dimension of the measurement vector may be different from that of the state vector as well.

For linear dynamics and Gaussian error statistics an optimal smoother of the history and its uncertainty is provided by variance-minimizing least squares, the variational data assimilation approach, or the Kalman filter/smoother (see Wunsch (1996) for details on these). Two commonly used techniques in nonlinear and weakly non-Gaussian contexts are the extended Kalman filter/smoother (EKF/S) and the ensemble Kalman filter/smoother (EnKF/S). The extended Kalman filter/smoother uses a linear forecast with a linear analysis technique. The ensemble Kalman filter, on the other hand, makes a nonlinear forecast but retains the use of the linear analysis (see Evensen (1997) and references contained therein). Ensemble Kalman approaches and the variational approach (4DVar) are presently being evaluated in operational forecast models for weather and climate (see Gustafsson (2007) and references contained therein).

There are a number of data assimilation strategies which specifically target problems in which nonlinearity and/or non-Gaussianity pose significant assimilation challenges. Among them are: the optimal approach of Kushner (1962) (see as well Kushner (1967b) and Kushner (1967a); also Stratonovich (1960) and Pardoux (1982)); the mean-field variational strategy of Eyink et al. (2004) (see also Eyink and Restrepo (2000) and Eyink et al. (2002)); particle methods, such as those alluded to in Leeuwen (2003) and Kim et al. (2003) (see Arulampalam et al. (2002) for a tutorial); direct Monte Carlo sampling (see Pham (2001)); and the Langevin approaches, such as that of Hairer et al. (2007) and the path-integral sampling-based methods of Alexander et al. (2005) and Restrepo (2008). A common trait of the methods specifically developed for nonlinear/non-Gaussian problems is that they are computationally intensive and thus impractical for problems in which the state vector is large in dimension. This computational constraint, however, should not be construed as a practical failure; not every time-dependent estimation problem of interest has dimensions comparable to those of a weather forecasting problem, for example.

If one sidesteps the issue of statistical convergence of the estimates, a remarkable thing about EKF and 4DVar is that these estimation methods work better than one would think possible on temporal, data-rich problems that are statistically weakly nonlinear, i.e., close to Gaussian (at least in controlled numerical experiments, and under the stipulation that only the mean and the variance are to be examined; see Gilmour et al. (2001) and Lawless et al. (2005) for relevant discussions). Notwithstanding, it is not surprising that linear methods are expected to produce poor estimates (for example, to be incapable of minimizing the variance estimate), or even fail outright to obtain the first moment, on problems in which nonlinearity and/or non-Gaussianity are important. Assimilation failure can be exacerbated when there are very few observations in time or the uncertainty in the data is sufficiently high. For these problems it is immaterial whether the quasi-linear methods are more efficient computationally than their nonlinear/non-Gaussian counterparts. The larger challenge posed by using EKF and 4DVar in problems that test their limitations is that there are few ways to obtain quantitative evaluations of the quality of their estimates, and in this regard the utility of optimal and near-optimal nonlinear/non-Gaussian estimation methods cannot be overemphasized. Hence, the nonlinear/non-Gaussian methods alluded to above can play a practical and important role in large nonlinear non-Gaussian problems: these near-optimal methods can be used to critically benchmark the results of linearized operational methods, for which optimality bounds are unavailable.

A type of problem for which the DKF strategy is well suited is the highly nonlinear and non-Gaussian problem of oceanic (or atmospheric) Lagrangian estimation. In Section 5 we will show how DKF performs on a problem that has been frequently used to test Lagrangian estimation ideas; we revisit the very same problem considered by Ide et al. (2002) (see also Kuznetsov et al. (2003) and Özgökmen et al. (2000)), which is often used as a test case for what they refer to as oceanic Lagrangian data assimilation. The specific problem is the estimation of the coupled dynamics of passive drifters and a fluid flow, complemented by partial or complete measurements of both drifters and the flow. Rather than using mixed Eulerian/Lagrangian frames, they proposed a purely Lagrangian framework for the estimation of the mean history and uncertainty of the drifters and the flow. The purely Lagrangian framework is inspired by real and practical circumstances: this type of approach answers the need to develop assimilation methodologies applicable to the current trend in oceanic data-gathering strategies. The oceans are presently, and will be in the near future, measured by a variety of active and passive platforms and drifters, which themselves have possibly complex path-following dynamics. Furthermore, an implication of the work of Kuznetsov and collaborators is that assimilation of drifters and tracers in many complex flows can be performed by recasting or approximating the flow in terms of interacting point vortices or other coherent structures, using a combination of path-following data assimilation and powerful dynamical systems theory to gain insight into and understanding of flow problems of potential geophysical significance. If estimation techniques are to be used to complement the analysis of these types of problems, or to deal sensibly with their sensitivity to initial conditions, an estimation technique that can handle nonlinear dynamics and/or non-Gaussian statistics is needed.

The specific estimation technique adopted by Kuznetsov and collaborators for the Lagrangian data assimilation problem is known in the control theory community as the constrained extended Kalman filter (EKF) technique (see Simon and Chia (2002), and references contained therein). An attractive feature of this strategy is its computational efficiency, specifically with regard to handling the covariance matrix, when the dimensions of the state space become large (see Kuznetsov et al. (2003)). In Section 5 we will compare their EKF, the proposed DKF, and a bootstrap filter estimate.

Proposing DKF as an alternative to the EKF is aimed at circumventing the propensity of the EKF methodology to fail on nonlinear and non-Gaussian problems, such as the drifter/flow problem featured in this study. While Kuznetsov and collaborators were aware of the failures of their EKF-based filtering scheme, they were tentative with regard to how and why their method failed: they did not compare the results of their filtering scheme to those of an optimal estimator. We do so here: a comparison of the EKF results to the bootstrap filter is included and used to shed light on this issue. The bootstrap filter estimate is a benchmark because it is proven to be convergent (see Crisan et al. (1999)). We suggest DKF, which we will show is reasonably accurate, as a replacement for the EKF, in order to make Lagrangian data assimilation a robust and viable assimilation technique. Its advantage over other optimal techniques, such as the technique of Kushner (1962), or near-optimal ones, such as those of Restrepo (2008) or Eyink et al. (2004), is that DKF is amenable to a clustering that reduces the computational overhead significantly, thus permitting consideration of significantly larger state estimation problems.

In Section 7 we discuss the computational expense incurred by DKF. We show that DKF is competitive with the bootstrap filter. By clustering along branches of prediction the DKF can be made significantly more efficient computationally, with little deterioration of the estimates. In Section 6 we describe this alternative implementation, which we call the clustered DKF, or cdkf.

The DKF is built upon a diffusion kernel. This kernel has several attractive properties which can be exploited practically, independent of the filtering methodology proposed here. For example, its norm can be used to define a notion of predictability; the information entropy of branches of prediction will be estimated using the uncertainty norm, and here it will be applied to assess the complexity of the dynamics with regard to predictability. The uncertainty norm is presented in Section 3 and applied in Section 5.

2 Problem Statement

Considered here is the determination of the statistics of the state variable x(t), whose dimension is N_x, given incomplete and possibly imprecise observations of that system. Here t is an indexing parameter, taken to represent time. The state vector x is assumed to satisfy

    dx(t) = f(x(t)) dt + g dw,   t > t_0,   x(t_0) = x_0,   (1)

where g := (2D)^{1/2}(x, t) and the covariance matrix D has dimension N_x × N_x. The deterministic part of the dynamics of x is given by f. The term dw is a Wiener incremental process of dimension N_x with independent components. The covariance of the stochastic component of the model is assumed known, as is the distribution of the initial conditions x_0. The situation featured here is the stochastic differential equation (SDE) with additive noise. Stochasticity might be inherent in the system dynamics and/or might arise from parametrizations of unknown or unresolved physics, or from ignored degrees of freedom in the dynamics.

Observations at discrete times are denoted by the N_y-dimensional vector y(t_k) ≡ y_k, k = 1, 2, …, K, t_0 ≤ t_k ≤ t_f, where k labels each observation. The relationship between the observations and the state variable at different times is given by

    y_k = h(x_k) + ε_k,   (2)

where h : R^{N_x} → R^{N_y}, and ε_k is an N_y-dimensional zero-mean noise vector with a known statistical distribution. We will use a Gaussian distribution for the measurement errors in the computational examples presented later.

The filtering problem consists of sampling the probability density p(x | y_k), k = 1, 2, …, K. The Bayesian approach to sampling p(x | y_k) consists of using the definition of the conditional probability density:

    p(x | y_k) = p(y_k | x) p(x) / p(y_k).   (3)
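To fix ideas, the following is a minimal sketch, not from the paper, of drawing sample paths of (1) by Euler–Maruyama and forming the normalized Bayes weights implied by (2) and (3); the function names, the integrator, and the scalar-variance Gaussian likelihood are our own illustrative choices.

```python
import numpy as np

def euler_maruyama(f, g, x0, t0, t1, dt, rng):
    """Integrate dx = f(x) dt + g dw from t0 to t1 for one sample path;
    g is assumed constant (the additive-noise case of equation (1))."""
    x = np.array(x0, dtype=float)
    for _ in range(int(round((t1 - t0) / dt))):
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Wiener increment
        x = x + f(x) * dt + g @ dw
    return x

def bayes_weights(samples, y, h, rho2):
    """Normalized weights p(y | x) / sum p(y | x), cf. (3), for Gaussian
    measurement error with variance rho2 in each component of (2)."""
    w = np.array([np.exp(-0.5 * np.sum((y - h(x)) ** 2) / rho2)
                  for x in samples])
    return w / w.sum()
```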

The Diffusion Kernel will be used to propose the Diffusion Kernel Filter (DKF), a filtering strategy. This is done here within a particle filter framework. The filtering strategy thus consists of obtaining samples of the conditional probability at t_k sequentially, using samples of the probability at t_{k−1}.

3 The Diffusion Kernel

Let Φ_j(ξ(t_k, i); t), t_k ≤ t ≤ t_{k+1}, j = 1, …, J_{k,i}, be a set of sample paths that solve

    dx = f(x) dt + g(x) dw,   t ∈ [t_k, t_{k+1}],   (4)
    x(t_k) = ξ(t_k, i) ∈ R^{N_x},   (5)

where the ξ(t_k, i) are samples of the conditional density at time t_k. We decompose Φ = φ + Φ′, where φ(ξ(t_k, i); t) is the deterministic path with initial value ξ(t_k, i) that satisfies dx/dt = f(x), t ∈ [t_k, t_{k+1}]. This decomposition is illustrated in Figure 1. As is summarized in Appendix A (see Krause (2008) for details),

    Φ′(ξ(t_k, i), t) ≈ ∇φ(ξ(t_k, i), t) ∫_{t_k}^{t} g(φ(ξ(t_k, i), s)) dw(s − t_k) = ∫_{t_k}^{t} G(ξ(t_k, i), t, s) dw(s − t_k),

where

    G(ξ(t_k, i), t, s) := ∇φ(ξ(t_k, i), t) g(φ(ξ(t_k, i), s))   (6)

is the Diffusion Kernel. If the diffusion matrix happens to be a constant, g = g_0 say, then Φ′(ξ(t_k, i), t) ≈ ∇φ(ξ(t_k, i), t) g_0 w(t − t_k). In the numerical examples presented later this will be the situation encountered.

3.1 The Tangent Linear Model

The equation for ∇φ is

    d∇φ/dt = ∇f(φ) ∇φ,   t ∈ [t_k, t_{k+1}],   ∇φ(t_k) = I,   (7)

where I is the N_x × N_x identity matrix.

3.2 The Uncertainty Norm

Two aspects of the diffusion kernel will be discussed later: it can be used to obtain a bound on the covariance matrix of the fluctuation in what we will call branches of prediction, and it can be used to define quantitatively a notion of predictability. In both, the norm of the diffusion kernel is used. We call ‖G‖(t) the Uncertainty Norm for branches of prediction. It is defined mathematically as

    ‖G‖(t) := ( max_{r=1,…,N_x} ∫_{t_k}^{t} E[ G_r(ξ(t_k, i), t, s) G_r^*(ξ(t_k, i), t, s) ] ds )^{1/2},

where G_r and G_r^* refer to a row of G and its conjugate transpose.
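For a constant diffusion matrix g = g_0 the objects above are simple to compute. Below is a minimal sketch, our own and with forward Euler chosen purely for brevity, of integrating the deterministic path jointly with the tangent linear model (7), drawing the fluctuation Φ′ ≈ ∇φ g_0 w(t − t_k), and evaluating the uncertainty norm in its constant-g form (cf. (9) below).

```python
import numpy as np

def propagate_branch(f, jac_f, x0, dt, n_steps):
    """Jointly integrate dphi/dt = f(phi) and the TLM (7),
    d(grad_phi)/dt = grad_f(phi) grad_phi, with grad_phi(t_k) = I.
    Forward Euler is used only for brevity of the sketch."""
    phi = np.array(x0, dtype=float)
    grad_phi = np.eye(len(phi))
    for _ in range(n_steps):
        grad_phi = grad_phi + dt * (jac_f(phi) @ grad_phi)
        phi = phi + dt * f(phi)
    return phi, grad_phi

def sample_fluctuation(grad_phi, g0, elapsed, rng):
    """Draw Phi' ~ grad_phi g0 w(t - t_k) for constant g = g0."""
    w = rng.normal(0.0, np.sqrt(elapsed), size=grad_phi.shape[0])
    return grad_phi @ (g0 @ w)

def uncertainty_norm(grad_phi, g0, elapsed):
    """||G||(t) = sqrt(t - t_k) max_r ||(grad_phi g0)_r||_2, cf. (9)."""
    rows = grad_phi @ g0
    return np.sqrt(elapsed) * np.max(np.linalg.norm(rows, axis=1))
```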

The uncertainty norm was shown by Krause (2008) to bound the sup-norm of the covariance matrix of Φ′(ξ(t_k, i), t):

    ‖Cov(Φ′)‖_∞ ≤ ‖G‖²,   t_k ≤ t ≤ t_{k+1}.   (8)

This result holds as long as Φ′(ξ(t_k, i), t) is small enough. In the special case where the diffusion matrix is constant, g = g_0, one obtains

    ‖G‖(t) = (t − t_k)^{1/2} max_{r=1,…,N_x} ‖(∇φ g_0)_r‖_2.   (9)

The key observation is that Cov(Φ′), a measure of entropy of the branch, will be small whenever ‖G‖ is small; that is, for small diffusion kernels within branches of prediction.

4 The Diffusion Kernel Filter (DKF)

The aim is to sample the random variable Φ(x) | (y_1, …, y_k)(t), k = 1, …, K, for 0 ≤ t ≤ t_f, and with these samples compute approximations to moments of the variable. Since we will be approximating solutions of the stochastic differential equation (4) by a time discretization, the sample distribution will be obtained on a grid in t. It suffices to describe how the algorithm proceeds between t_k and t_{k+1}, the filtering times, where 0 ≤ t_k ≤ t_f. Let Δt be the time interval between filtering times, for simplicity taken here as constant. Let I denote the total number of sample paths, over all branches of prediction. This number is fixed in what follows, over each Δt. In the time interval [t_k, t_{k+1}], each branch of prediction i = 1, 2, …, I_k is composed of J_{k,i} sample paths, so that I = Σ_{i=1}^{I_k} J_{k,i}.

The DKF may be built upon the structure of the bootstrap particle filter, using the diffusion kernel. See Gordon et al. (1993) for the specific particle filter algorithm, the bootstrap particle filter, which we implement and modify to create the DKF (see Arulampalam et al. (2002) for a tutorial on particle filters). The choice of the bootstrap particle filter was dictated by simplicity of presentation; however, it is not difficult to imagine that the diffusion kernel filter could be designed around a more complex particle filter. The estimates derived from the bootstrap particle filter will be used in the example calculations that appear later to diagnose the quality of the estimates given by the DKF and EKF (as implemented by Kuznetsov and collaborators). The particle filter offers a benchmark because it is known to converge statistically for linear or nonlinear problems with Gaussian and non-Gaussian statistics (see Crisan et al. (1999)).

For simplicity, we will specialize in what follows to models whose g is a constant matrix, or a matrix that depends on time t and the vector Wiener process w(t − t_k) in such a way that ∫_{t_k}^{t} g(s, w(s − t_k)) dw(s − t_k) can be computed analytically, as a function of time. This precludes cases in which the history of this process has to be sampled.

Bootstrap Filter Algorithm

1. Samples of the prior distribution at t_{k+1} are denoted by Φ_j(ξ(t_k, i), t_{k+1}). They are obtained by numerically solving (4), with initial condition

    x(t_k) = ξ(t_k, i),   (10)

J_{k,i} times. Here i = 1, …, I_k.

2. Compute p(y_{k+1} | x^{(i,j)})(t_{k+1}) for x^{(i,j)} := Φ_j(ξ(t_k, i), t_{k+1}) using (2) (the distribution of ε is assumed known). Normalizing,

    ω(t_{k+1}, i, j) := p(y_{k+1} | x^{(i,j)})(t_{k+1}) / Σ_{ı,j} p(y_{k+1} | x^{(ı,j)})(t_{k+1}),   (11)

we obtain the weighting.

3. Combining Φ_j(ξ(t_k, i), t_{k+1}) and ω(t_{k+1}, i, j), one obtains weighted samples of the posterior p(x | y_{k+1})(t_{k+1}) in accordance with (3).

4. Draw I samples from the discrete random variable (i, j) with mass function ω(t_{k+1}, i, j). Let (i′, j′) be these samples, without repetition, and J_{(i′,j′)} the number of times they are repeated. Re-label the sets Φ_{j′}(ξ(t_k, i′), t_{k+1}) and J_{(i′,j′)} as ξ(t_{k+1}, i) and J_{k+1,i}, for i = 1, 2, …, I_{k+1}.

5. The process is then repeated in the next filtering time interval, thus obtaining a sequential algorithm (a code sketch of one such cycle follows below).
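For concreteness, here is a minimal sketch of one bootstrap cycle in this notation; it is our own code, reuses euler_maruyama and bayes_weights from the sketch in Section 2, and simplifies the resampling bookkeeping of step 4.

```python
import numpy as np

def bootstrap_cycle(f, g0, xi, counts, y, h, rho2, dt_sde, n_steps, rng):
    """One bootstrap-filter cycle t_k -> t_{k+1}: propagate every sample
    path through the SODE (4), weight by the likelihood (11), resample."""
    samples = []
    for x0, J in zip(xi, counts):            # branch seeds xi(t_k, i)
        for _ in range(J):                   # J_{k,i} paths per branch
            samples.append(euler_maruyama(f, g0, x0, 0.0,
                                          n_steps * dt_sde, dt_sde, rng))
    w = bayes_weights(samples, y, h, rho2)   # steps 2-3
    I = len(samples)
    picks = rng.choice(I, size=I, p=w)       # step 4: draw I samples
    uniq, J_next = np.unique(picks, return_counts=True)
    return [samples[u] for u in uniq], list(J_next)
```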

The DKF algorithm introduces an approximation to the bootstrap particle filter at the level of the stochastic dynamics in the prediction step: by linearizing the equations and solving them for several branches of prediction, the nonlinear dynamics are approximated by a sum of Gaussian branches of prediction (which is valid for short prediction steps).

DKF Algorithm

1. For t_k ≤ t ≤ t_{k+1}, (7) is solved, for i = 1, 2, …, I_k, in order to obtain ∇φ(ξ(t_k, i), t_{k+1}).

2. For t_k ≤ t ≤ t_{k+1}, (4)-(5) is solved with dw set to zero, using a faster deterministic ODE integrator, to obtain φ(ξ(t_k, i), t_{k+1}).

3. At time t_{k+1}, draw J_{k,i} samples Φ′^{(i,j)} from

    Φ′(ξ(t_k, i), t_{k+1}) := ∇φ(ξ(t_k, i), t_{k+1}) ∫_{t_k}^{t_{k+1}} g(s, w(s − t_k)) dw(s − t_k).   (12)

Note that if g is constant the integral is a simple product.

4. Write Φ_j(ξ(t_k, i), t_{k+1}) = φ(ξ(t_k, i), t_{k+1}) + Φ′^{(i,j)}, for i = 1, …, I_k and j = 1, …, J_{k,i}, thus obtaining I samples of the prior probability density p(x)(t_{k+1}).

5. Proceed as in steps 2-5 of the Bootstrap Filter Algorithm.

A characteristic of typical explicit or semi-implicit stochastic differential equation integration schemes is that they generally require significantly smaller time steps than a comparable deterministic integrator for the differential equation without the stochastic/diffusion term. This has a bearing on the estimate of the computational cost of DKF, as compared to the bootstrap filter.
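The corresponding DKF cycle, again as our own sketch for constant g = g_0, reuses propagate_branch, sample_fluctuation and bayes_weights from the sketches above; it replaces the SODE integration of bootstrap_cycle with the deterministic/TLM prediction of steps 1-4.

```python
import numpy as np

def dkf_cycle(f, jac_f, g0, xi, counts, y, h, rho2, dt, n_steps, rng):
    """One DKF cycle t_k -> t_{k+1} for constant g = g0."""
    elapsed = dt * n_steps
    samples = []
    for x0, J in zip(xi, counts):
        # Steps 1-2: deterministic path phi and tangent linear model (7).
        phi, grad_phi = propagate_branch(f, jac_f, x0, dt, n_steps)
        for _ in range(J):
            # Steps 3-4: prior sample = phi + Phi', cf. (12).
            samples.append(phi +
                           sample_fluctuation(grad_phi, g0, elapsed, rng))
    # Step 5: analysis as in the bootstrap filter, steps 2-5.
    w = bayes_weights(samples, y, h, rho2)
    I = len(samples)
    picks = rng.choice(I, size=I, p=w)
    uniq, J_next = np.unique(picks, return_counts=True)
    return [samples[u] for u in uniq], list(J_next)
```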

5 Example Calculations

The data assimilation example consists of producing an estimate of the first few moments of the noisy history of N_p passive drifters and N_v point vortices, conditioned on noisy observational data of a subset or all of the drifters and vortices. The underlying deterministic dynamical system for this problem is well understood (Friedrichs (1966) gives a detailed derivation and analysis of this system of equations). Its noisy counterpart, posed as a data assimilation problem, has been used by Kuznetsov et al. (2003) and Ide et al. (2002) as a testbed for a Lagrangian data assimilation scheme. The specific estimation scheme within their Lagrangian data assimilation methodology is a constrained extended Kalman filter (EKF), which enables them to write the covariance matrix for the system in compact form. We will contrast their EKF with the DKF proposed here. Furthermore, we will compare the outcomes of both assimilation schemes to bootstrap particle filter estimates, which are taken here as the benchmark.

In compact form, the dynamics of the m-th point vortex with space coordinates (p_m(t), q_m(t)) at time t can be written in terms of the complex coordinate z_m(t) := p_m(t) + i q_m(t) as

    dz_m/dt = (i/2π) Σ_{l=1, l≠m}^{N_v} Γ_l / (z_m^* − z_l^*) + η_{V_m}(t),   (13)

where Γ_m is the vortex strength (all of them assumed hereon to be equal to 2π). The dynamics of the n-th passive tracer, with position coordinates (v_n(t), w_n(t)), is given compactly in terms of ζ_n(t) := v_n(t) + i w_n(t) as

    dζ_n/dt = (i/2π) Σ_{l=1}^{N_v} Γ_l / (ζ_n^* − z_l^*) + η_{P_n}(t).   (14)

The star superscript denotes complex conjugate. In the above equations, η_{V_m} = η^{(p)}_{V_m} + i η^{(q)}_{V_m} and η_{P_n} = η^{(p)}_{P_n} + i η^{(q)}_{P_n} are stochastic terms. These may represent unresolved processes, for example, and are assumed known. They will be taken here as independent white noise processes over time. Scaling the spatial coordinates as (x, y) → (2/l)(x, y) and time as t → (2Γ/(πl²)) t, where l is the typical Euclidean distance between the vortices (say, the distance between them at t = 0), would recover our parameters. Figure 2 illustrates what the model, in the absence of noise, produces for paths of the vortices and the drifter.
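A direct transcription of the deterministic parts of (13) and (14), under our reading of the reconstructed equations (the loops are left unvectorized for clarity; with all strengths Γ_m = 2π as in the text, the prefactor i Γ_l / 2π reduces to i):

```python
import numpy as np

def vortex_drifter_rhs(z, zeta, Gamma):
    """Deterministic right-hand sides of (13)-(14): complex velocities of
    the N_v point vortices z and the N_p passive drifters zeta, given the
    vortex strengths Gamma (arrays z, zeta complex; Gamma real)."""
    dz = np.zeros_like(z)
    for m in range(len(z)):
        for l in range(len(z)):
            if l != m:
                dz[m] += (1j / (2 * np.pi)) * Gamma[l] / np.conj(z[m] - z[l])
    dzeta = np.zeros_like(zeta)
    for n in range(len(zeta)):
        for l in range(len(z)):
            dzeta[n] += (1j / (2 * np.pi)) * Gamma[l] / np.conj(zeta[n] - z[l])
    return dz, dzeta
```

As a sanity check on the sign convention: a single vortex of strength 2π at the origin gives dζ/dt = i/ζ^*, so a drifter on the positive real axis moves in the +i direction, i.e., counter-clockwise, consistent with Figure 2.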

5.1 Comparison of EKF, DKF, and Bootstrap Results

In these calculations N_p = 1 and N_v = 2; therefore, the resulting dimension of the state variable is 6. As in Kuznetsov et al. (2003), the drifter is the only observation stream providing data in the assimilation. These data are a numerically-generated sample orbit solution of (13)-(14). For the space coordinates of the vortices and drifters we chose

    E[η^{(·)}_{(·)}(t) η^{(·)}_{(·)}(t′)] = σ² δ(t − t′),

where σ² is constant, in the example calculations that follow. The (·) symbol denotes the same component (X or Y) of the vectors η_{P_n} or η_{V_m}. The components are taken to be independent, and the stochastic terms of the drifters and the vortices are assumed uncorrelated. In (2), we assume the noise is Gaussian with zero mean and variance

    E[ε^{(·)}(t) ε^{(·)}(t′)] = ρ² δ(t − t′),

where ρ² is constant. Again, the symbol (·) denotes a single component of the measurement error vector. Under the nondimensionalization above, the model and observation noise amplitudes rescale correspondingly, σ picking up a factor involving Γ/2π and ρ a factor of 2/l.

Kuznetsov et al. (2003) report that the (constrained) extended Kalman filter (EKF) applied to the Lagrangian-frame data assimilation problem with model dynamics given by (13)-(14) yields generally good results. They attribute some failures of their EKF filter, as applied to this model, to specific dynamic configurations (see (Griffa et al., 2007, p. 222) and Kuznetsov et al. (2003)) or to the inherent non-Gaussianity present, especially due to saddle bifurcations. Their quantitative measure of estimate quality consisted in comparing the distance between the estimated mean history and the truth to a reasonable threshold distance. By this measure they report good results for a variety of parameter combinations.

The EKF results were obtained using our own implementation of the standard algorithm. We also implemented a bootstrap filter, whose estimates we use to measure the quality of the estimates produced by DKF as well as EKF. We also compare the higher-order moments of the DKF, EKF and bootstrap estimates. With an optimal estimate and a comparison of the higher moments we thus have far better ways of assessing the quality of the sub-optimal methods, EKF and DKF, than the distance metric used by Kuznetsov et al. (2003).

The parameters for the simulations that follow were: final time t_f = 20; time discretization δt = 0.001; the interval between filtering times was set to 0.5. The standard deviation parameters are σ = … and ρ = 0.1, respectively. The initial conditions for the vortices were z_1 = (−0.8, 0.2) and z_2 = (0.5, 0). The drifter initial condition was ζ_1 = (0.77, 0.33). It should be pointed out that this initial position does not coincide with the true position; it is seldom the case that one has initial position information in practice.

Figure 3 shows the superposition of the mean path of the drifter, as estimated by EKF, by the benchmark bootstrap particle filter, and by DKF. The DKF estimate falls nearly on top of the benchmark filter estimate; the EKF estimate does not. Figure 4 shows the second and third central moments of one of the components of the state vector, corresponding to one of the coordinates of the drifter, as estimated by DKF and the bootstrap filter. Similar estimate quality is evidenced in the other components of the state vector. The EKF second central moment is not plotted, since it was so far from the benchmark moment.

Figure 5a superimposes the average estimate 2-norm errors and prediction 2-norm errors obtained from the bootstrap filter and DKF, showing how they compare to each other. Superimposed as well are the DKF average entropy prediction and maximal likelihood prediction. The DKF average entropy and maximal likelihood predictions are defined as follows: the average entropy prediction is the deterministic path associated with the branch of prediction whose final entropy is closest to the average final entropy over all branches; the maximal likelihood prediction is the deterministic path obtained by starting from the most likely sample. (Miller et al. (1999) examine the issue of prediction in the context of filtering a stochastic Lorenz equation and a truncated barotropic fluid model, using standard pdf-based definitions.) In this particular example these two predictors agree with each other, and agree as well with the DKF average estimate. This is not the general expectation in problems with more complex dynamics.

Figure 5b shows the maximum value of the uncertainty norm over all branches of prediction, as a function of time. Of note is the relatively simple time-dependent structure of this norm in this problem. In fact, we found that the uncertainty norm was similar in all branches of prediction, at any time; this outcome is not generally expected for more complex random dynamics. A clear implication of this outcome is that EKF is failing on a problem whose dynamics are not all that complex (in spite of saddle bifurcations).

In Figure 6 we highlight the drifter estimate history in the interval t = 0 to t = 4, corresponding to the initial times of Figure 3. It shows that DKF and EKF track each other closely until about t = 3; then the two estimates start diverging. In Figure 6a the two estimates and the truth, as well as the isolated data, are shown just prior to filtering at t = 3.5. Figure 6b shows the prediction immediately following filtering. The EKF corrects its path, though it fails to track the benchmark. At time t = 4, prior to filtering, the paths diverge a great deal. The histories are not going to agree beyond this time. Figures 6c-d show irretrievable incompatibility between the EKF and both the benchmark and the DKF. Up to this time there are no issues with regard to saddle bifurcations or proximity of the drifter to vortex centers that would numerically induce errors; Kuznetsov et al. attribute the failures of their EKF to these causes. Since such events are rare, the implication would be that EKF fails only rarely. It should be remembered that the forecast in EKF is linearized, and presuming the time steps are sufficiently small (here 0.001), tracking the actual forecast is not too challenging in this problem. Figure 7 shows that after about t = 2 the third moment (of the drifter, in this case) becomes significant, and the statistics thus non-Gaussian. This suggests that the problem with EKF in this particular dynamic problem is that its analysis step assumes Gaussianity, an assumption whose validity becomes more tenuous as time progresses. This means that failures are not rare but frequent. It also means that a method such as ensemble Kalman filtering would also fail, since its analysis step also assumes Gaussianity in the statistics.

In Figure 6 we notice that the data, represented by the dots, have little influence on the bootstrap filter estimate of the drifter path. The drifter samples are thrown far away from the data, given their sensitivity to the vortices' locations.

6 The Clustered DKF (cdkf)

We introduce a further development of DKF that we will call the clustered DKF (cdkf). The cdkf is a cost-reducing implementation of the full DKF. The computational cost reduction in cdkf comes from using fewer samples to describe the posterior, which implies using fewer branches in the prediction step.

Briefly, the idea behind cdkf is to apply the formula for Φ′ at filtering times to draw from a few branches of prediction the required number of samples; these samples are then decimated into cluster representatives. The weights associated with these representative samples are taken to be the sum of the weights of the members of their respective cluster, so that the number of paths that would emanate from each cluster is preserved. A simple, but non-optimal, procedure for this in Lagrangian data assimilation is to partition the interval [1, I] of likelihood weights into I_k subintervals, which amounts to partitioning the whole configuration space into I_k cells based on a partitioning of the drifters' space (remark that p(y_{k+1} | x^{(i,j)})(t_{k+1}) is a distance in the drifters' space, up to the exponentiation), which embodies the most unstable directions. In each subinterval the weights are then added in order to compute the effective weight of the representative sample associated with that subinterval. I_k is chosen to be sufficiently large so that the samples effectively represent each cluster; a sketch of this decimation step appears below.
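The following minimal sketch is our own reading of the procedure: the likelihood weights are sorted and split into I_k contiguous subintervals, and each subinterval is represented by its heaviest member carrying the summed weight. The representative choice and the bookkeeping are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cluster_weights(samples, w, I_k):
    """Decimate I weighted samples into at most I_k cluster representatives
    by partitioning the ordered weights into I_k subintervals and summing
    the weights within each cluster (the simple, non-optimal rule above)."""
    order = np.argsort(w)                  # order samples by likelihood weight
    bins = np.array_split(order, I_k)      # I_k contiguous subintervals
    reps, w_reps = [], []
    for b in bins:
        if len(b) == 0:
            continue
        k = b[np.argmax(w[b])]             # representative: heaviest member
        reps.append(samples[k])
        w_reps.append(w[b].sum())          # effective weight of the cluster
    return reps, np.array(w_reps)
```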

To illustrate how well the cdkf performs we took t_f = 20, N_p = 4 and N_v = 4 (thus the state variable dimension was 16). The observations comprise the positions of the 4 drifters. The uncertainty parameters are σ = … and ρ = …. Figures 8 and 9 summarize the results, comparing the cdkf, the DKF, and the bootstrap filter estimates. The figures feature the outcomes for one of the components of the state vector; however, we found that the other components were similarly well captured by DKF and cdkf as compared to the bootstrap filter results. Although not shown, we found that the uncertainty norm did not display complex time-dependent behavior in this higher-dimensional problem (Krause (2008) applied DKF to the Lorenz equations, and there it was found that the norm reflects the dynamical complexity). The DKF tracked the bootstrap filter well in that particular example.

7 Computational Cost of DKF and cdkf

A reasonable comparison of costs is to contrast the computational complexity of DKF and cdkf with that of the bootstrap filter. The cost of computing the bootstrap filter estimate from t_k to t_{k+1} is dominated by

    C β α T N_x² I,   (15)

which is the cost of computing the noise term. C is an implementation constant, I is the number of sample paths (typical of the calculations performed here, I = …), and N_x is the dimension of the state variable. T is the number of time steps required in the deterministic ordinary differential equation integrator; αT is the number of time steps taken in the stochastic differential equation integrator, where α ≫ 1; and β > 1 is associated with the computational overhead inherent in the particular numerical stochastic integrator chosen, as compared to a deterministic ordinary differential equation integrator of the same order. The time step in the SODE in the bootstrap was of the order of 10⁻⁴; the time step of the deterministic integrator, on the other hand, could be made 100 times larger. In general α can be a large number. Relevant to this estimate is that many explicit numerical integrators for stochastic differential equations are such that increasing β decreases α, and vice versa. The SODE and ODE schemes used here were the stochastic Heun scheme and its deterministic counterpart.

The cost of computing the DKF in the same time interval includes a contribution from having to time-integrate the deterministic nonlinear equation and the TLM equation. Thus it is

    T (C′ N_x + C N_x³) I_k ≈ T C N_x³ I_k.   (16)

C′ is an implementation constant of the same order as C. Here we are assuming that the second term dominates the estimate. Comparing (15) and (16) we can find the condition whereby the cost of the bootstrap filter exceeds that of the DKF:

    N_x < α β I / I_k.

This reflects the impact of non-Gaussianity on the cost of the DKF: the less Gaussian the assimilation becomes, the closer I/I_k is to 1; the worst scenario would be a uniform distribution of likelihood weights. Our experience with the cdkf is that one can take I_k to be at most a fixed fraction of I, say I_k ≈ 10⁻ʳ I, r ≥ 1, without suffering a serious deterioration in the estimates of the first three moments. As such, a conservative estimate for cdkf cost exceedance over the bootstrap filter would be

    N_x < α β 10ʳ,

where r is related to the phase space configuration complexity.
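As a rough illustration of this bound (the numbers here are our own, not the authors'): the integrators above had an SODE step of order 10⁻⁴ and a deterministic step 100 times larger, so α ≈ 100; assuming a modest stochastic-integrator overhead of β ≈ 2 and the clustering level r = 2 used below, the cdkf remains cheaper than the bootstrap filter whenever

    N_x < α β 10ʳ ≈ 100 × 2 × 10² = 2 × 10⁴,

i.e., for any state dimension contemplated in this study.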

In the computational examples illustrated above involving the 4 vortices and 4 drifters, we used r = 2.

In the prediction step the cost of DKF (or cdkf) is the cost of EKF times the number of branches of prediction solved, provided the EKF forward propagator can be solved with the same time step as in DKF (being nonlinear, the Riccati equation involved in the EKF may demand much smaller steps than the TLM equation in the DKF or cdkf). The cost of the DKF analysis step when g is a constant matrix, or g(t), is of order N_x² times the number of samples I. For a random g(w) the cost is still of order N_x² times I: the integral of g is computed/approximated, yielding a vector, and the vector/matrix multiply is then the same as before. The computational cost of the analysis step in EKF is the smaller of order N_x³ or order N_y³.

8 Conclusions

The Diffusion Kernel Filter (DKF) is a data assimilation method of general applicability to nonlinear/non-Gaussian sequential estimation problems. The method is arrived at by a parametrization of small fluctuations of Brownian-driven paths about a deterministic path. We demonstrate, by example, that the method is robust: we get very good agreement, up to the third moment, between the DKF and a benchmark particle filter in a variety of dynamic problems. The DKF is relatively easy to implement, provided a tangent linear model is available. In applications, the tangent linear model can be found using automatic differentiation.

The filter is obtained from a small-covariance expansion of sample paths, within branches of prediction, expressing these in terms of the Diffusion Kernel.

The Diffusion Kernel is the matrix product of the tangent operator and the diffusion matrix of the stochastic differential equation. The norm of this kernel gives a bound on the covariance matrix of the fluctuating component of the sample paths and, indirectly, on the information entropy within the branches of prediction. As such it can be used to define a notion of prediction itself. It can also be used to assess the sensitivity of the deterministic history to Brownian noise.

The computational complexity of DKF is competitive with particle filter methods for moderately-sized problems. A variation of the DKF method is the clustered DKF, or cdkf. It is significantly more efficient than DKF. It derives its efficiency from using fewer samples to describe the posterior, which implies using fewer branches in the prediction step. The increase in efficiency comes with a slight degradation in the quality of the first few moment estimates of the dynamical history.

The cdkf scheme has another computationally-attractive feature: since the maximum number of branches of prediction I_k is fixed from the outset, it is possible to distribute the load and fix the memory allocation among different processors in a multi-processor implementation of the method. This is clearly an advantage over the situation in which communication and memory management would have to take place in the course of filtering, which is in fact a challenge for parallel implementations of a bootstrap filter. With parallelism, cdkf can be computationally competitive with EKF, yet capable of handling non-Gaussian problems.

The DKF and cdkf may be viable alternatives to EKF in Lagrangian data assimilation for some state estimation problems, exceeding the practical constraints posed by the computational expense of particle filter assimilation strategies.

In order to illustrate DKF and compare its estimates to other filters, we chose a purely Lagrangian oceanic problem that was considered by Kuznetsov et al. (2003) and has been used to test several filtering methodologies, cf. Alexander et al. (2005). The problem consists of interacting point vortices and passive drifters. The data consist of a sequence of observations of the drifter portion of the state vector. The first three moments produced by DKF were benchmarked against the bootstrap filter and found to agree very well under a variety of system configurations and estimation parameters. We also compared the estimates of DKF with those of cdkf and found that the latter estimates had a marginal deterioration, producing results at a fraction of the DKF cost.

The parameters for the two examples highlighted here were set outside of the range over which the constrained extended Kalman filter (EKF), the estimation technique integral to Lagrangian data assimilation as proposed by Kuznetsov and collaborators, was capable of tracking the benchmark solution. It is common knowledge that the EKF can fail to capture the mean history of the drifter/vortex problem under certain circumstances: when a locally linear analysis is inadequate and/or when a Gaussian assumption is unacceptable. Oceanic Lagrangian data assimilation problems are, as a rule, nonlinear and non-Gaussian. For the point-vortex/drifter dynamical system considered here, Kuznetsov and collaborators, using a distance metric rather than a comparison to some benchmark solution, found that their constrained EKF-based Lagrangian data assimilation scheme would fail if the uncertainty in the measurements and/or the filtering interval was made sufficiently long.

The failures, however, were rare: they attributed the failures to the very complex dynamic behavior of the system in the vicinity of saddle bifurcations, or to numerical issues associated with the closeness of vortices and drifters. We compared the EKF estimates to a benchmark estimate. In doing so it becomes clear that EKF failure happens even when the dynamics do not involve saddle bifurcations. We note, in fact, that the uncertainty norm showed that the dynamics in the cases where we observed failure of the EKF, while nonlinear, were not very complex (the uncertainty norm was comparable among branches of prediction). In the example calculations considered here we found that the EKF estimates failed at the analysis step, when the Gaussianity assumption of the method was untenable; failures occurred even when the skewness was very small. Even in cases in which they did not see large excursions from the truth (this was used as a measure of the quality of the estimates by Kuznetsov et al.), the EKF delivered incorrect mean histories, and even worse estimates of the higher moments. Though the ensemble Kalman filter (EnKF) does not linearize, it still uses the Gaussian assumption, and thus one can infer that EnKF would also be challenged by this example problem, provided the uncertainty in the observations is sufficiently high.

In the examples that we used for illustration we compared three notions of prediction: the maximal likelihood prediction (defined as the prediction obtained by moving forward deterministically from one filter time to another, taking the most weighty sample as the initial configuration of the initial value problem), the average entropy prediction (defined as the prediction obtained by moving forward the sample whose branch entropy is closest to the average entropy over all branches of prediction), and the first moment. We found that in problems with small skewness or multimodality in the posterior these three predictors agree reasonably well: in all instances of the problem treated here, the entropy was found to be nearly equal among all branches of prediction, implying small skewness or multimodality in the posterior. However, this is not expected to be the general case, particularly in problems which exhibit very complex dynamics: determining the conditions whereby these predictors are the same or radically different would be of great conceptual interest, particularly in defining a notion of prediction, which is so crucial to forecasting. We hope to take up this subject matter in the near future.

Acknowledgments

This work was supported by NSF DMS. The authors wish to thank P.L.S. Dias, A. Stuart and C.K.R.T. Jones for stimulating discussions relevant to Lagrangian estimation using EKF. We also wish to thank J. Hansen for suggestions that significantly improved the presentation of the results.

References

F. J. Alexander, G. L. Eyink, and J. M. Restrepo. Accelerated Monte Carlo for optimal estimation of time series. Journal of Statistical Physics, 119, 2005.

M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50, 2002.

A. J. Chorin, O. H. Hald, and R. Kupferman. Optimal prediction with memory. Physica D, 166, 2002.

D. Crisan, P. del Moral, and T. J. Lyons. Discrete filtering using branching and interacting particle systems. Markov Processes and Related Fields, 5, 1999.

G. Evensen. Advanced data assimilation for strongly nonlinear dynamics. Monthly Weather Review, 125, 1997.

G. L. Eyink and J. M. Restrepo. Most probable histories for nonlinear dynamics: tracking climate transitions. Journal of Statistical Physics, 101, 2000.

G. L. Eyink, J. M. Restrepo, and F. J. Alexander. A statistical-mechanical approach to data assimilation using moment closures. Submitted, 2002.

G. L. Eyink, J. M. Restrepo, and F. J. Alexander. A mean field approximation in data assimilation for nonlinear dynamics. Physica D, 194, 2004.

K. O. Friedrichs. Special Topics in Fluid Dynamics. Gordon and Breach, New York, 1966.

I. Gilmour, L. A. Smith, and R. Buizza. On the duration of the linear regime: is 24 hours a long time in synoptic weather forecasting? Journal of the Atmospheric Sciences, 58, 2001.

N. J. Gordon, D. J. Salmond, and A. F. M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings-F, 140, 1993.

A. Griffa, A. D. Kirwan, A. J. Mariano, T. Özgökmen, and T. Rossby, editors. Lagrangian Analysis and Prediction of Coastal and Ocean Dynamics. Cambridge University Press, Cambridge, 2007.

N. Gustafsson. Discussion on "4DVar or EnKF?". Tellus, 59A, 2007.

M. Hairer, A. M. Stuart, and J. Voss. A Bayesian approach to data assimilation. Physica D, 230:50–64, 2007.

K. Ide, L. Kuznetsov, and C. K. R. T. Jones. Lagrangian data assimilation for point vortex systems. Journal of Turbulence, 3:053, 2002.

S. Kim, G. L. Eyink, J. M. Restrepo, F. J. Alexander, and G. Johnson. Ensemble filtering for nonlinear dynamics. Monthly Weather Review, 131, 2003.

P. Krause. Data assimilation through particle filters for small diffusion kernels within branches of prediction. 2008.

H. J. Kushner. On the differential equations satisfied by conditional probability densities of Markov processes, with applications. SIAM Journal on Control, Series A, 2, 1962.

H. J. Kushner. Dynamical equations for optimal nonlinear filtering. Journal of Differential Equations, 3, 1967a.

H. J. Kushner. Approximation to optimal nonlinear filters. IEEE Transactions on Automatic Control, 12, 1967b.

L. Kuznetsov, K. Ide, and C. K. R. T. Jones. A method for assimilation of Lagrangian data. Monthly Weather Review, 131, 2003.

A. S. Lawless, S. Gratton, and N. K. Nichols. Approximate iterative methods for variational data assimilation. International Journal for Numerical Methods in Fluids, 47, 2005.

P. J. van Leeuwen. A variance-minimizing filter for large-scale applications. Monthly Weather Review, 131, 2003.

R. N. Miller, E. F. Carter, Jr., and S. T. Blue. Data assimilation into nonlinear stochastic models. Tellus, 51A, 1999.

T. M. Özgökmen, A. Griffa, A. J. Mariano, and L. I. Piterbarg. On the predictability of Lagrangian trajectories in the ocean. Journal of Atmospheric and Oceanic Technology, 17, 2000.

E. Pardoux. Équations du filtrage non linéaire, de la prédiction et du lissage. Stochastics, 6, 1982.

D. T. Pham. Stochastic methods for sequential data assimilation in strongly nonlinear systems. Monthly Weather Review, 129, 2001.

J. M. Restrepo. A path integral method for data assimilation. Physica D, 237:14–27, 2008.

D. Simon and T. Chia. Kalman filtering with state equality constraints. IEEE Transactions on Aerospace and Electronic Systems, 39, 2002.

R. L. Stratonovich. Conditional Markov processes. Theoretical Probability for Applications, 5, 1960.

C. Wunsch. The Ocean Circulation Inverse Problem. Cambridge University Press, Cambridge, UK, 1996.

A Brief Derivation of the Diffusion Kernel

The dynamics of equation (4) solves the linear stochastic partial differential equation

    ∂Φ/∂t = L(x_k) Φ,   L(x_k) := (f(x_k) + g(x_k) ζ) · ∇,

where ζ := dw/dt and the subscript k labels the k-th filtering time. This equation may be written as

    ∂Φ/∂t = e^{(t−t_k)L} L x_k,

where e^{(t−t_k)L} is the associated semigroup. For operators A and B, Duhamel's formula is

    e^{(t−t_k)(A+B)} = e^{(t−t_k)A} + ∫_{t_k}^{t} e^{(t−s)(A+B)} B e^{(s−t_k)A} ds.

Using this formula with A := PL and B := L − PL, where P is the averaging operator with respect to the Dirac measure about some sample x_{k,i}, one obtains the following nonlinear SODE open problem for the branch of prediction emanating from x_{k,i}:

    dΦ/dt (x_{k,i}, t) = Pf(Φ(x_k, t)) + ∫_{t_k}^{t} e^{(t−s)L} ((g(x_{k,i}) ζ) · ∇) Pf(Φ(x_k, s)) ds + g(Φ(x_{k,i}, t)) ζ,
    Φ(x_{k,i}, t_k) = x_{k,i}.

Writing Φ = φ + Φ′, where φ solves the deterministic model dx/dt = f(x), and assuming that Cov(Φ′) is small over [t_k, t], after closure one obtains the problem

    dΦ/dt = f(φ) + ∇f(x)|_{x=φ} ∫_{t_k}^{t} G(x_{k,i}; t, s) dw(s − t_k) + g(φ) ζ,   Φ(x_{k,i}, t_k) = x_{k,i},
    dφ/dt = f(φ),   φ(x_{k,i}, t_k) = x_{k,i},
    d∇φ/dt = ∇f(x)|_{x=φ} ∇φ,   ∇φ(x_{k,i}, t_k) = I.

G(x_{k,i}; t, s) is the Diffusion Kernel (see (6)). Equating

    Φ = φ + ∇φ ∫_{t_k}^{t} g(φ) dw(s − t_k)

implies

    dΦ/dt = f(φ) + ∇f(x)|_{x=φ} ∫_{t_k}^{t} G(x_{k,i}; t, s) dw(s − t_k) + G(x_{k,i}; t, t) ζ.

List of Figures

1. Schematic representation of branches of prediction. The solution for each branch i = 1, …, I_k of prediction is the set of sample paths Φ_j(ξ(t_k, i), t) = φ(ξ(t_k, i), t) + Φ′_j(ξ(t_k, i), t), j = 1, …, J_{k,i}. The solid lines depict the deterministic trajectories φ(ξ(t_k, i), t) emanating from the sample states ξ(t_k, i); the dashed lines depict sample paths emanating from the sample states ξ(t_k, i).

2. Paths in the (X, Y) plane of a single drifter (v_1, w_1) (circles) and 2 point vortices, (p_1, q_1) (square) and (p_2, q_2) (cross), in the absence of noise. The vortices rotate counter-clockwise on a shared circular path. The vortices started at (1, 0) and (−1, 0). The drifter starting position was (1, 0.6). The final time was t = ….

3. Mean drifter path (v_1, w_1), as estimated by DKF and the bootstrap method (dots) and EKF (jagged, dashed). The true path is superimposed (solid line). The initial position of the drifter is indicated by a large dot.

4. Comparison of DKF and bootstrap filter moment estimates for v_1(t): (b) second moment, (c) third moment. The dashed lines are the bootstrap estimates, the solid lines the DKF moment estimates.


More information

An introduction to particle filters

An introduction to particle filters An introduction to particle filters Andreas Svensson Department of Information Technology Uppsala University June 10, 2014 June 10, 2014, 1 / 16 Andreas Svensson - An introduction to particle filters Outline

More information

Nonlinear and/or Non-normal Filtering. Jesús Fernández-Villaverde University of Pennsylvania

Nonlinear and/or Non-normal Filtering. Jesús Fernández-Villaverde University of Pennsylvania Nonlinear and/or Non-normal Filtering Jesús Fernández-Villaverde University of Pennsylvania 1 Motivation Nonlinear and/or non-gaussian filtering, smoothing, and forecasting (NLGF) problems are pervasive

More information

A variational radial basis function approximation for diffusion processes

A variational radial basis function approximation for diffusion processes A variational radial basis function approximation for diffusion processes Michail D. Vrettas, Dan Cornford and Yuan Shen Aston University - Neural Computing Research Group Aston Triangle, Birmingham B4

More information

Sequential Monte Carlo Methods for Bayesian Computation

Sequential Monte Carlo Methods for Bayesian Computation Sequential Monte Carlo Methods for Bayesian Computation A. Doucet Kyoto Sept. 2012 A. Doucet (MLSS Sept. 2012) Sept. 2012 1 / 136 Motivating Example 1: Generic Bayesian Model Let X be a vector parameter

More information

Introduction to Particle Filters for Data Assimilation

Introduction to Particle Filters for Data Assimilation Introduction to Particle Filters for Data Assimilation Mike Dowd Dept of Mathematics & Statistics (and Dept of Oceanography Dalhousie University, Halifax, Canada STATMOS Summer School in Data Assimila5on,

More information

Gaussian Filtering Strategies for Nonlinear Systems

Gaussian Filtering Strategies for Nonlinear Systems Gaussian Filtering Strategies for Nonlinear Systems Canonical Nonlinear Filtering Problem ~u m+1 = ~ f (~u m )+~ m+1 ~v m+1 = ~g(~u m+1 )+~ o m+1 I ~ f and ~g are nonlinear & deterministic I Noise/Errors

More information

Data assimilation as an optimal control problem and applications to UQ

Data assimilation as an optimal control problem and applications to UQ Data assimilation as an optimal control problem and applications to UQ Walter Acevedo, Angwenyi David, Jana de Wiljes & Sebastian Reich Universität Potsdam/ University of Reading IPAM, November 13th 2017

More information

Smoothers: Types and Benchmarks

Smoothers: Types and Benchmarks Smoothers: Types and Benchmarks Patrick N. Raanes Oxford University, NERSC 8th International EnKF Workshop May 27, 2013 Chris Farmer, Irene Moroz Laurent Bertino NERSC Geir Evensen Abstract Talk builds

More information

Kalman Filter and Ensemble Kalman Filter

Kalman Filter and Ensemble Kalman Filter Kalman Filter and Ensemble Kalman Filter 1 Motivation Ensemble forecasting : Provides flow-dependent estimate of uncertainty of the forecast. Data assimilation : requires information about uncertainty

More information

Dynamic System Identification using HDMR-Bayesian Technique

Dynamic System Identification using HDMR-Bayesian Technique Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in

More information

Stochastic Spectral Approaches to Bayesian Inference

Stochastic Spectral Approaches to Bayesian Inference Stochastic Spectral Approaches to Bayesian Inference Prof. Nathan L. Gibson Department of Mathematics Applied Mathematics and Computation Seminar March 4, 2011 Prof. Gibson (OSU) Spectral Approaches to

More information

Ensemble Kalman filters, Sequential Importance Resampling and beyond

Ensemble Kalman filters, Sequential Importance Resampling and beyond Ensemble Kalman filters, Sequential Importance Resampling and beyond Peter Jan van Leeuwen Institute for Marine and Atmospheric research Utrecht (IMAU) Utrecht University, P.O.Box 80005, 3508 TA Utrecht,

More information

Expectation Propagation in Dynamical Systems

Expectation Propagation in Dynamical Systems Expectation Propagation in Dynamical Systems Marc Peter Deisenroth Joint Work with Shakir Mohamed (UBC) August 10, 2012 Marc Deisenroth (TU Darmstadt) EP in Dynamical Systems 1 Motivation Figure : Complex

More information

A Spectral Approach to Linear Bayesian Updating

A Spectral Approach to Linear Bayesian Updating A Spectral Approach to Linear Bayesian Updating Oliver Pajonk 1,2, Bojana V. Rosic 1, Alexander Litvinenko 1, and Hermann G. Matthies 1 1 Institute of Scientific Computing, TU Braunschweig, Germany 2 SPT

More information

Stability of Ensemble Kalman Filters

Stability of Ensemble Kalman Filters Stability of Ensemble Kalman Filters Idrissa S. Amour, Zubeda Mussa, Alexander Bibov, Antti Solonen, John Bardsley, Heikki Haario and Tuomo Kauranne Lappeenranta University of Technology University of

More information

F denotes cumulative density. denotes probability density function; (.)

F denotes cumulative density. denotes probability density function; (.) BAYESIAN ANALYSIS: FOREWORDS Notation. System means the real thing and a model is an assumed mathematical form for the system.. he probability model class M contains the set of the all admissible models

More information

Data assimilation in high dimensions

Data assimilation in high dimensions Data assimilation in high dimensions David Kelly Courant Institute New York University New York NY www.dtbkelly.com February 12, 2015 Graduate seminar, CIMS David Kelly (CIMS) Data assimilation February

More information

Ensembles and Particle Filters for Ocean Data Assimilation

Ensembles and Particle Filters for Ocean Data Assimilation DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Ensembles and Particle Filters for Ocean Data Assimilation Robert N. Miller College of Oceanic and Atmospheric Sciences

More information

A new Hierarchical Bayes approach to ensemble-variational data assimilation

A new Hierarchical Bayes approach to ensemble-variational data assimilation A new Hierarchical Bayes approach to ensemble-variational data assimilation Michael Tsyrulnikov and Alexander Rakitko HydroMetCenter of Russia College Park, 20 Oct 2014 Michael Tsyrulnikov and Alexander

More information

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER

EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER Zhen Zhen 1, Jun Young Lee 2, and Abdus Saboor 3 1 Mingde College, Guizhou University, China zhenz2000@21cn.com 2 Department

More information

Lecture 9. Time series prediction

Lecture 9. Time series prediction Lecture 9 Time series prediction Prediction is about function fitting To predict we need to model There are a bewildering number of models for data we look at some of the major approaches in this lecture

More information

Data assimilation in high dimensions

Data assimilation in high dimensions Data assimilation in high dimensions David Kelly Kody Law Andy Majda Andrew Stuart Xin Tong Courant Institute New York University New York NY www.dtbkelly.com February 3, 2016 DPMMS, University of Cambridge

More information

SDE Coefficients. March 4, 2008

SDE Coefficients. March 4, 2008 SDE Coefficients March 4, 2008 The following is a summary of GARD sections 3.3 and 6., mainly as an overview of the two main approaches to creating a SDE model. Stochastic Differential Equations (SDE)

More information

Sensor Tasking and Control

Sensor Tasking and Control Sensor Tasking and Control Sensing Networking Leonidas Guibas Stanford University Computation CS428 Sensor systems are about sensing, after all... System State Continuous and Discrete Variables The quantities

More information

Density Approximation Based on Dirac Mixtures with Regard to Nonlinear Estimation and Filtering

Density Approximation Based on Dirac Mixtures with Regard to Nonlinear Estimation and Filtering Density Approximation Based on Dirac Mixtures with Regard to Nonlinear Estimation and Filtering Oliver C. Schrempf, Dietrich Brunn, Uwe D. Hanebeck Intelligent Sensor-Actuator-Systems Laboratory Institute

More information

Forecasting and data assimilation

Forecasting and data assimilation Supported by the National Science Foundation DMS Forecasting and data assimilation Outline Numerical models Kalman Filter Ensembles Douglas Nychka, Thomas Bengtsson, Chris Snyder Geophysical Statistics

More information

Imprecise Filtering for Spacecraft Navigation

Imprecise Filtering for Spacecraft Navigation Imprecise Filtering for Spacecraft Navigation Tathagata Basu Cristian Greco Thomas Krak Durham University Strathclyde University Ghent University Filtering for Spacecraft Navigation The General Problem

More information

Constrained State Estimation Using the Unscented Kalman Filter

Constrained State Estimation Using the Unscented Kalman Filter 16th Mediterranean Conference on Control and Automation Congress Centre, Ajaccio, France June 25-27, 28 Constrained State Estimation Using the Unscented Kalman Filter Rambabu Kandepu, Lars Imsland and

More information

RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS

RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS Frédéric Mustière e-mail: mustiere@site.uottawa.ca Miodrag Bolić e-mail: mbolic@site.uottawa.ca Martin Bouchard e-mail: bouchard@site.uottawa.ca

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 7 Approximate

More information

Introduction to Machine Learning

Introduction to Machine Learning Introduction to Machine Learning Brown University CSCI 1950-F, Spring 2012 Prof. Erik Sudderth Lecture 25: Markov Chain Monte Carlo (MCMC) Course Review and Advanced Topics Many figures courtesy Kevin

More information

Efficient Monitoring for Planetary Rovers

Efficient Monitoring for Planetary Rovers International Symposium on Artificial Intelligence and Robotics in Space (isairas), May, 2003 Efficient Monitoring for Planetary Rovers Vandi Verma vandi@ri.cmu.edu Geoff Gordon ggordon@cs.cmu.edu Carnegie

More information

Evolution Strategies Based Particle Filters for Fault Detection

Evolution Strategies Based Particle Filters for Fault Detection Evolution Strategies Based Particle Filters for Fault Detection Katsuji Uosaki, Member, IEEE, and Toshiharu Hatanaka, Member, IEEE Abstract Recent massive increase of the computational power has allowed

More information

Der SPP 1167-PQP und die stochastische Wettervorhersage

Der SPP 1167-PQP und die stochastische Wettervorhersage Der SPP 1167-PQP und die stochastische Wettervorhersage Andreas Hense 9. November 2007 Overview The priority program SPP1167: mission and structure The stochastic weather forecasting Introduction, Probabilities

More information

Local Ensemble Transform Kalman Filter

Local Ensemble Transform Kalman Filter Local Ensemble Transform Kalman Filter Brian Hunt 11 June 2013 Review of Notation Forecast model: a known function M on a vector space of model states. Truth: an unknown sequence {x n } of model states

More information

Linear Dynamical Systems

Linear Dynamical Systems Linear Dynamical Systems Sargur N. srihari@cedar.buffalo.edu Machine Learning Course: http://www.cedar.buffalo.edu/~srihari/cse574/index.html Two Models Described by Same Graph Latent variables Observations

More information

Maximum Likelihood Ensemble Filter Applied to Multisensor Systems

Maximum Likelihood Ensemble Filter Applied to Multisensor Systems Maximum Likelihood Ensemble Filter Applied to Multisensor Systems Arif R. Albayrak a, Milija Zupanski b and Dusanka Zupanski c abc Colorado State University (CIRA), 137 Campus Delivery Fort Collins, CO

More information

Ensemble Data Assimilation and Uncertainty Quantification

Ensemble Data Assimilation and Uncertainty Quantification Ensemble Data Assimilation and Uncertainty Quantification Jeff Anderson National Center for Atmospheric Research pg 1 What is Data Assimilation? Observations combined with a Model forecast + to produce

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

L09. PARTICLE FILTERING. NA568 Mobile Robotics: Methods & Algorithms

L09. PARTICLE FILTERING. NA568 Mobile Robotics: Methods & Algorithms L09. PARTICLE FILTERING NA568 Mobile Robotics: Methods & Algorithms Particle Filters Different approach to state estimation Instead of parametric description of state (and uncertainty), use a set of state

More information

The Local Ensemble Transform Kalman Filter (LETKF) Eric Kostelich. Main topics

The Local Ensemble Transform Kalman Filter (LETKF) Eric Kostelich. Main topics The Local Ensemble Transform Kalman Filter (LETKF) Eric Kostelich Arizona State University Co-workers: Istvan Szunyogh, Brian Hunt, Ed Ott, Eugenia Kalnay, Jim Yorke, and many others http://www.weatherchaos.umd.edu

More information

The Scaled Unscented Transformation

The Scaled Unscented Transformation The Scaled Unscented Transformation Simon J. Julier, IDAK Industries, 91 Missouri Blvd., #179 Jefferson City, MO 6519 E-mail:sjulier@idak.com Abstract This paper describes a generalisation of the unscented

More information

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Dongbin Xiu Department of Mathematics, Purdue University Support: AFOSR FA955-8-1-353 (Computational Math) SF CAREER DMS-64535

More information

Short tutorial on data assimilation

Short tutorial on data assimilation Mitglied der Helmholtz-Gemeinschaft Short tutorial on data assimilation 23 June 2015 Wolfgang Kurtz & Harrie-Jan Hendricks Franssen Institute of Bio- and Geosciences IBG-3 (Agrosphere), Forschungszentrum

More information

2D Image Processing. Bayes filter implementation: Kalman filter

2D Image Processing. Bayes filter implementation: Kalman filter 2D Image Processing Bayes filter implementation: Kalman filter Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

More information

Fundamentals of Data Assimila1on

Fundamentals of Data Assimila1on 014 GSI Community Tutorial NCAR Foothills Campus, Boulder, CO July 14-16, 014 Fundamentals of Data Assimila1on Milija Zupanski Cooperative Institute for Research in the Atmosphere Colorado State University

More information

Signal Modeling, Statistical Inference and Data Mining in Astrophysics

Signal Modeling, Statistical Inference and Data Mining in Astrophysics ASTRONOMY 6523 Spring 2013 Signal Modeling, Statistical Inference and Data Mining in Astrophysics Course Approach The philosophy of the course reflects that of the instructor, who takes a dualistic view

More information

Bayesian Calibration of Simulators with Structured Discretization Uncertainty

Bayesian Calibration of Simulators with Structured Discretization Uncertainty Bayesian Calibration of Simulators with Structured Discretization Uncertainty Oksana A. Chkrebtii Department of Statistics, The Ohio State University Joint work with Matthew T. Pratola (Statistics, The

More information

EnKF-based particle filters

EnKF-based particle filters EnKF-based particle filters Jana de Wiljes, Sebastian Reich, Wilhelm Stannat, Walter Acevedo June 20, 2017 Filtering Problem Signal dx t = f (X t )dt + 2CdW t Observations dy t = h(x t )dt + R 1/2 dv t.

More information

Exercises Tutorial at ICASSP 2016 Learning Nonlinear Dynamical Models Using Particle Filters

Exercises Tutorial at ICASSP 2016 Learning Nonlinear Dynamical Models Using Particle Filters Exercises Tutorial at ICASSP 216 Learning Nonlinear Dynamical Models Using Particle Filters Andreas Svensson, Johan Dahlin and Thomas B. Schön March 18, 216 Good luck! 1 [Bootstrap particle filter for

More information

Bred Vectors, Singular Vectors, and Lyapunov Vectors in Simple and Complex Models

Bred Vectors, Singular Vectors, and Lyapunov Vectors in Simple and Complex Models Bred Vectors, Singular Vectors, and Lyapunov Vectors in Simple and Complex Models Adrienne Norwood Advisor: Eugenia Kalnay With special thanks to Drs. Kayo Ide, Brian Hunt, Shu-Chih Yang, and Christopher

More information

A State Space Model for Wind Forecast Correction

A State Space Model for Wind Forecast Correction A State Space Model for Wind Forecast Correction Valrie Monbe, Pierre Ailliot 2, and Anne Cuzol 1 1 Lab-STICC, Université Européenne de Bretagne, France (e-mail: valerie.monbet@univ-ubs.fr, anne.cuzol@univ-ubs.fr)

More information

Prediction of ESTSP Competition Time Series by Unscented Kalman Filter and RTS Smoother

Prediction of ESTSP Competition Time Series by Unscented Kalman Filter and RTS Smoother Prediction of ESTSP Competition Time Series by Unscented Kalman Filter and RTS Smoother Simo Särkkä, Aki Vehtari and Jouko Lampinen Helsinki University of Technology Department of Electrical and Communications

More information

I. INTRODUCTION 338 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 33, NO. 1 JANUARY 1997

I. INTRODUCTION 338 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 33, NO. 1 JANUARY 1997 VI. CONCLUSION We have shown that dilution of precision terms for relative positioning, using double-difference processing of GPS satellite signals, are bounded by the corresponding dilution of precision

More information

Ensemble prediction and strategies for initialization: Tangent Linear and Adjoint Models, Singular Vectors, Lyapunov vectors

Ensemble prediction and strategies for initialization: Tangent Linear and Adjoint Models, Singular Vectors, Lyapunov vectors Ensemble prediction and strategies for initialization: Tangent Linear and Adjoint Models, Singular Vectors, Lyapunov vectors Eugenia Kalnay Lecture 2 Alghero, May 2008 Elements of Ensemble Forecasting

More information

Fundamentals of Data Assimilation

Fundamentals of Data Assimilation National Center for Atmospheric Research, Boulder, CO USA GSI Data Assimilation Tutorial - June 28-30, 2010 Acknowledgments and References WRFDA Overview (WRF Tutorial Lectures, H. Huang and D. Barker)

More information

Signal Processing - Lecture 7

Signal Processing - Lecture 7 1 Introduction Signal Processing - Lecture 7 Fitting a function to a set of data gathered in time sequence can be viewed as signal processing or learning, and is an important topic in information theory.

More information

arxiv: v1 [physics.ao-ph] 23 Jan 2009

arxiv: v1 [physics.ao-ph] 23 Jan 2009 A Brief Tutorial on the Ensemble Kalman Filter Jan Mandel arxiv:0901.3725v1 [physics.ao-ph] 23 Jan 2009 February 2007, updated January 2009 Abstract The ensemble Kalman filter EnKF) is a recursive filter

More information

5.1 2D example 59 Figure 5.1: Parabolic velocity field in a straight two-dimensional pipe. Figure 5.2: Concentration on the input boundary of the pipe. The vertical axis corresponds to r 2 -coordinate,

More information

Stochastic continuity equation and related processes

Stochastic continuity equation and related processes Stochastic continuity equation and related processes Gabriele Bassi c Armando Bazzani a Helmut Mais b Giorgio Turchetti a a Dept. of Physics Univ. of Bologna, INFN Sezione di Bologna, ITALY b DESY, Hamburg,

More information

Nonlinear Estimation Techniques for Impact Point Prediction of Ballistic Targets

Nonlinear Estimation Techniques for Impact Point Prediction of Ballistic Targets Nonlinear Estimation Techniques for Impact Point Prediction of Ballistic Targets J. Clayton Kerce a, George C. Brown a, and David F. Hardiman b a Georgia Tech Research Institute, Georgia Institute of Technology,

More information

FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS

FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS Gustaf Hendeby Fredrik Gustafsson Division of Automatic Control Department of Electrical Engineering, Linköpings universitet, SE-58 83 Linköping,

More information

Systematic strategies for real time filtering of turbulent signals in complex systems

Systematic strategies for real time filtering of turbulent signals in complex systems Systematic strategies for real time filtering of turbulent signals in complex systems Statistical inversion theory for Gaussian random variables The Kalman Filter for Vector Systems: Reduced Filters and

More information

Introduction to Computational Stochastic Differential Equations

Introduction to Computational Stochastic Differential Equations Introduction to Computational Stochastic Differential Equations Gabriel J. Lord Catherine E. Powell Tony Shardlow Preface Techniques for solving many of the differential equations traditionally used by

More information

Expectation propagation for signal detection in flat-fading channels

Expectation propagation for signal detection in flat-fading channels Expectation propagation for signal detection in flat-fading channels Yuan Qi MIT Media Lab Cambridge, MA, 02139 USA yuanqi@media.mit.edu Thomas Minka CMU Statistics Department Pittsburgh, PA 15213 USA

More information

Adaptive Observation Strategies for Forecast Error Minimization

Adaptive Observation Strategies for Forecast Error Minimization Adaptive Observation Strategies for Forecast Error Minimization Nicholas Roy 1, Han-Lim Choi 2, Daniel Gombos 3, James Hansen 4, Jonathan How 2, and Sooho Park 1 1 Computer Science and Artificial Intelligence

More information

Ensemble Kalman filter assimilation of transient groundwater flow data via stochastic moment equations

Ensemble Kalman filter assimilation of transient groundwater flow data via stochastic moment equations Ensemble Kalman filter assimilation of transient groundwater flow data via stochastic moment equations Alberto Guadagnini (1,), Marco Panzeri (1), Monica Riva (1,), Shlomo P. Neuman () (1) Department of

More information

Advanced Computational Methods in Statistics: Lecture 5 Sequential Monte Carlo/Particle Filtering

Advanced Computational Methods in Statistics: Lecture 5 Sequential Monte Carlo/Particle Filtering Advanced Computational Methods in Statistics: Lecture 5 Sequential Monte Carlo/Particle Filtering Axel Gandy Department of Mathematics Imperial College London http://www2.imperial.ac.uk/~agandy London

More information

DATA ASSIMILATION FOR FLOOD FORECASTING

DATA ASSIMILATION FOR FLOOD FORECASTING DATA ASSIMILATION FOR FLOOD FORECASTING Arnold Heemin Delft University of Technology 09/16/14 1 Data assimilation is the incorporation of measurement into a numerical model to improve the model results

More information

Bayesian Inverse problem, Data assimilation and Localization

Bayesian Inverse problem, Data assimilation and Localization Bayesian Inverse problem, Data assimilation and Localization Xin T Tong National University of Singapore ICIP, Singapore 2018 X.Tong Localization 1 / 37 Content What is Bayesian inverse problem? What is

More information