Nonlinear Estimation with Perron-Frobenius Operator and Karhunen-Loève Expansion


Nonlinear Estimation with Perron-Frobenius Operator and Karhunen-Loève Expansion

Parikshit Dutta (a), Abhishek Halder (b), Raktim Bhattacharya (b)

(a) INRIA Rhône Alpes & Laboratoire Jean Kuntzmann, 655 avenue de l'Europe, 3833 Montbonnot, France
(b) Department of Aerospace Engineering, Texas A&M University, College Station, TX 77843, USA

hal, version 1 - 14 Jan 13

Abstract

In this paper, a novel methodology for state estimation of stochastic dynamical systems is proposed. In this formulation, a finite-term Karhunen-Loève (KL) expansion is used to approximate the process noise, resulting in a non-autonomous deterministic approximation (with parametric uncertainty) of the original stochastic nonlinear system. It is proved that the solutions of the approximate dynamical system converge asymptotically, in the mean square sense, to the true solutions. The evolution of uncertainty for the KL-approximated system is predicted via the Perron-Frobenius (PF) operator. Furthermore, a nonlinear estimation algorithm using the proposed uncertainty propagation scheme is developed. It is found that for finite dimensional linear and nonlinear filters, the evolving posterior densities obtained from the KLPF based estimator are closer to the true posterior densities than those from the particle filter. The methodology is then applied to estimate the states of a hypersonic reentry vehicle. It is observed that the KLPF based estimator outperforms the particle filter in terms of capturing localization of uncertainty through posterior densities, and in reduction of uncertainty.

Key words: Karhunen-Loève expansion; Perron-Frobenius operator; stochastic dynamical systems; state estimation.
1 Introduction

Consider a stochastic nonlinear system given by the Itô stochastic differential equation (SDE)

    dx(t) = f(x(t), t, δ) dt + dW(ω, t),   (1)
    dy(t) = h(x(t), t, δ) dt + dV(ω, t),   (2)

where x(t) ∈ R^n and y(t) ∈ R^m are the state and measurement vectors at time t; δ ∈ R^p is the parameter vector, and W(ω, t): Ω × R^+ → R^n, V(ω, t): Ω × R^+ → R^m are mutually independent Wiener processes denoting process and measurement noise, respectively. Here Ω denotes the sample space. The functions f(·) and h(·) represent the dynamics and measurement models, respectively. If the uncertainty in the initial conditions (x_0) and parameters (δ) is prescribed through the joint probability density function (PDF) φ_0(z), supported over the extended state space z := [x δ]^T ∈ R^(n+p), then the propagation of

This research was partially supported by NSF award #11699 with D. Helen Gill as the Program Manager. Some preliminary results of this work have been accepted as a regular paper in the IEEE Multi-Conference in Systems and Control, 1, and are under review at the American Control Conference, 13.

Email addresses: parikshit.dutta@inria.fr (Parikshit Dutta), ahalder@tamu.edu (Abhishek Halder), raktim@tamu.edu (Raktim Bhattacharya).

Preprint submitted to Automatica, 11 January 13

uncertainty through (1) and (2) requires solving the Fokker-Planck equation (FPE) (or Kolmogorov forward equation) [5,37]

    ∂φ/∂t = −Σ_{i=1}^n ∂(φ f_i)/∂x_i + (1/2) Σ_{i=1}^n Σ_{j=1}^n ∂²(Q_ij φ)/∂x_i ∂x_j   (3)

to compute φ(z, t) subject to φ(z, 0) = φ_0(z), where Q is the process noise covariance matrix. In the nonlinear estimation problem, the prior PDF obtained by solving (3) is updated according to Bayes' rule to yield the posterior PDF.

Solving the parabolic partial differential equation (PDE) (3), second order in space and first order in time, is computationally hard [5,39]. Several methods [6,41,19] for approximate numerical solution of the FPE have been developed over the years. These algorithms intend to solve the FPE using either grid based methods like FEM [6,61,49,1], or meshfree methods [4,57]. It is known [55,6] that for grid based methods, the complexity of solving the problem increases exponentially with the dimension of the extended state space. The same is true [17,1,31] for Monte Carlo based methods [36,57] for solving the FPE. On the other hand, function approximation techniques like [39] have been reported to work well for moderate (3 to 4) dimensions. However, finding the correct basis to get a sufficiently good approximation of the transient PDFs remains challenging, particularly when the problem is high dimensional [6]. Thus, most solution methods for the FPE suffer from the curse of dimensionality [6]. State estimation, in such cases, is a difficult task when the systems involved are highly nonlinear and the measurement update is infrequent [17]. Hence, there is a pressing need to develop an uncertainty propagation method which will accurately predict uncertainty for high dimensional systems with process noise. State and parameter estimation for nonlinear systems are commonly done using sequential Monte Carlo (SMC) methods, the particle filter being the most popular amongst them [3].
It is well known [17] that these methods require a large number of ensembles for convergence, leading to higher computational cost. This problem is tackled through resampling [,48,9]. Particle filters with a resampling technique are commonly known as bootstrap filters [9]. It has been observed that bootstrap filters introduce other problems, like loss of diversity amongst particles [53], if the resampling is not performed correctly. Recently developed techniques [44] have combined importance sampling and Markov chain Monte Carlo (MCMC) methods to improve sampling quality, enabling better estimates of states and parameters. Several other methods, like the regularized particle filter [51] and filters with an MCMC move step [8], have been developed to enhance sample diversity. At the same time, even with resampling, due to the simulation based nature of these filters, the ensemble size scales exponentially with the state dimension for large problems [58]. To circumvent this problem, particle filters based on Rao-Blackwellization [11] have been proposed to partially solve the estimation problem analytically. However, their application remains limited to systems where the required partition of the state space is possible.

In this work, we have proposed an uncertainty propagation methodology combining the Karhunen-Loève (KL) expansion [35] with Perron-Frobenius (PF) operator theory [4]. Further, we have developed a nonlinear state estimation technique which uses the proposed method for forward propagation of uncertainty. Notice that, in the absence of process noise, (3) reduces to the Liouville equation associated with the PF operator [4], which describes the spatio-temporal drift of φ(z, t), accounting for parametric and initial condition uncertainty. Being a first order quasi-linear PDE, the Liouville equation can be solved [3] using the method of characteristics (MOC) [3], and unlike Monte Carlo, such a computation occurs in pointwise exact arithmetic [31,1].
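To make the MOC computation concrete: along each trajectory of ẋ = f(x), the Liouville equation collapses to the scalar ODE φ̇ = −div(f) φ, so every sample carries its exact density value with it. The following sketch is our own illustration (not from the paper); the linear drift ẋ = −ax is a hypothetical choice for which the flow and density updates are available in closed form per step.

```python
import numpy as np

# Liouville equation along characteristics of xdot = f(x) = -a*x:
#   xdot   = f(x),   phidot = -div(f) * phi = a * phi,
# so each sample carries its exact density value along the flow.
a, dt, T = 1.0, 0.01, 1.0
rng = np.random.default_rng(1)
x0 = rng.normal(0.0, 1.0, size=500)               # samples from the initial PDF N(0, 1)
phi = np.exp(-0.5 * x0**2) / np.sqrt(2 * np.pi)   # their initial density values
x = x0.copy()

for _ in range(int(T / dt)):
    x = x * np.exp(-a * dt)      # exact one-step flow map of the linear drift
    phi = phi * np.exp(a * dt)   # exact density update (div f = -a is constant)

# phi now equals exp(a*T) times the initial density at x0, i.e. the exact
# pushforward of N(0, 1) under the map x -> x * exp(-a*T).
```

For a general nonlinear f one would integrate the pair of ODEs numerically, but the structure is identical: no histogram is ever formed, the density is transported pointwise.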
In the nonlinear estimation setting, it has already been shown [1,18] that for systems without process noise, the PF operator based nonlinear filter outperforms the particle filter and the bootstrap filter. Hence, it is reasonable to hope that the proposed estimation technique based on the KL expansion and the PF operator (KLPF) will perform better than other SMC methods. The motivation behind investigating such a mixed parametric-nonparametric approach stems from the fact that the presence of process noise necessitates solving a second order transport PDE, and consequently, MOC based exact arithmetic computation cannot be achieved. This argument usually leads researchers to develop approximation algorithms to solve the second order FPE, which is an exact description of the problem. Here, instead, we derive an approximate ordinary differential equation (ODE) representation of the problem (using KL), and then solve this approximate problem in exact arithmetic (using PF). Hence our approach differs from those which strive to numerically solve the Fokker-Planck PDE: we propose to approximate the problem, while the latter strive to approximate the solution. Convincing numerical evidence in favor of this argument was given in our earlier work []. Through a multivariate Kolmogorov-Smirnov test, that paper also showed that the KLPF solution satisfies at least weak distributional convergence to the true SDE solution, meaning our "first KL, then PF" algorithm is

asymptotically consistent in distribution. However, two issues remained unsettled there: (i) in exactly what sense the convergence of the solution occurs, and (ii) a formal proof in support of this consistency result. This paper addresses both these issues. Further, a detailed numerical investigation is carried out on the performance of the KLPF filter.

The rest of the paper is organized as follows. In Section II, the KL expansion is introduced first, and used to approximate the process noise term. It is also shown why such an approximation yields a sufficiently good solution of the SDE. The KLPF methodology is presented next in Section III with some illustrative examples. Extension of this idea to other types of stochastic evolution equations is pointed out. Then the nonlinear estimation technique, based on KLPF, is proposed in Section IV. In Section V, we demonstrate that for finite dimensional linear and nonlinear filters with known solutions, the KLPF filter outperforms the particle filter. Then we apply this technique to estimate the states of a six dimensional nonlinear system and compare its results with the particle filter. Section VI concludes the paper.

2 Karhunen-Loève Approximation of Stochastic Dynamical Systems

The KL expansion was derived independently by many researchers [38,35,34,45,47] to represent a stochastic process Y(ω, t) as a random linear combination of a set of orthonormal deterministic L² functions {e_i(t)}, i.e.

    Y(ω, t) = Σ_{i=1}^∞ Z_i(ω) e_i(t),   (4)

where Z_i(ω) are random variables. The idea is similar to the Fourier series expansion, where a deterministic linear combination of orthonormal L² functions is used.
Further, if we write Z_i(ω) = √Λ_i ζ_i(ω), where Λ_i ∈ R^+ and {ζ_i(ω)} is a sequence of random variables to be determined, then {Λ_i} and {e_i(t)} can be interpreted as the eigenvalues and eigenfunctions of the covariance function [14]

    C(t_1, t_2) := cov(Y(ω, t_1) − E[Y(ω, t_1)], Y(ω, t_2) − E[Y(ω, t_2)]),   (5)

which admits a spectral decomposition [3] of the form C(t_1, t_2) = Σ_{i=1}^∞ Λ_i e_i(t_1) e_i(t_2). Since the covariance function is bounded, symmetric and positive-definite, the eigenvalue problem can be cast as a homogeneous Fredholm integral equation of the second kind, given by

    ∫_{D_t} C(t_1, t_2) e_i(t_1) dt_1 = Λ_i e_i(t_2).   (6)

Given the covariance function of a stochastic process, the eigenvalue-eigenfunction set can be found by solving (6), and the resulting expansion (4) converges to Y(ω, t) in the mean-square sense [43]. The following results are useful for our formulation.

Theorem 1 (KL expansion of Wiener process) For a Wiener process W(ω, t) with variance σ², the eigenvalues and eigenfunctions {Λ_i, e_i(t)} of its covariance function C(t_1, t_2) = min(t_1, t_2), t_1, t_2 ∈ [0, T], are given by

    Λ_i = 4T² / (π²(2i − 1)²),   e_i(t) = √(2/T) sin((i − 1/2) πt / T),   (7)

and hence the KL expansion of W(ω, t) is of the form

    W(ω, t) m.s.= Σ_{i=1}^∞ ζ_i(ω) √(2/T) sin((i − 1/2) πt / T) / ((i − 1/2) π / T),   (8)

where ζ_i(ω) are i.i.d. samples drawn from N(0, σ²).
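Theorem 1 is easy to check numerically. The sketch below is our own (with σ = 1 and T = 1 assumed): summing the truncated series Σ_i Λ_i e_i(t_1) e_i(t_2) should recover the Wiener covariance min(t_1, t_2), and the same basis matrix yields sample paths of the truncated expansion (8).

```python
import numpy as np

T, N = 1.0, 500
t = np.linspace(0.0, T, 101)
i = np.arange(1, N + 1)
freq = (i - 0.5) * np.pi / T                        # (2i - 1) * pi / (2T)
# sqrt(Lambda_i) * e_i(t) = sqrt(2/T) * sin(freq * t) / freq, per (7)-(8)
B = np.sqrt(2.0 / T) * np.sin(np.outer(t, freq)) / freq

# One sample path of the truncated expansion (8):
rng = np.random.default_rng(0)
W = B @ rng.standard_normal(N)

# Truncated covariance sum_i Lambda_i e_i(t1) e_i(t2), which -> min(t1, t2):
cov = B @ B.T
```

With N = 500 terms, `cov[50, 100]` sits within about 10⁻³ of min(0.5, 1.0) = 0.5, and the diagonal approaches Var W(t) = t; the path `W` starts at W(0) = 0 as every basis function vanishes at the origin.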

Corollary 2 (KL expansion of Gaussian white noise) Since dW(ω, t) = η(ω, t) dt, the KL expansion of the Gaussian white noise η(ω, t) can be obtained by taking the derivative of (8) with respect to t, i.e.

    η(ω, t) m.s.= √(2/T) Σ_{i=1}^∞ ζ_i(ω) cos((i − 1/2) πt / T),   (9)

where the i.i.d. random variables ζ_i(ω) ∼ N(0, σ²).

Proposition 1 (KL expansion applied to SDE) Consider the stochastic dynamical system (1). We will incorporate the sample ω ∈ Ω of the Gaussian white noise in the argument of dW(ω, t). An important point to note is that Ω is independent of the sample space of X(t). Without loss of generality, let us consider only state (instead of extended state) uncertainty in the system. Therefore, (1) can be written in the Langevin form

    ẋ(t) = f(x(t), t) + η(ω, t).   (10)

We consider integrating up to a finite time T; hence, η(ω, t): Ω × [0, T] → R^n is a Gaussian white noise having autocorrelation Q. Utilizing Corollary 2, where the expansion in (9) has been truncated to N terms, (10) can be written as

    ẋ_N(t) = f(x_N(t), t) + √(2/T) Σ_{i=1}^N ζ_i(ω) cos((i − 1/2) πt / T),   (11)

where x_N(t) is the solution of the KL-approximated system.

2.1 Convergence of Solution

Due to the finite term truncation of the white noise KL expansion, the solution of (11) is only an approximation of the SDE solution. Thus, it is imperative to quantify a notion of convergence, if any, between the solution of (11) and that of (10). In this section, we show that as N → ∞, x_N(t) converges to x(t) in the mean square sense, and then discuss the rate of convergence.

2.1.1 Asymptotic Convergence

Theorem 3 Let x(ω, t) be the solution of the nonlinear Itô SDE

    dx(t) = f(x, t) dt + dW(ω, t),   (12a)
    dx(t)/dt = f(x, t) + η(ω, t),   (12b)

where dW(ω, t) = η(ω, t) dt, and f: R^n × [0, T] → R^n satisfies the following:

(1) non-explosion condition: ∃ D, s.t. |f(x, t)| < D(1 + |x|), where x ∈ R^n, t ∈ [0, T];
(2) Lipschitz condition: ∃ C, s.t. |f(x, t) − f(x̃, t)| < C |x − x̃|, where x, x̃ ∈ R^n, t ∈ [0, T].
Let x_N(t) be the solution of the ordinary differential equation (ODE) formed by an N-term approximation of η(ω, t) in (12a) using an orthonormal basis, given by

    dx_N(t)/dt = f(x_N, t) + η_N(ω, t),   (13)

where η_N(ω, t) is the N-term approximation and E[∫_0^T η_N²(ω, t) dt] < ∞. Then

    lim_{N→∞} E[|x(t) − x_N(t)|²] = 0,   (14)

iff x_N(t) is the KL expansion of x(t).

Proof Throughout the proof, we will write x(t) as x(ω, t), where ω ∈ Ω, the sample space of the random process x(t). First we will prove that if (14) holds then x_N(ω, t) is the KL expansion of x(ω, t). Let {φ_m(t)}_{m=1}^∞ be any orthonormal basis. Then x(ω, t) can be written as a convergent series sum

    x(ω, t) = Σ_{m=1}^∞ √(b_m) c_m(ω) φ_m(t).   (15)

Let the solution of (13) be an N-term approximation of x(ω, t) which converges to the true solution in the mean square sense. Hence the truncation error e_N is given by

    e_N(ω, t) = Σ_{m=N+1}^∞ √(b_m) c_m(ω) φ_m(t).   (16)

Projecting x(ω, t) onto φ_m(t) results in

    c_m(ω) = (1/√(b_m)) ∫_0^T x(ω, t) φ_m(t) dt.   (17)

Hence

    E[e_N²(ω, t)] = Σ_{m=N+1}^∞ Σ_{k=N+1}^∞ φ_m(t) φ_k(t) ∫_0^T ∫_0^T E[x(ω, t_1) x(ω, t_2)] φ_m(t_1) φ_k(t_2) dt_1 dt_2
                 = Σ_{m=N+1}^∞ Σ_{k=N+1}^∞ φ_m(t) φ_k(t) ∫_0^T ∫_0^T C_xx(t_1, t_2) φ_m(t_1) φ_k(t_2) dt_1 dt_2,   (18)

where C_xx(t_1, t_2) is the covariance function of x(ω, t). For convergence of the error, the orthonormal basis φ_m(t) should minimize ∫_0^T E[e_N²(ω, t)] dt. Taking the orthonormality of φ_m(t) into account, we get

    ∫_0^T E[e_N²(ω, t)] dt = Σ_{m=N+1}^∞ ∫_0^T ∫_0^T C_xx(t_1, t_2) φ_m(t_1) φ_m(t_2) dt_1 dt_2,   (19)

which is to be minimized over φ_m(t), subject to the constraint

    ∫_0^T φ_m(t) φ_k(t) dt = δ_mk, ∀ m, k > N.   (20)

Introducing Lagrange multipliers b_m, our objective function J(·) becomes

    J(φ_m(t)) = min_{φ_m(t)} Σ_{m=N+1}^∞ [ ∫_0^T ∫_0^T C_xx(t_1, t_2) φ_m(t_1) φ_m(t_2) dt_1 dt_2 − b_m ( ∫_0^T φ_m(t) φ_m(t) dt − 1 ) ].   (21)

Differentiating (21) with respect to φ_m(t) and setting the derivative to zero, we obtain

    ∫_0^T [ ∫_0^T C_xx(t_1, t_2) φ_m(t_1) dt_1 − b_m φ_m(t_2) ] dt_2 = 0,   (22)
    ⇒ ∫_0^T C_xx(t_1, t_2) φ_m(t_1) dt_1 = b_m φ_m(t_2),   (23)

which is the Fredholm integral equation for the covariance function of the random process x(ω, t). Hence φ_m(t) and b_m are the eigenfunction-eigenvalue pairs of C_xx(t_1, t_2). This completes the first part of the proof.

Now we will prove that, if x_N(ω, t) is the KL expansion of x(ω, t), then to ensure that the solution of (13) converges to the solution of (12a) in the mean square sense, x_N(ω, t) must be the solution of (13). To prove this, we recall the following uniqueness conditions on the solution of (12a) and on the KL expansion of a random process.

Proposition 2 ([5], Chap. 5) Suppose the non-explosion condition and the Lipschitz condition are satisfied for f(·,·) in (12a). Let Z be a random variable, independent of the σ-algebra F_∞^(m) generated by η(ω, t), t ≥ 0, with E[Z²] < ∞. Then the stochastic differential equation (12a), where t ∈ [0, T], X(ω, 0) = Z, has a unique t-continuous solution x(ω, t) adapted to the filtration F_t^Z generated by Z, and E[∫_0^T |x(ω, t)|² dt] < ∞.

Proposition 3 ([7], Chap. 2) The KL expansion of the random process x(ω, t), given by x(ω, t) = Σ_{m=1}^∞ √(Λ_m) ξ_m(ω) φ_m(t), is unique.

Let us assume that x_N(ω, t) is the KL expansion of x(ω, t). Furthermore, assume that x_N(ω, t) ≠ x̂_N(ω, t), where x̂_N(ω, t) is the solution of (13). Hence, we assume that although x_N(ω, t) is the KL expansion of x(ω, t), it does not satisfy (13), whose solution converges to the solution of (12a) in the mean square sense. Propositions 2 and 3 tell us that the solution of the SDE is unique and any random process has a unique KL expansion. Also, (13) has a unique solution, as the RHS of (13) satisfies the Lipschitz condition. This can be proven as follows: for the right hand side of (13) to satisfy the Lipschitz condition, we must have

    |f(x, t) + η_N(ω, t) − f(x̃, t) − η_N(ω, t)| < C |x − x̃| ⇔ |f(x, t) − f(x̃, t)| < C |x − x̃|,

which is true since f(·,·) itself satisfies the Lipschitz condition. Hence (12a) has a unique solution that admits a unique KL expansion. Also, according to our assumption, the solution of (13) converges to the solution of (12a) in the mean square sense.
This contradicts our assumption that x_N(ω, t) ≠ x̂_N(ω, t), which completes the proof.

Theorem 3 states the conditions on the solutions of the approximated and true systems for mean square convergence to hold, under certain assumptions on f(·,·). However, no condition is imposed on the initial states, which we investigate next.

Theorem 4 Given the stochastic dynamical system

    dx(t) = f(x, t) dt + dW(ω, t),   (24)

and its corresponding N-term KL approximation

    dx_N(t) = f(x_N, t) dt + Σ_{i=1}^N ξ_i(ω) φ_i(t) dt,   (25)

where lim_{N→∞} E[|W_t − Σ_{i=1}^N ξ_i(ω) φ_i(t)|²] = 0. Then lim_{N→∞} E[|x(t) − x_N(t)|²] = 0, if x(0) = x_N(0).

Proof Integrating (24) and (25) and taking the expected value of the square of the difference,

    E[|x(t) − x_N(t)|²] = E[| (x(0) − x_N(0)) + ∫_0^t (f(x, s) − f(x_N, s)) ds + ∫_0^t d(W_s − Σ_{i=1}^N ξ_i(ω) φ_i(s)) |²]
    ≤ E[|x(0) − x_N(0)|²] + E[| ∫_0^t (f(x, s) − f(x_N, s)) ds |²] + E[| ∫_0^t d(W_s − Σ_{i=1}^N ξ_i(ω) φ_i(s)) |²]
    ≤ E[|x(0) − x_N(0)|²] + t E[ ∫_0^t |f(x, s) − f(x_N, s)|² ds ] + E[| ∫_0^t d(W_s − Σ_{i=1}^N ξ_i(ω) φ_i(s)) |²]   (using Chebyshev's integral inequality).

Taking limits,

    lim_{N→∞} E[|x(t) − x_N(t)|²] ≤ E[|x(0) − x_N(0)|²] + lim_{N→∞} t E[ ∫_0^t |f(x, s) − f(x_N, s)|² ds ] + lim_{N→∞} E[| ∫_0^t d(W_s − Σ_{i=1}^N ξ_i(ω) φ_i(s)) |²].   (26)

Using the Lipschitz criterion and the convergence property of the KL expansion (the last term vanishes), we get

    lim_{N→∞} E[|x(t) − x_N(t)|²] ≤ E[|x(0) − x_N(0)|²] + tC² lim_{N→∞} ∫_0^t E[|x(s) − x_N(s)|²] ds,

i.e. v(t) ≤ B + A ∫_0^t v(s) ds, where v(t) := lim_{N→∞} E[|x(t) − x_N(t)|²], B := E[|x(0) − x_N(0)|²] and tC² ≤ A. By Gronwall's inequality, v(t) ≤ B exp(At) for t ∈ (0, T]. Hence lim_{N→∞} E[|x(t) − x_N(t)|²] = 0, since x(0) = x_N(0), as per our assumption.

Theorem 4 implies that, starting from the same initial condition, for sufficiently large N, the solution of (25) converges to the solution of (24) in the mean square sense. Hence, it can be said that if the process noise term in the Langevin equation is well-approximated using the KL expansion, it entails a good approximation in the solution space. Notice that in the presence of initial state uncertainty, Theorem 4 may not always hold. The following corollary quantifies when it does hold, for uncertain initial conditions.

Corollary 5 If x_N(0) ≠ x(0), and if x_N(0) is the generalized polynomial chaos (gPC) expansion of x(0), then Theorem 4 holds.

Proof Starting from (26) in the proof of Theorem 4, and taking limits on both sides,

    lim_{N→∞} E[|x(t) − x_N(t)|²] ≤ lim_{N→∞} E[|x(0) − x_N(0)|²] + lim_{N→∞} t E[ ∫_0^t |f(x, s) − f(x_N, s)|² ds ] + lim_{N→∞} E[| ∫_0^t d(W_s − Σ_{i=1}^N ξ_i(ω) φ_i(s)) |²].   (27)

Going through the subsequent steps as before, we arrive at the following final result:

    lim_{N→∞} E[|x(t) − x_N(t)|²] = 0, if lim_{N→∞} E[|x(0) − x_N(0)|²] = 0.

If x_N(0) is the gPC expansion of x(0), then they converge in the mean square sense [7]. Hence lim_{N→∞} E[|x(0) − x_N(0)|²] = 0, which implies that lim_{N→∞} E[|x(t) − x_N(t)|²] = 0, by using Gronwall's inequality. This completes our proof.

Fig. 1. This plot illustrates the asymptotic convergence results developed in Section II.A.1 for the Van der Pol oscillator given by (30). Starting from the same initial condition (1, 1), denoted by the filled circle, the dashed and solid curves show the deterministic (zero noise) and stochastic (SDE sample path with zero-mean additive Gaussian noise of variance 0.5) trajectories, respectively. The dash-dotted curve is the trajectory of the KL-approximated system with N = 1 terms, starting from the same initial condition, with process noise the same as that of the SDE path. As N increases, the dash-dotted curve converges to the solid curve in the mean-square sense.

2.1.2 Rate of Convergence

It is known [5] that the rate of convergence of the KL expansion is governed by the decay rate of the sum of the eigenvalues of the covariance function.

Example 1 (Rate of convergence of KL expansion of Wiener process) Consider the stochastic process dx(ω, t) = dW(ω, t). The eigenvalues of the covariance function of x(ω, t) are given by Theorem 1. Thus, the truncation error due to the N-term KL approximation of x(ω, t) is

    Σ_{i=N+1}^∞ Λ_i = Σ_{i=1}^∞ Λ_i − Σ_{i=1}^N Λ_i = (4T²/π²) [ π²/8 − ( π²/8 − (1/4) ψ^(1)(N + 1/2) ) ] = (T²/π²) ψ^(1)(N + 1/2),   (28)

which decays faster than O(N⁻¹), but slower than O(N⁻²). Here ψ^(1)(·) denotes the trigamma function [].

Recently, it has been shown [56] that one can estimate the KL convergence rate by knowing the smoothness of the covariance function, even when the eigenvalues are not available analytically. The main result is that exponential convergence occurs when the covariance function is analytic. If the latter has Sobolev regularity, the KL expansion has algebraic decay. The following example on geometric Brownian motion (GBM) illustrates this.
Example 2 (Rate of convergence of KL expansion of GBM) Consider the GBM dx(ω, t) = a x(ω, t) dt + b x(ω, t) dW(ω, t), where a, b are deterministic constants. The covariance function can be computed from (5) as C(t_1, t_2) = (x_0(ω))² e^(a(t_1 + t_2)) (e^(b² min(t_1, t_2)) − 1). Since C(t_1, t_2) is analytic, the convergence is exponential.
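The tail decay in Example 1 is easy to verify numerically. The sketch below is ours (T = 1 assumed): subtracting the partial eigenvalue sums from the total Σ_i Λ_i = T²/2 exposes the trigamma-governed tail, whose product with N settles near T²/π².

```python
import numpy as np

T = 1.0
lam = lambda i: 4 * T**2 / (np.pi**2 * (2 * i - 1) ** 2)   # eigenvalues from Theorem 1
total = T**2 / 2                                           # sum_{i>=1} Lambda_i = T^2 / 2

for N in (10, 100, 1000):
    tail = total - lam(np.arange(1, N + 1)).sum()          # truncation error (28)
    print(N, tail, tail * N)   # tail * N approaches T^2 / pi^2 ≈ 0.1013
```

The product tail · N creeps up toward 1/π², confirming the roughly O(N⁻¹) decay claimed in (28).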

In the nonlinear estimation setting, the covariance function of the process being estimated is not known beforehand. What is known is the dynamics f(·). This motivates us to investigate the connection between the regularity of f(·) and the regularity of C(t_1, t_2). However, such an analysis has not been done here and will be a topic of our future work.

3 KLPF Formulation

In this section, we propose a methodology to predict the evolution of uncertainty in the KL-approximated system using the PF operator. Let us consider the KL-approximated dynamical system given in (11), where x_N(t) ∈ R^n are the states, having initial PDF φ(x_N(0), 0) = φ_0. We apply MOC to the PF operator, using the formulation given in [3], and form an augmented dynamical system given by

    ẋ_N(t) = f(x_N(t)) + √(2/T) Σ_{i=1}^N ζ_i(ω) cos((i − 1/2) πt / T),   (29a)
    φ̇(x_N(t)) = −div(f(x_N(t))) φ(x_N(t)),   (29b)

where div(f(x_N(t))) := Σ_{i=1}^n ∂f_i(x_N(t))/∂x_Ni(t), and [x_N1, x_N2, ..., x_Nn] are the states of the KL-approximated dynamical system. Equation (29) can be solved to get the value of φ(x_N(t)) along the characteristic curves of the PF operator equation. A detailed discussion of the solution methodology is omitted here for brevity; the reader is referred to [1,1] for the same.

3.1 Illustrative Examples

The proposed methodology is applied to a Van der Pol oscillator, given by

    ẍ(t) = (1 − x²(t)) ẋ(t) − x(t) + η(ω, t),   (30)

and to a Duffing oscillator, with dynamics

    ẍ(t) = 1x(t) − 3x³(t) − 1ẋ(t) + η(ω, t),   (31)

with η(ω, t) having autocorrelation πI. The initial state uncertainty has PDF φ_0 ∼ N([0, 0]^T, diag(1, 1)) for both of the systems (30) and (31).
Taking x_1(t) = x(t) and x_2(t) = ẋ(t), the augmented dynamical system for the Van der Pol oscillator is given by

    ẋ_1(t) = x_2(t),   (32a)
    ẋ_2(t) = (1 − x_1²(t)) x_2(t) − x_1(t) + η(ω, t),   (32b)
    φ̇(x(t)) = −(1 − x_1²(t)) φ(x(t)),   (32c)

and for the Duffing oscillator by

    ẋ_1(t) = x_2(t),   (33a)
    ẋ_2(t) = 1x_1(t) − 3x_1³(t) − 1x_2(t) + η(ω, t),   (33b)
    φ̇(x(t)) = 1φ(x(t)).   (33c)

Next, the initial PDF φ_0 is sampled with a sample size of ν = 5. For the Van der Pol oscillator, the final time T is taken to be 1 s; for the Duffing oscillator, T = 3 s. The number of terms in the KL expansion is kept fixed at N = 7. Figure 2 shows the evolution of the probability densities with time for the two oscillators. It is observed that for the Van der Pol oscillator the probability mass accumulates along the limit cycle, and for the Duffing oscillator we get a bimodal PDF at the final time. This is in agreement with physical intuition for the behavior of these systems.
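For concreteness, the augmented Van der Pol system (32) can be integrated sample-by-sample as below. This is our own sketch: the step size, horizon, sample count and unit noise intensity are illustrative assumptions, and we propagate the log-density rather than the density itself, which is a numerical-robustness choice, not part of the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
T_f, dt, N = 1.0, 1e-3, 7
zeta = rng.standard_normal(N)                         # zeta_i ~ N(0, 1), illustrative
freq = (np.arange(1, N + 1) - 0.5) * np.pi / T_f

def eta(t):
    """N-term KL approximation (9) of one white-noise sample path."""
    return np.sqrt(2.0 / T_f) * np.sum(zeta * np.cos(freq * t))

nu = 500
x = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=nu)   # samples of phi_0
logphi = -0.5 * np.sum(x**2, axis=1) - np.log(2 * np.pi)      # log of the N(0, I) density

for k in range(int(T_f / dt)):
    t = k * dt
    x1, x2 = x[:, 0].copy(), x[:, 1].copy()
    x[:, 0] += dt * x2                                        # (32a)
    x[:, 1] += dt * ((1 - x1**2) * x2 - x1 + eta(t))          # (32b) with KL noise
    logphi += dt * (-(1 - x1**2))                             # (32c): d(log phi)/dt = -div f
```

Each particle ends the run carrying its own transported density value exp(logphi), so no histogram binning is required to visualize φ.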

Fig. 2. Uncertainty propagation for the Van der Pol oscillator (30) in the top row, and the Duffing oscillator (31) in the bottom row. Darker regions have higher PDF value than lighter regions.

3.2 Extensions

It is important to recognize that the main idea behind KLPF is to derive an approximate ODE initial value problem (IVP) from the original stochastic description, thereby facilitating MOC based PF operator computation in exact arithmetic. Thus, it is possible to extend our KLPF formulation to a more general class of stochastic systems, not necessarily described by an SDE IVP, as long as we can arrive at some ODE IVP approximation. In the SDE IVP case, this has been achieved through a two-step "first KL, then PF" algorithm (Figure 3). For general stochastic evolution equations, an intermediate approximation step is needed between the KL approximation and the MOC based PF computation step.

Fig. 3. Schematic of the generalized KLPF formulation. As the right-most flow-chart shows, an intermediate approximation step is needed for a generic stochastic evolution, between the KL approximation and the PF operator based computation.

3.2.1 Example: Uncertainty quantification for stochastic delay differential equations (SDDE)

Consider dynamics described by the nonlinear SDDE [46]

    dx(t) = f(t, x(t), x(t − τ)) dt + dW(ω, t), t > 0,
    x(t) = ψ(t), t ∈ (−τ, 0],   (34)

where τ denotes a fixed time-delay, ψ(t) is the history function, and W(ω, t) denotes the Wiener process. The drift vector field depends on both the current state x(t) and the delayed state x(t − τ). Given (34), solving the delay Fokker-Planck equation [4] is computationally hard for a general nonlinear drift. Instead, we can write the KL-approximated dynamics corresponding to (34) as

    dx_N/dt = f(t, x_N(t), x_N(t − τ)) + η_N(ω, t), t > 0,
    x_N(t) = ψ(t), t ∈ (−τ, 0],   (35)

which is a DDE boundary value problem (BVP), with η_N(ω, t) being the N-term KL approximation of Gaussian white noise. We next use the method-of-steps¹ (MOS) [5] to further reduce this infinite-dimensional BVP to a sequence of finite-dimensional ODE IVPs, as shown:

    ẋ_N^(1)(t) = f(t, x_N^(1)(t), ψ(t − τ)) + η_N(ω, t), t ∈ (0, τ],
    ẋ_N^(2)(t) = f(t, x_N^(2)(t), x_N^(1)(t − τ)) + η_N(ω, t), t ∈ (τ, 2τ],
    ...
    ẋ_N^(m)(t) = f(t, x_N^(m)(t), x_N^(m−1)(t − τ)) + η_N(ω, t), t ∈ ((m − 1)τ, mτ],   (36)

where m ∈ Z^+. Now we can apply the MOC based PF formulation to (36) by solving the following pair of ODEs in the j-th interval ((j − 1)τ, jτ]:

    ẋ_N^(j)(t) = f(t, x_N^(j)(t), x_N^(j−1)(t − τ)) + η_N(ω, t),   (37a)
    φ̇^(j) = −div(f(t, x_N^(j)(t), x_N^(j−1)(t − τ))) φ^(j),   (37b)

with the initial conditions being the respective terminal values of the (j − 1)-th interval. Eqn. (37) computes the evolution of the joint PDF φ^(j)(t, x_N^(j)(t)) along the trajectory x_N^(j)(t).
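The method-of-steps reduction (36) is straightforward to implement. The following deterministic sketch is our own illustration with a hypothetical linear delay drift f(t, x, x_τ) = −x_τ and constant history ψ(t) = 1; the KL noise η_N(t) would simply be added to the right hand side of the Euler step.

```python
import numpy as np

tau, dt, m = 1.0, 1e-3, 3
n = int(tau / dt)
f = lambda t, x, x_del: -x_del        # hypothetical delay drift, for illustration only
psi = np.ones(n)                      # history function psi(t) = 1 sampled on (-tau, 0]

segments, prev, x = [], psi, 1.0      # x(0) = psi(0) = 1
for j in range(m):                    # j-th interval (j*tau, (j+1)*tau]
    seg = np.empty(n)
    for k in range(n):
        t = j * tau + k * dt
        x += dt * f(t, x, prev[k])    # delayed state is read from the previous interval
        seg[k] = x
    segments.append(seg)
    prev = seg                        # terminal segment becomes the next interval's history
```

For this drift the exact solution is x(t) = 1 − t on (0, 1] and x(t) = t²/2 − 2t + 3/2 on (1, 2], which the Euler march reproduces to O(dt); each interval is an ordinary finite-dimensional IVP, exactly as (36) prescribes.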
4 Nonlinear Filtering Using the KLPF Method

Let us consider a nonlinear dynamical system with states x ∈ R^n, measured by a discrete nonlinear measurement model with outputs ỹ_k ∈ R^m, modeled as

    ẋ(t) = f(x, t) + η(ω, t),   (38a)
    ỹ_k = h(x_k, t_k) + ϑ_k,   (38b)

where {t_k}_{k=0}^r are the time instances when measurements are obtained, and {x_k}_{k=0}^r are the corresponding states. η(ω, t) and ϑ_k are zero mean delta correlated Gaussian random processes with covariances Q and R, respectively. Let p_k^−(·) and p_k^+(·) denote the prior and posterior PDFs at time t_k. The state estimation algorithm described next is developed in a similar manner as [1].

¹ MOS provides the intermediate approximation step from the DDE BVP to ODE IVPs, as shown in the third column of Figure 3.

4.1 Step 1: Initialization Step

Sample ν particles from the domain of the initial random variable. Let the particles be {x_0,i}_{i=1}^ν with corresponding PDF values {p_0,i}_{i=1}^ν. The following steps are then performed sequentially, starting from k = 1.

4.2 Step 2: Propagation Step

Consider the N-term KL approximation of (38a) and the corresponding MOC based PF operator equation, given together by (29). With the initial states at the (k − 1)-th step as [x_{k−1,i}, p_{k−1,i}], (29) is integrated over the interval [t_{k−1}, t_k] for each particle, to get the states x_{k,i} and prior PDF values p_{k,i}^− at t_k.

4.3 Step 3: Update Step

First, the likelihood function is determined for each particle using the Gaussian nature of the measurement noise, which is given by

    p_{k,i}(ỹ_k | x_k = x_{k,i}) = (1/√((2π)^m |R|)) exp[ −(1/2) (ỹ_k − h(x_{k,i}, t_k))^T R^(−1) (ỹ_k − h(x_{k,i}, t_k)) ],

where |R| denotes the determinant of the measurement noise covariance matrix. The posterior PDF value for the i-th particle is then obtained as

    p_{k,i}^+ = p_{k,i}(ỹ_k | x_k = x_{k,i}) p_{k,i}^− / Σ_{i=1}^ν p_{k,i}(ỹ_k | x_k = x_{k,i}) p_{k,i}^−.   (39)

The actual posterior PDF is the limit of the above expression as ν → ∞. However, unlike particle filters, here we compute the prior density in exact arithmetic using the PF operator. Hence, accounting for the dynamics of the prior itself makes (39) an accurate representation of the posterior PDF, as no histogram approximation is employed here. A comparative analysis of numerical performance for obtaining PDFs using MC histograms versus the PF operator can be found in [31].

4.4 Step 4: Getting the State Estimate

Computation of the state estimate for the k-th step depends on the desired optimality criterion involving the posterior PDF. The most commonly used criteria are the maximum likelihood criterion (MLC), minimum error criterion (MEC) and minimum variance criterion (MVC) [9]. For the examples following this section, we have used MVC to obtain the state estimate, which is given by

    x̂_k = arg min_{x̄} Σ_{i=1}^ν |x̄ − x_{k,i}|² p_k^+(x_{k,i}) = Σ_{i=1}^ν x_{k,i} p_k^+(x_{k,i}).
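At the particle level, the update step (Step 3) and the MVC estimate (Step 4) amount to a pointwise Bayes rule followed by a weighted mean. The helper below is a minimal sketch of ours; `h`, `R`, the measurement `y`, and the particle set are placeholders to be supplied by the user, not quantities fixed by the paper.

```python
import numpy as np

def klpf_update(x, p_prior, y, h, R):
    """Pointwise Bayes update (39) at the particle locations, followed by
    the minimum-variance (weighted-mean) state estimate."""
    innov = y - np.array([h(xi) for xi in x])                 # (nu, m) innovations
    Rinv = np.linalg.inv(R)
    quad = np.einsum('ij,jk,ik->i', innov, Rinv, innov)       # Mahalanobis terms
    like = np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** R.shape[0] * np.linalg.det(R))
    p_post = like * p_prior                                   # numerator of (39)
    p_post = p_post / p_post.sum()                            # normalize over particles
    x_hat = (x * p_post[:, None]).sum(axis=0)                 # MVC estimate
    return p_post, x_hat

# Toy check: scalar state on a symmetric grid, flat prior, identity measurement
x = np.linspace(-5.0, 7.0, 121)[:, None]
p_post, x_hat = klpf_update(x, np.full(121, 1.0 / 121), np.array([1.0]),
                            lambda xi: xi, np.array([[1.0]]))
```

With a flat prior and identity measurement, the posterior weights reduce to the Gaussian likelihood and the MVC estimate lands on the measurement itself, as expected.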
One can use resampling techniques, often employed in particle filtering, to avoid degeneracy and increase diversity amongst particles [53]. However, qualitatively, since histogram techniques are not used in determining the density functions, this method is less sensitive to the issue of degeneracy.

5 Numerical Results

5.1 Comparison of Posterior Densities of the Particle Filter and the KLPF Estimator

We compare the KLPF estimator to particle filters by investigating the posterior PDFs obtained by the two methods after each filtering step. Our objective is to illustrate the performance improvement achieved by KLPF compared to the particle filter, for cases where the estimation problems are known to admit finite dimensional sufficient statistics. In such cases, knowledge of the true posterior enables us to quantify the performance improvement accomplished by KLPF.

5.1.1 Case I: Kalman Filter

For a linear system with Gaussian uncertainty, we know the exact evolution of the PDFs through the Kalman filter. We compute the Wasserstein metric [54] of the posteriors obtained by the aforementioned methods from the Kalman filter's posterior. The Wasserstein metric compares the distance between the shapes of two distributions, as defined next.

Definition 1 (Wasserstein distance) Let the pair (M, d) denote a metric space M endowed with metric d. For q ≥ 1, let P_q(M) denote the collection of all probability measures µ on M with finite q-th moment, i.e. for some x_0 in M, ∫_M d(x, x_0)^q dµ(x) < ∞. Then the Wasserstein distance of order q, between two probability measures ς_1 and ς_2 in P_q(M), is defined as

    W_q(ς_1, ς_2) := ( inf_{µ ∈ M(ς_1, ς_2)} ∫_{M×M} d(x, x̃)^q dµ(x, x̃) )^(1/q),   (40)

where M(ς_1, ς_2) is the set of all measures supported on the product space M × M, such that their first marginal is ς_1 and second marginal is ς_2.

For the sake of brevity, a detailed discussion of the Wasserstein metric is omitted here; it can be found in refs. [16,59]. If ς_1 ∼ N(µ_1, Σ_1) and ς_2 ∼ N(µ_2, Σ_2), and if we consider q = 2 (requiring the 2nd moment to be finite), then the Wasserstein metric admits a closed form expression in terms of the moments of ς_1 and ς_2:

    W_2(ς_1, ς_2) = [ |µ_1 − µ_2|² + tr(Σ_1) + tr(Σ_2) − 2 tr( (√Σ_1 Σ_2 √Σ_1)^(1/2) ) ]^(1/2).   (41)

In general, the Wasserstein metric can be normalized by dividing W_2(ς_1, ς_2) by the diameter of the product space. As a concrete numerical example, let us consider the linear system given by the following dynamical equations:

    ẋ(t) = 0.5 I_2 x(t) + [1 1]^T w(t),   (42)

where x(t) ∈ R² are the states and I_n ∈ R^(n×n) is the identity matrix. We consider a discrete measurement model given by

    ỹ_k = [1 1] x_k + v_k,   (43)

where ỹ_k are measurements coming at discrete times. w(t) and v_k are assumed to be zero mean delta correlated Gaussian processes with covariances Q = 1/3 and R = 1/, respectively.
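The closed form (41) is simple to evaluate; here is a small helper of ours, using an eigendecomposition-based PSD square root rather than any particular library routine.

```python
import numpy as np

def sqrtm_psd(A):
    """Symmetric PSD matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def w2_gaussian(m1, S1, m2, S2):
    """Wasserstein-2 distance (41) between N(m1, S1) and N(m2, S2)."""
    r1 = sqrtm_psd(S1)
    cross = sqrtm_psd(r1 @ S2 @ r1)
    d2 = np.sum((m1 - m2) ** 2) + np.trace(S1) + np.trace(S2) - 2.0 * np.trace(cross)
    return np.sqrt(max(d2, 0.0))    # clamp guards tiny negative round-off

# Equal covariances: the distance reduces to the mean separation
d = w2_gaussian(np.zeros(2), np.eye(2), np.array([1.0, 0.0]), np.eye(2))   # d = 1.0
```

When the covariances coincide, the trace terms cancel and W_2 is just the Euclidean distance between the means, which makes the function easy to sanity-check.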
We assume Gaussian initial-condition uncertainty with mean [1, 1]ᵀ and covariance diag(1, 1). We draw several sample ensembles of fixed size from the initial distribution, and plot the mean and standard deviation of the Wasserstein distances, as measured from the Kalman filter posterior, in Figure 4. It can be observed that the mean for the KLPF estimator is closer to zero than that of the particle filter. Moreover, the deviation from the mean is lower for the KLPF estimator than for the particle filter. Hence the KLPF estimator captures the evolution of the densities more accurately than the particle filter.

5.1.2 Case II: Beneš Filter

The Beneš filter is one of the few known nonlinear filters that admit a finite-dimensional solution of the nonlinear estimation problem. Here, the nonlinear drift is assumed to satisfy a Riccati differential equation [7] and the measurement model is taken to be affine in the states. For simplicity, we consider the continuous-time scalar Beneš filtering problem of the form

dx(t) = (κ e^{x} − e^{−x}) / (κ e^{x} + e^{−x}) dt + dW(ω, t),
dy(t) = x(t) dt + dV(ω, t),   (44)

Fig. 4. Plot of the mean and standard deviation of the Wasserstein distances of the KLPF estimator (solid line) and the particle filter (dashed line) from the Kalman filter. The vertical lines about the means represent ±1σ limits.

with κ = 0.5 and deterministic initial condition x₀. The process and measurement noise densities are N(0, Q) and N(0, R) respectively, with Q = 1, R = 1. It can be shown [4] that the drift nonlinearity in (44) satisfies the necessary Riccati condition, and the resulting solution [15] is given by the normalized posterior density

φ(x(t) | Y_t) = √(coth(t)/(2π)) · (κ e^{x} + e^{−x}) / (κ e^{I_t(y(ω))} + e^{−I_t(y(ω))}) · exp(−½ Γ(t)),   (45)

where Y_t is the history (filtration) up to time t, and

I_t(y(ω)) := sech(t) [ x₀ + ∫₀ᵗ sinh(s) dy_s(ω) ],   Γ(t) := tanh(t) + coth(t) (x − I_t(y(ω)))².   (46)

Notice that for this nonlinear, non-Gaussian estimation problem, unlike the Kalman filter case, we cannot write the Wasserstein distance between the true posterior (45) and the particle filter/KLPF posterior as an analytical expression in terms of the respective sufficient statistics. Thus, in order to compute the Wasserstein time history, we resort to the optimization problem (40), which can be cast [33] as a linear program (LP). At each time, we sample (45) using the Metropolis-Hastings MCMC technique [13], and solve the LP between the sampled true Beneš posterior and the particle filter/KLPF posterior, yielding the normalized Wasserstein trajectories shown in Figure 5. As in the Kalman filter case, as time progresses the KLPF posterior gets closer to the true Beneš posterior than the particle filter does.
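The closed-form posterior (45)-(46) can be evaluated and checked numerically. A small sketch, with illustrative values of t and I_t and the normalization √(coth t/(2π)) chosen so that the density integrates to one:

```python
import numpy as np

def benes_posterior(x, t, I_t, kappa=0.5):
    """Closed-form Benes posterior, eq. (45):
    phi(x|Y_t) = sqrt(coth t/(2 pi)) * (k e^x + e^-x)/(k e^I + e^-I) * exp(-Gamma/2),
    with Gamma(t) = tanh t + coth t * (x - I_t)^2 as in (46)."""
    coth_t = 1.0 / np.tanh(t)
    gamma = np.tanh(t) + coth_t * (x - I_t) ** 2
    num = kappa * np.exp(x) + np.exp(-x)
    den = kappa * np.exp(I_t) + np.exp(-I_t)
    return np.sqrt(coth_t / (2.0 * np.pi)) * (num / den) * np.exp(-0.5 * gamma)

x = np.linspace(-10.0, 10.0, 4001)
pdf = benes_posterior(x, t=1.0, I_t=0.3)                 # illustrative t and I_t
mass = np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(x))   # trapezoid rule, ~1
```

The density is a cosh-weighted Gaussian with variance tanh(t), so its total mass on a grid wide enough to cover the tails should be unity; this provides a quick check that the sufficient statistics I_t and Γ(t) are being assembled consistently.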

Fig. 5. Plot of the mean and standard deviation of the Wasserstein distances of the KLPF estimator (solid line) and the particle filter (dashed line) for the Beneš filter. The vertical lines about the means represent ±1σ limits.

5.2 Application to Hypersonic Entry

The KLPF filtering technique is applied next to estimate the states of a hypersonic spacecraft entering the atmosphere of Mars. The entry dynamics is given by the noisy version of Vinh's equations [12]:

ṙ = v sin γ + η_r,   (47a)
θ̇ = (v cos γ cos ξ) / (r cos λ) + η_θ,   (47b)
λ̇ = (v cos γ sin ξ) / r + η_λ,   (47c)
v̇ = −ρv²/(2B_c) − g sin γ + Ω² r cos λ (sin γ cos λ − cos γ sin λ sin ξ) + η_v,   (47d)
γ̇ = (v/r − g/v) cos γ + (ρv/(2B_c)) (L/D) cos σ + 2Ω cos λ cos ξ + (Ω² r/v) cos λ (cos γ cos λ + sin γ sin λ sin ξ) + η_γ,   (47e)
ξ̇ = (ρv/(2B_c)) (L/D) sin σ / cos γ − (v/r) cos γ cos ξ tan λ + 2Ω (tan γ cos λ sin ξ − sin λ) − (Ω² r/(v cos γ)) sin λ cos λ cos ξ + η_ξ,   (47f)

where r is the distance of the vehicle from the planet's center, θ and λ are the Mars-centric longitude and latitude respectively, v is the total velocity of the vehicle, γ is the flight path angle, and ξ is the azimuth angle. The symbol g denotes the acceleration due to gravity, B_c denotes the ballistic coefficient, L/D stands for the lift-to-drag ratio of the vehicle, σ is the bank angle, Ω is the angular velocity of the planet, and ρ is the atmospheric density given by ρ = ρ₀ e^{(h₁ − h)/h₂}, where ρ₀, h₁ and h₂ are constants depending on the planet's atmospheric model, h = r − R_m is the height measured from the planet's surface, and R_m is the mean equatorial radius of the planet. η_r, η_θ, η_λ, η_v, η_γ and η_ξ are zero-mean Gaussian process noises. The values of the constants in (47a) through (47f), used to simulate reentry in the Martian atmosphere, are given in Table 1 [1]. The system is assumed to have initial Gaussian state uncertainty with mean vector µ₀ = [R_m + 54 km, 0°, 3°, 0.4 km/s, 9°, 0.573°]ᵀ and covariance matrix Σ₀ = diag(5.4 km, 3°, 3°, 4 m/s, 0.9°, 0.57°).
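The dynamics (47) can be propagated with a simple Euler-Maruyama step (additive process noise, as in the model above). The sketch below uses illustrative stand-in parameter and initial-condition values, not the Table 1 entries:

```python
import numpy as np

# Illustrative stand-in constants (SI units), not the Table 1 values.
RM, G, BC, LD, SIGMA, OMEGA = 3.397e6, 3.71, 72.8, 0.3, 0.0, 7.1e-5
RHO0, H1, H2 = 0.0019, 9.8e3, 20.0e3

def drift(s):
    """Deterministic part of Vinh's equations (47a)-(47f)."""
    r, th, lam, v, gam, xi = s
    rho = RHO0 * np.exp((H1 - (r - RM)) / H2)    # exponential atmosphere
    return np.array([
        v * np.sin(gam),
        v * np.cos(gam) * np.cos(xi) / (r * np.cos(lam)),
        v * np.cos(gam) * np.sin(xi) / r,
        -rho * v**2 / (2*BC) - G * np.sin(gam)
            + OMEGA**2 * r * np.cos(lam) * (np.sin(gam) * np.cos(lam)
                                            - np.cos(gam) * np.sin(lam) * np.sin(xi)),
        (v/r - G/v) * np.cos(gam) + rho * v / (2*BC) * LD * np.cos(SIGMA)
            + 2 * OMEGA * np.cos(lam) * np.cos(xi)
            + OMEGA**2 * r / v * np.cos(lam) * (np.cos(gam) * np.cos(lam)
                                                + np.sin(gam) * np.sin(lam) * np.sin(xi)),
        rho * v / (2*BC) * LD * np.sin(SIGMA) / np.cos(gam)
            - (v/r) * np.cos(gam) * np.cos(xi) * np.tan(lam)
            + 2 * OMEGA * (np.tan(gam) * np.cos(lam) * np.sin(xi) - np.sin(lam))
            - OMEGA**2 * r / (v * np.cos(gam)) * np.sin(lam) * np.cos(lam) * np.cos(xi),
    ])

def propagate(s0, dt, n, q, rng):
    """Euler-Maruyama: s_{k+1} = s_k + f(s_k) dt + sqrt(q dt) * N(0, I)."""
    s = np.array(s0, dtype=float)
    for _ in range(n):
        s = s + drift(s) * dt + np.sqrt(q * dt) * rng.normal(size=6)
    return s

s0 = [RM + 54e3, 0.0, np.deg2rad(3.0), 3.4e3, np.deg2rad(-9.0), np.deg2rad(0.573)]
sT = propagate(s0, dt=0.1, n=100, q=1e-12, rng=np.random.default_rng(2))
```

With a negative flight path angle the altitude and, under drag, the velocity both decrease over the propagation window, which gives a basic physical sanity check on the implementation.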
The discrete measurement model consists of six measurements and is given by

ỹ = [q̄, H, γ, θ, λ, ξ]ᵀ + ϑ,   (48)

where q̄ = ½ρv² is the dynamic pressure and H = K ρ^{1/2} v^{3.15} is the heating rate, with K being the scaled material heating coefficient [8]. ϑ ∈ R⁶ is a vector of zero-mean, delta-correlated discrete measurement noise.
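The noiseless part of (48) is a direct function of the state. A sketch, assuming the exponential density model ρ = ρ₀ e^{(h₁ − h)/h₂} from the text and leaving K as a parameter (its numerical value is not reproduced here):

```python
import numpy as np

def entry_measurement(state, rho0, h1, h2, Rm, K):
    """Noiseless part of (48): [qbar, H, gamma, theta, lam, xi], with
    dynamic pressure qbar = 0.5*rho*v^2 and heating rate H = K*sqrt(rho)*v^3.15.
    Assumes rho = rho0 * exp((h1 - h)/h2), h = r - Rm."""
    r, theta, lam, v, gamma, xi = state
    rho = rho0 * np.exp((h1 - (r - Rm)) / h2)
    qbar = 0.5 * rho * v**2
    H = K * np.sqrt(rho) * v**3.15
    return np.array([qbar, H, gamma, theta, lam, xi])

# toy, non-physical numbers chosen so that rho = 1
y = entry_measurement([1.0, 0.1, 0.2, 2.0, -0.3, 0.4],
                      rho0=1.0, h1=1.0, h2=1.0, Rm=0.0, K=1.0)
```

Note that only the first two channels are nonlinear in the state; the remaining four are direct (noisy) observations of γ, θ, λ, and ξ.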

Table 1
Parameter values for (47)

Description of Constants                   Value
Radius of Mars                             R_m
Acceleration due to gravity of Mars        g = 3.71 m/s²
Ballistic coefficient of the vehicle       B_c = 7.8 kg/m²
Lift-to-drag ratio of the vehicle          L/D = 0.3
Density at the surface of Mars             ρ₀ = 0.19 kg/m³
Scale heights for density computation      h₁ = 9.8 km, h₂
Escape velocity of Mars                    v_c = 5.7 km/s
Angular velocity of Mars                   Ω
Bank angle                                 σ

Equations (47) and (48) are non-dimensionalized before applying the filtering algorithms. The characteristic mass, length and time for this non-dimensionalization are the mass of the reentry vehicle, R_m, and R_m/v_c, respectively. In scaled units, the process noise is assumed to have covariance Q, and the measurement noise covariance R is a scaled identity matrix I₆.

Fig. 6. Plots of √(σ² − CRLB) for all six states (r, θ, λ, v, γ, ξ). The solid line represents the KLPF filter (3 particles); the dashed, dash-dotted and asterisked solid lines represent particle filters with increasing numbers of particles.

Figure 6 shows the plots of the square root of the difference between the variance and the Cramér-Rao lower bound (CRLB) for each state. The KLPF estimator with sample size 3 is compared with particle filters with 3 and 5 particles. We can see that the difference for the KLPF estimator is lower than that of the particle filters for all the states. Hence, for the present case, the solution obtained from the proposed estimation scheme is closer to the true minimum-variance solution than that obtained from the particle filters. Next, we plot the final posterior univariate and bivariate marginals obtained from the state estimation algorithms. The marginal PDFs were calculated using the algorithm given in ref. [3].
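The specific marginal-PDF algorithm of ref. [3] is not reproduced here, but a generic univariate marginal estimate from a weighted particle cloud would look like the following sketch (function name and bandwidth are illustrative):

```python
import numpy as np

def marginal_pdf(samples, weights, grid, bandwidth):
    """Univariate marginal from weighted particles via a Gaussian kernel
    estimate: p(x) = sum_i w_i * N(x; x_i, bandwidth^2), with normalized w_i."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    z = (grid[:, None] - np.asarray(samples)[None, :]) / bandwidth
    kern = np.exp(-0.5 * z**2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return kern @ w

rng = np.random.default_rng(1)
samples = rng.normal(size=500)        # stand-in particle cloud for one state
grid = np.linspace(-6.0, 6.0, 1201)
pdf = marginal_pdf(samples, np.ones(500), grid, bandwidth=0.3)
mass = np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(grid))   # ~1 on a wide grid
```

Because the kernels integrate to one and the weights are normalized, the resulting curve integrates to one over any grid wide enough to cover the particle cloud.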
Figure 7 shows a comparison of the univariate marginals obtained from the KLPF estimator and the particle filter, both with 3 particles. Figure 8 plots the bivariate marginal PDFs obtained from the KLPF estimator and the particle filter. It can be observed that the KLPF estimator is

able to reduce the variance and capture the localization of uncertainty better than the particle filter with the same number of particles.

Fig. 7. Final posterior univariate marginal PDFs for all states obtained from the KLPF estimator (solid line) and the particle filter (dashed line) with 3 particles.

The KLPF estimator also performs better than particle filters with a larger number of particles. Figure 9 and Figure 10 show the final posterior univariate and bivariate PDFs, respectively, for the KLPF estimator with 3 particles and the particle filter with 5 particles. It can be seen that the KLPF estimator reduces the variance and captures the uncertainty localization better than the particle filter, although the particle filter has a larger number of particles than in the previous case.

6 Conclusion

This work presents a method for approximating the noise term in stochastic dynamical systems using the KL expansion. It is proved that the solution of the resulting approximate dynamical system converges to the true solution in the mean square sense. A technique to propagate uncertainty in SDEs, combining the above method with the PF operator, is proposed. A nonlinear estimation technique is then developed using the KLPF uncertainty propagation methodology; it is applied to estimate the states of a nonlinear system, and its performance is compared with particle-filtering based estimation techniques. It is observed that the developed estimation technique predicts the evolution of the posterior densities better than particle filters. Moreover, for the hypersonic reentry problem, the KLPF based estimator outperformed the sequential Monte Carlo based methods in terms of reducing the variance and capturing the localization of uncertainty.
We conclude that explicitly accounting for the prior dynamics via the KLPF formulation enables a substantial filtering performance improvement over the commonly used particle filtering technique.

References

[1] A. Halder and R. Bhattacharya. Beyond Monte Carlo: A computational framework for uncertainty propagation in planetary entry, descent and landing. In AIAA Guidance, Navigation and Control Conference, Toronto, ON, 2010.
[2] M. Abramowitz and I.A. Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, volume 55. Dover Publications.

Fig. 8. Final posterior bivariate marginal PDFs obtained from the KLPF estimator and the particle filter with 3 particles: in panels (a) and (b), the top row is from the particle filter and the bottom row from the KLPF estimator. The darker regions represent lower PDF values and the lighter regions higher values.

Fig. 9. Final posterior univariate marginal PDFs for all states obtained from the KLPF estimator (solid line) with 3 particles and the particle filter (dashed line) with 5 particles.

[3] M.S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50(2):174-188, 2002.
[4] A. Bain and D. Crişan. Fundamentals of Stochastic Filtering, volume 60. Springer Verlag, 2009.
[5] C.T.H. Baker, C.A.H. Paul, and D.R. Willé. Issues in the numerical solution of evolutionary delay differential equations. Advances in Computational Mathematics, 3(1):171-196, 1995.
[6] R. Bellman. Dynamic Programming. Princeton, NJ: Princeton Univ. Press, 1957.
[7] V.E. Beneš. Exact finite-dimensional filters for certain diffusions with nonlinear drift. Stochastics: An International Journal of Probability and Stochastic Processes, 5(1-2):65-92, 1981.
[8] K.P. Bollino, I.M. Ross, and D.D. Doman. Optimal nonlinear feedback guidance for reentry vehicles. In AIAA Guidance, Navigation and Control Conference, 2006.
[9] A.E. Bryson and Y.C. Ho. Applied Optimal Control: Optimization, Estimation, and Control. Hemisphere Pub., 1975.
[10] H.J. Bungartz and M. Griebel. Sparse grids. Acta Numerica, 13(1):147-269, 2004.
[11] G. Casella and C.P. Robert. Rao-Blackwellisation of sampling schemes. Biometrika, 83(1):81-94, 1996.
[12] J.S. Chern and N.X. Vinh. Optimum reentry trajectories of a lifting vehicle. Technical Report NASA-CR-336, NASA.
[13] S. Chib and E. Greenberg. Understanding the Metropolis-Hastings algorithm. The American Statistician, 49(4):327-335, 1995.
[14] N. Cressie. Statistics for spatial data. Terra Nova, 4(5), 1992.
[15] D. Crisan. A direct computation of the Beneš filter conditional density. Stochastics: An International Journal of Probability and Stochastic Processes, 55(1-2):47-54, 1995.
[16] J.A. Cuesta and C. Matrán. Notes on the Wasserstein metric in Hilbert spaces.
The Annals of Probability, 17(3):1264-1276, 1989.
[17] F. Daum and J. Huang. Curse of dimensionality and particle filters. In Proceedings of the IEEE Aerospace Conference, 2003.
[18] F. Daum and M. Krichman. Non-particle filters. In Proceedings of SPIE, volume 6236, 2006.
[19] M. Di Paola and A. Sofi. Approximate solution of the Fokker-Planck-Kolmogorov equation. Probabilistic Engineering Mechanics, 17(4), 2002.
[20] A. Doucet, S.J. Godsill, and C. Andrieu. On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10(3):197-208, 2000.
[21] P. Dutta and R. Bhattacharya. Hypersonic state estimation using the Frobenius-Perron operator. Journal of Guidance, Control, and Dynamics, 34(2):325-344, 2011.
[22] P. Dutta, A. Halder, and R. Bhattacharya. Uncertainty quantification for stochastic nonlinear systems using Perron-Frobenius operator and Karhunen-Loève expansion. In IEEE Multi-Conference on Systems and Control, Dubrovnik, Croatia, October 2012.

Fig. 10. Final posterior bivariate marginal PDFs obtained from the KLPF estimator with 3 particles and the particle filter with 5 particles: in panels (a) and (b), the top row is from the particle filter and the bottom row from the KLPF estimator. The darker regions represent lower PDF values and the lighter regions higher values.

[23] L.C. Evans. Partial Differential Equations. Graduate Studies in Mathematics, 19, 1998.
[24] T.D. Frank. Delay Fokker-Planck equations, perturbation theory, and data analysis for nonlinear stochastic systems with time delays. Physical Review E, 71(3):031106, 2005.
[25] P. Frauenfelder, C. Schwab, and R.A. Todor. Finite elements for elliptic problems with stochastic coefficients. Computer Methods in Applied Mechanics and Engineering, 194(2-5):205-228, 2005.
[26] J. Garcke and M. Griebel. On the computation of the eigenproblems of hydrogen and helium in strong magnetic and electric fields with the sparse grid combination technique. Journal of Computational Physics, 165(2), 2000.
[27] R.G. Ghanem and P.D. Spanos. Stochastic Finite Elements: A Spectral Approach. Dover Publications, 2003.
[28] W.R. Gilks and C. Berzuini. Following a moving target: Monte Carlo inference for dynamic Bayesian models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63(1):127-146, 2001.
[29] N.J. Gordon, D.J. Salmond, and A.F.M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In IEE Proceedings F (Radar and Signal Processing), volume 140, pages 107-113, 1993.
[30] M. Grigoriu. Stochastic Calculus: Applications in Science and Engineering. Birkhäuser, 2002.
[31] A. Halder and R. Bhattacharya. Performance bounds for dispersion analysis: A comparison between Monte Carlo and Perron-Frobenius operator approach. Preprint.
[32] A. Halder and R. Bhattacharya. Dispersion analysis in hypersonic flight during planetary entry using stochastic Liouville equation. Journal of Guidance, Control, and Dynamics, 34(2):459-474, 2011.
[33] A. Halder and R. Bhattacharya. Model validation: A probabilistic formulation. In Decision and Control and European Control Conference (CDC-ECC), 2011 50th IEEE Conference on, December 2011.
[34] M. Kac and A.J.F. Siegert. An explicit representation of a stationary Gaussian process. The Annals of Mathematical Statistics, 18(3):438-442, 1947.
[35] K. Karhunen.
Über lineare Methoden in der Wahrscheinlichkeitsrechnung. Ann. Acad. Sci. Fennicae, Ser. A. I. Math.-Phys., 37:1-79, 1947.
[36] K. Kikuchi, M. Yoshida, T. Maekawa, and H. Watanabe. Metropolis Monte Carlo method as a numerical technique to solve the Fokker-Planck equation. Chemical Physics Letters, 185(3-4):335-338, 1991.
[37] A.N. Kolmogorov. On analytical methods in the theory of probability. Uspekhi Mat. Nauk, 1938:5-41, 1938.
[38] D.D. Kosambi. Statistics in function space. J. Indian Math. Soc., 7(1):76-88, 1943.
[39] M. Kumar, S. Chakravorty, and J.L. Junkins. A semianalytic meshless approach to the transient Fokker-Planck equation. Probabilistic Engineering Mechanics, 25(3):323-331, 2010.
[40] M. Kumar, P. Singla, S. Chakravorty, and J.L. Junkins. The partition of unity finite element approach to the stationary Fokker-Planck equation. In AIAA/AAS Astrodynamics Specialist Conference and Exhibit, Keystone, CO, August 2006.
[41] H.P. Langtangen. A general numerical solution method for Fokker-Planck equations with applications to structural reliability. Probabilistic Engineering Mechanics, 6(1):33-48, 1991.
[42] A. Lasota and M.C. Mackey. Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics, volume 97. Springer, 1994.
[43] A. Leon-Garcia. Probability, Statistics, and Random Processes for Electrical Engineering. Prentice Hall, 2008.
[44] J.S. Liu and R. Chen. Sequential Monte Carlo methods for dynamic systems. Journal of the American Statistical Association, 93(443):1032-1044, 1998.
[45] M. Loève. Fonctions aléatoires du second ordre. Supplement to P. Lévy, Processus Stochastiques et Mouvement Brownien, Gauthier-Villars, Paris, 1948.
[46] A. Longtin. Stochastic delay-differential equations. In Complex Time-Delay Systems, 2010.
[47] J. Lumley. Stochastic Tools in Turbulence. Academic Press, New York, 1970.
[48] C.S. Manohar and D. Roy. Monte Carlo filters for identification of nonlinear structural dynamical systems. Sadhana: Acad. Proc. Eng. Sciences, 31:399-427, 2006.
[49] A. Masud and L.A. Bergman.
Application of multi-scale finite element methods to the solution of the Fokker-Planck equation. Computer Methods in Applied Mechanics and Engineering, 194(12-16), 2005.
[50] B.K. Øksendal. Stochastic Differential Equations: An Introduction with Applications. Springer Verlag, 2003.
[51] N. Oudjane and C. Musso. Progressive correction for regularized particle filters. In Proceedings of the Third International Conference on Information Fusion, volume 2. IEEE, 2000.
[52] H. Risken. The Fokker-Planck Equation: Methods of Solution and Applications, volume 18. Springer Verlag, 1996.
[53] B. Ristic, S. Arulampalam, and N. Gordon. Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech House Publishers, 2004.
[54] L. Rüschendorf. The Wasserstein distance and approximation theorems. Probability Theory and Related Fields, 70(1):117-129, 1985.
[55] J. Rust. Using randomization to break the curse of dimensionality. Econometrica, 65(3):487-516, 1997.
[56] C. Schwab and R.A. Todor. Karhunen-Loève approximation of random fields by generalized fast multipole methods. Journal of Computational Physics, 217(1):100-122, 2006.


More information

Connections between score matching, contrastive divergence, and pseudolikelihood for continuous-valued variables. Revised submission to IEEE TNN

Connections between score matching, contrastive divergence, and pseudolikelihood for continuous-valued variables. Revised submission to IEEE TNN Connections between score matching, contrastive divergence, and pseudolikelihood for continuous-valued variables Revised submission to IEEE TNN Aapo Hyvärinen Dept of Computer Science and HIIT University

More information

Seminar: Data Assimilation

Seminar: Data Assimilation Seminar: Data Assimilation Jonas Latz, Elisabeth Ullmann Chair of Numerical Mathematics (M2) Technical University of Munich Jonas Latz, Elisabeth Ullmann (TUM) Data Assimilation 1 / 28 Prerequisites Bachelor:

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing

More information

Geodesic Density Tracking with Applications to Data Driven Modeling

Geodesic Density Tracking with Applications to Data Driven Modeling Geodesic Density Tracking with Applications to Data Driven Modeling Abhishek Halder and Raktim Bhattacharya Department of Aerospace Engineering Texas A&M University College Station, TX 77843-3141 American

More information

Markov Chain Monte Carlo Methods for Stochastic Optimization

Markov Chain Monte Carlo Methods for Stochastic Optimization Markov Chain Monte Carlo Methods for Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge U of Toronto, MIE,

More information

Dependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline.

Dependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline. Practitioner Course: Portfolio Optimization September 10, 2008 Before we define dependence, it is useful to define Random variables X and Y are independent iff For all x, y. In particular, F (X,Y ) (x,

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Sequential Monte Carlo and Particle Filtering. Frank Wood Gatsby, November 2007

Sequential Monte Carlo and Particle Filtering. Frank Wood Gatsby, November 2007 Sequential Monte Carlo and Particle Filtering Frank Wood Gatsby, November 2007 Importance Sampling Recall: Let s say that we want to compute some expectation (integral) E p [f] = p(x)f(x)dx and we remember

More information

Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term

Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term 1 Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term Enrico Priola Torino (Italy) Joint work with G. Da Prato, F. Flandoli and M. Röckner Stochastic Processes

More information

State Space Representation of Gaussian Processes

State Space Representation of Gaussian Processes State Space Representation of Gaussian Processes Simo Särkkä Department of Biomedical Engineering and Computational Science (BECS) Aalto University, Espoo, Finland June 12th, 2013 Simo Särkkä (Aalto University)

More information

A Polynomial Chaos Approach to Robust Multiobjective Optimization

A Polynomial Chaos Approach to Robust Multiobjective Optimization A Polynomial Chaos Approach to Robust Multiobjective Optimization Silvia Poles 1, Alberto Lovison 2 1 EnginSoft S.p.A., Optimization Consulting Via Giambellino, 7 35129 Padova, Italy s.poles@enginsoft.it

More information

Random Fields in Bayesian Inference: Effects of the Random Field Discretization

Random Fields in Bayesian Inference: Effects of the Random Field Discretization Random Fields in Bayesian Inference: Effects of the Random Field Discretization Felipe Uribe a, Iason Papaioannou a, Wolfgang Betz a, Elisabeth Ullmann b, Daniel Straub a a Engineering Risk Analysis Group,

More information

Particle filters, the optimal proposal and high-dimensional systems

Particle filters, the optimal proposal and high-dimensional systems Particle filters, the optimal proposal and high-dimensional systems Chris Snyder National Center for Atmospheric Research Boulder, Colorado 837, United States chriss@ucar.edu 1 Introduction Particle filters

More information

Karhunen-Loève Expansions of Lévy Processes

Karhunen-Loève Expansions of Lévy Processes Karhunen-Loève Expansions of Lévy Processes Daniel Hackmann June 2016, Barcelona supported by the Austrian Science Fund (FWF), Project F5509-N26 Daniel Hackmann (JKU Linz) KLE s of Lévy Processes 0 / 40

More information

State Estimation using Moving Horizon Estimation and Particle Filtering

State Estimation using Moving Horizon Estimation and Particle Filtering State Estimation using Moving Horizon Estimation and Particle Filtering James B. Rawlings Department of Chemical and Biological Engineering UW Math Probability Seminar Spring 2009 Rawlings MHE & PF 1 /

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

Benjamin L. Pence 1, Hosam K. Fathy 2, and Jeffrey L. Stein 3

Benjamin L. Pence 1, Hosam K. Fathy 2, and Jeffrey L. Stein 3 2010 American Control Conference Marriott Waterfront, Baltimore, MD, USA June 30-July 02, 2010 WeC17.1 Benjamin L. Pence 1, Hosam K. Fathy 2, and Jeffrey L. Stein 3 (1) Graduate Student, (2) Assistant

More information

Gaussian Process Approximations of Stochastic Differential Equations

Gaussian Process Approximations of Stochastic Differential Equations Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Centre for Computational Statistics and Machine Learning University College London c.archambeau@cs.ucl.ac.uk CSML

More information

L09. PARTICLE FILTERING. NA568 Mobile Robotics: Methods & Algorithms

L09. PARTICLE FILTERING. NA568 Mobile Robotics: Methods & Algorithms L09. PARTICLE FILTERING NA568 Mobile Robotics: Methods & Algorithms Particle Filters Different approach to state estimation Instead of parametric description of state (and uncertainty), use a set of state

More information

Soil Uncertainty and Seismic Ground Motion

Soil Uncertainty and Seismic Ground Motion Boris Jeremić and Kallol Sett Department of Civil and Environmental Engineering University of California, Davis GEESD IV May 2008 Support by NSF CMMI 0600766 Outline Motivation Soils and Uncertainty Stochastic

More information

Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes

Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes Ellida M. Khazen * 13395 Coppermine Rd. Apartment 410 Herndon VA 20171 USA Abstract

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

Exercises Tutorial at ICASSP 2016 Learning Nonlinear Dynamical Models Using Particle Filters

Exercises Tutorial at ICASSP 2016 Learning Nonlinear Dynamical Models Using Particle Filters Exercises Tutorial at ICASSP 216 Learning Nonlinear Dynamical Models Using Particle Filters Andreas Svensson, Johan Dahlin and Thomas B. Schön March 18, 216 Good luck! 1 [Bootstrap particle filter for

More information

Particle Filtering Approaches for Dynamic Stochastic Optimization

Particle Filtering Approaches for Dynamic Stochastic Optimization Particle Filtering Approaches for Dynamic Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge I-Sim Workshop,

More information

Ergodicity in data assimilation methods

Ergodicity in data assimilation methods Ergodicity in data assimilation methods David Kelly Andy Majda Xin Tong Courant Institute New York University New York NY www.dtbkelly.com April 15, 2016 ETH Zurich David Kelly (CIMS) Data assimilation

More information

Dependence. MFM Practitioner Module: Risk & Asset Allocation. John Dodson. September 11, Dependence. John Dodson. Outline.

Dependence. MFM Practitioner Module: Risk & Asset Allocation. John Dodson. September 11, Dependence. John Dodson. Outline. MFM Practitioner Module: Risk & Asset Allocation September 11, 2013 Before we define dependence, it is useful to define Random variables X and Y are independent iff For all x, y. In particular, F (X,Y

More information

A new Hierarchical Bayes approach to ensemble-variational data assimilation

A new Hierarchical Bayes approach to ensemble-variational data assimilation A new Hierarchical Bayes approach to ensemble-variational data assimilation Michael Tsyrulnikov and Alexander Rakitko HydroMetCenter of Russia College Park, 20 Oct 2014 Michael Tsyrulnikov and Alexander

More information

Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs

Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs Roman Andreev ETH ZÜRICH / 29 JAN 29 TOC of the Talk Motivation & Set-Up Model Problem Stochastic Galerkin FEM Conclusions & Outlook Motivation

More information

D (1) i + x i. i=1. j=1

D (1) i + x i. i=1. j=1 A SEMIANALYTIC MESHLESS APPROACH TO THE TRANSIENT FOKKER-PLANCK EQUATION Mrinal Kumar, Suman Chakravorty, and John L. Junkins Texas A&M University, College Station, TX 77843 mrinal@neo.tamu.edu, schakrav@aeromail.tamu.edu,

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

Lecture 7: Optimal Smoothing

Lecture 7: Optimal Smoothing Department of Biomedical Engineering and Computational Science Aalto University March 17, 2011 Contents 1 What is Optimal Smoothing? 2 Bayesian Optimal Smoothing Equations 3 Rauch-Tung-Striebel Smoother

More information

Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations

Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Department of Biomedical Engineering and Computational Science Aalto University April 28, 2010 Contents 1 Multiple Model

More information

Particle Filtering a brief introductory tutorial. Frank Wood Gatsby, August 2007

Particle Filtering a brief introductory tutorial. Frank Wood Gatsby, August 2007 Particle Filtering a brief introductory tutorial Frank Wood Gatsby, August 2007 Problem: Target Tracking A ballistic projectile has been launched in our direction and may or may not land near enough to

More information

Sequential Monte Carlo Methods in High Dimensions

Sequential Monte Carlo Methods in High Dimensions Sequential Monte Carlo Methods in High Dimensions Alexandros Beskos Statistical Science, UCL Oxford, 24th September 2012 Joint work with: Dan Crisan, Ajay Jasra, Nik Kantas, Andrew Stuart Imperial College,

More information

Introduction to asymptotic techniques for stochastic systems with multiple time-scales

Introduction to asymptotic techniques for stochastic systems with multiple time-scales Introduction to asymptotic techniques for stochastic systems with multiple time-scales Eric Vanden-Eijnden Courant Institute Motivating examples Consider the ODE {Ẋ = Y 3 + sin(πt) + cos( 2πt) X() = x

More information

AN EFFICIENT TWO-STAGE SAMPLING METHOD IN PARTICLE FILTER. Qi Cheng and Pascal Bondon. CNRS UMR 8506, Université Paris XI, France.

AN EFFICIENT TWO-STAGE SAMPLING METHOD IN PARTICLE FILTER. Qi Cheng and Pascal Bondon. CNRS UMR 8506, Université Paris XI, France. AN EFFICIENT TWO-STAGE SAMPLING METHOD IN PARTICLE FILTER Qi Cheng and Pascal Bondon CNRS UMR 8506, Université Paris XI, France. August 27, 2011 Abstract We present a modified bootstrap filter to draw

More information

Asymptotic stability of an evolutionary nonlinear Boltzmann-type equation

Asymptotic stability of an evolutionary nonlinear Boltzmann-type equation Acta Polytechnica Hungarica Vol. 14, No. 5, 217 Asymptotic stability of an evolutionary nonlinear Boltzmann-type equation Roksana Brodnicka, Henryk Gacki Institute of Mathematics, University of Silesia

More information

A Spectral Approach to Linear Bayesian Updating

A Spectral Approach to Linear Bayesian Updating A Spectral Approach to Linear Bayesian Updating Oliver Pajonk 1,2, Bojana V. Rosic 1, Alexander Litvinenko 1, and Hermann G. Matthies 1 1 Institute of Scientific Computing, TU Braunschweig, Germany 2 SPT

More information

CERTAIN THOUGHTS ON UNCERTAINTY ANALYSIS FOR DYNAMICAL SYSTEMS

CERTAIN THOUGHTS ON UNCERTAINTY ANALYSIS FOR DYNAMICAL SYSTEMS CERTAIN THOUGHTS ON UNCERTAINTY ANALYSIS FOR DYNAMICAL SYSTEMS Puneet Singla Assistant Professor Department of Mechanical & Aerospace Engineering University at Buffalo, Buffalo, NY-1426 Probabilistic Analysis

More information

Data assimilation in high dimensions

Data assimilation in high dimensions Data assimilation in high dimensions David Kelly Courant Institute New York University New York NY www.dtbkelly.com February 12, 2015 Graduate seminar, CIMS David Kelly (CIMS) Data assimilation February

More information

Gaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012

Gaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012 Gaussian Processes Le Song Machine Learning II: Advanced Topics CSE 8803ML, Spring 01 Pictorial view of embedding distribution Transform the entire distribution to expected features Feature space Feature

More information

Particle Filter Track Before Detect Algorithms

Particle Filter Track Before Detect Algorithms Particle Filter Track Before Detect Algorithms Theory and Applications Y. Boers and J.N. Driessen JRS-PE-FAA THALES NEDERLAND Hengelo The Netherlands Email: {yvo.boers,hans.driessen}@nl.thalesgroup.com

More information

Bayesian room-acoustic modal analysis

Bayesian room-acoustic modal analysis Bayesian room-acoustic modal analysis Wesley Henderson a) Jonathan Botts b) Ning Xiang c) Graduate Program in Architectural Acoustics, School of Architecture, Rensselaer Polytechnic Institute, Troy, New

More information

Introduction to Computational Stochastic Differential Equations

Introduction to Computational Stochastic Differential Equations Introduction to Computational Stochastic Differential Equations Gabriel J. Lord Catherine E. Powell Tony Shardlow Preface Techniques for solving many of the differential equations traditionally used by

More information

Temporal-Difference Q-learning in Active Fault Diagnosis

Temporal-Difference Q-learning in Active Fault Diagnosis Temporal-Difference Q-learning in Active Fault Diagnosis Jan Škach 1 Ivo Punčochář 1 Frank L. Lewis 2 1 Identification and Decision Making Research Group (IDM) European Centre of Excellence - NTIS University

More information

Weakly interacting particle systems on graphs: from dense to sparse

Weakly interacting particle systems on graphs: from dense to sparse Weakly interacting particle systems on graphs: from dense to sparse Ruoyu Wu University of Michigan (Based on joint works with Daniel Lacker and Kavita Ramanan) USC Math Finance Colloquium October 29,

More information

Handbook of Stochastic Methods

Handbook of Stochastic Methods C. W. Gardiner Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences Third Edition With 30 Figures Springer Contents 1. A Historical Introduction 1 1.1 Motivation I 1.2 Some Historical

More information

Numerical Solutions of ODEs by Gaussian (Kalman) Filtering

Numerical Solutions of ODEs by Gaussian (Kalman) Filtering Numerical Solutions of ODEs by Gaussian (Kalman) Filtering Hans Kersting joint work with Michael Schober, Philipp Hennig, Tim Sullivan and Han C. Lie SIAM CSE, Atlanta March 1, 2017 Emmy Noether Group

More information

Beyond Wiener Askey Expansions: Handling Arbitrary PDFs

Beyond Wiener Askey Expansions: Handling Arbitrary PDFs Journal of Scientific Computing, Vol. 27, Nos. 1 3, June 2006 ( 2005) DOI: 10.1007/s10915-005-9038-8 Beyond Wiener Askey Expansions: Handling Arbitrary PDFs Xiaoliang Wan 1 and George Em Karniadakis 1

More information

5 Applying the Fokker-Planck equation

5 Applying the Fokker-Planck equation 5 Applying the Fokker-Planck equation We begin with one-dimensional examples, keeping g = constant. Recall: the FPE for the Langevin equation with η(t 1 )η(t ) = κδ(t 1 t ) is = f(x) + g(x)η(t) t = x [f(x)p

More information

Evolution Strategies Based Particle Filters for Fault Detection

Evolution Strategies Based Particle Filters for Fault Detection Evolution Strategies Based Particle Filters for Fault Detection Katsuji Uosaki, Member, IEEE, and Toshiharu Hatanaka, Member, IEEE Abstract Recent massive increase of the computational power has allowed

More information

Convergence of the Ensemble Kalman Filter in Hilbert Space

Convergence of the Ensemble Kalman Filter in Hilbert Space Convergence of the Ensemble Kalman Filter in Hilbert Space Jan Mandel Center for Computational Mathematics Department of Mathematical and Statistical Sciences University of Colorado Denver Parts based

More information

Algorithms for Uncertainty Quantification

Algorithms for Uncertainty Quantification Algorithms for Uncertainty Quantification Lecture 9: Sensitivity Analysis ST 2018 Tobias Neckel Scientific Computing in Computer Science TUM Repetition of Previous Lecture Sparse grids in Uncertainty Quantification

More information

An Brief Overview of Particle Filtering

An Brief Overview of Particle Filtering 1 An Brief Overview of Particle Filtering Adam M. Johansen a.m.johansen@warwick.ac.uk www2.warwick.ac.uk/fac/sci/statistics/staff/academic/johansen/talks/ May 11th, 2010 Warwick University Centre for Systems

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation

More information