Uncertainty Propagation for Nonlinear Dynamical Systems using Gaussian Mixture Models
Gabriel Terejanu, Puneet Singla, Tarunraj Singh, Peter D. Scott

Abstract: A Gaussian mixture model approach is proposed for accurate uncertainty propagation through a general nonlinear system. The transition probability density function is approximated by a finite sum of Gaussian density functions whose parameters (mean and covariance) are propagated using linear propagation theory. Two different approaches are introduced to update the weights of the components of a Gaussian mixture model during uncertainty propagation through a nonlinear system. The first method updates the weights so as to minimize the integral square difference between the true forecast probability density function and its Gaussian sum approximation. The second method uses the Fokker-Planck equation error as feedback to adapt the amplitudes of the Gaussian components while solving a quadratic programming problem. The proposed methods are applied to a variety of problems from the open literature and are argued to be excellent candidates for higher dimensional uncertainty propagation problems.

Introduction

The state of a stochastic dynamic system, x, is characterized by its time dependent probability density function (pdf), p(t, x). The random nature of the state may be due to uncertain initial conditions and/or random excitation driving the dynamic system. Knowledge of the time evolution of the pdf of the state of a dynamic system is important for quantifying the uncertainty in the state at a future time. Numerous fields of science and engineering present the problem of pdf evolution through nonlinear dynamic systems with stochastic excitation and uncertain initial conditions.1,2 This is a difficult problem and has received much attention over the past century.
One may be interested in the determination of the response of engineering structures such as beams, plates, and entire buildings under random excitation (in structural mechanics3), the propagation of initial condition uncertainty of an asteroid for the determination of its probability of collision with a planet (in astrodynamics4), the motion of particles under the influence of stochastic force fields (in particle physics5), or simply the computation of the prediction step in the design of a Bayes filter (in filtering theory6). All these applications require the study of the time evolution of the pdf, p(t, x), corresponding to the state, x, of the relevant dynamic system.

PhD Student, Department of Computer Science & Engineering, University at Buffalo, Buffalo, NY-14260, terejanu@buffalo.edu. Assistant Professor, AAS Member, Department of Mechanical & Aerospace Engineering, University at Buffalo, Buffalo, NY-14260, psingla@buffalo.edu. Professor, Department of Mechanical & Aerospace Engineering, University at Buffalo, Buffalo, NY-14260, tsingh@buffalo.edu. Professor, Department of Computer Science & Engineering, University at Buffalo, Buffalo, NY-14260, peter@buffalo.edu.

For nonlinear systems, the exact description of the transition pdf is provided by a linear partial differential equation (pde) known as the Fokker-Planck-Kolmogorov Equation (FPKE), or simply the Fokker-Planck Equation (FPE).2 Analytical solutions exist only for stationary pdfs and are restricted to a limited class of dynamical systems.1,2 Thus researchers are actively looking at numerical approximations to solve the Fokker-Planck equation,7 generally using the variational formulation of the problem. The Finite Difference Method (FDM) and the Finite Element Method (FEM) have been used successfully for two and even three dimensional systems. However, these methods are severely handicapped in higher dimensions because the generation of meshes for spaces beyond three dimensions is still impractical. Furthermore, since these techniques rely on the FPE, they can only be applied to continuous-time dynamical systems. For discrete-time dynamical systems there is no equation describing the evolution of the actual transition pdf, so we can only rely on approximations in this case. Several other approximate techniques exist in the literature to attack the pdf evolution problem, the most popular being Monte Carlo methods,11 Gaussian closure12 (or higher order moment closure), Equivalent Linearization,13 and Stochastic Averaging.14,15 Monte Carlo methods require extensive computational resources and effort, and become increasingly infeasible for high-dimensional dynamic systems. The latter three are similar in several respects and are suitable only for linear or moderately nonlinear systems, because the effect of higher order terms can lead to significant errors.
Furthermore, all these approaches provide only an approximate description of the uncertainty propagation problem by restricting the solution to a small number of parameters, for instance the first N moments of the sought pdf. The present paper is concerned with improving the approximation of the forecast density function using a finite Gaussian mixture. With a sufficient number of Gaussian components, any pdf may be approximated as closely as desired. Many algorithms16,17 have been discussed in the literature for the time evolution of the initial state pdf through a nonlinear dynamical system in discrete or continuous time. In conventional methods, the weights of the components of a Gaussian mixture are kept constant while propagating the uncertainty through a nonlinear system and are updated only in the presence of measurement data.16,17 This assumption is valid if the underlying dynamics is linear or only marginally nonlinear. The same is not true for the general nonlinear case, where new estimates of the weights are required for accurate propagation of the state pdf. However, the existing literature provides no means of adapting the weights of the Gaussian components in the mixture model during the propagation of the state pdf. The lack of adaptive algorithms for the weights of a Gaussian mixture is felt to be a serious disadvantage of existing algorithms and provides the motivation for this paper.
In this paper, two different schemes are introduced to update the weights of the components of a Gaussian mixture model during pure propagation. The first method updates the forecast weights by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation. The second method updates the weights by constraining the Gaussian sum approximation to satisfy the Fokker-Planck equation for continuous-time dynamical systems. Both methods lead to a convex quadratic programming problem and hence are computationally efficient. The structure of the paper is as follows: first, an introduction to the conventional Gaussian sum approximation is presented, followed by two new schemes for updating the weights of the components of a Gaussian mixture. Finally, several numerical examples are considered to illustrate the efficacy of the proposed methods.

Update Scheme for Discrete-Time Dynamic Systems

Problem Statement

Consider the following nonlinear discrete-time dynamical system driven by white noise:

x_{k+1} = f(k, x_k) + \eta_k  (1)
\eta_k \sim \mathcal{N}(0, Q_k)  (2)

with the probability density function of the state at time t_k given by the following Gaussian sum:

p(t_k, x_k) = \sum_{i=1}^{N} w_k^i \, \mathcal{N}(x_k - \mu_k^i, P_k^i)  (3)

where

\mathcal{N}(x - \mu, P) = |2\pi P|^{-1/2} \exp\left[-\tfrac{1}{2}(x - \mu)^T P^{-1}(x - \mu)\right]  (4)

We are interested in approximating the probability density function p(t_{k+1}, x_{k+1}) as a Gaussian mixture. A linear mapping transforms a Gaussian mixture into another Gaussian mixture whose component parameters are easily computed, but the outcome of a Gaussian that undergoes a nonlinear transformation is generally non-Gaussian. By linearizing the nonlinear transformation, a Gaussian sum approximation to the forecast density function p(t_{k+1}, x_{k+1}) is obtained:

\hat{p}(t_{k+1}, x_{k+1}) = \sum_{i=1}^{N} w_{k+1}^i \, \mathcal{N}(x_{k+1} - \mu_{k+1}^i, P_{k+1}^i)  (5)

where the reference values of the parameters of the Gaussian components are given by the time update step
of the Kalman filter:

w_{k+1}^i = w_k^i  (6)
\mu_{k+1}^i = f(k, \mu_k^i)  (7)
P_{k+1}^i = A_k(\mu_k^i) \, P_k^i \, A_k^T(\mu_k^i) + Q_k  (8)

where

A_k(x_k) = \frac{\partial f(k, x_k)}{\partial x_k}  (9)

Evolution schemes other than linearization by a Taylor series, such as statistical linearization, may be used to obtain the moments of the Gaussian components. Since they imply linearizations, such approximations are computationally convenient and draw directly on the intensively studied linear theory. The reason the weights are not changed is that the covariances are assumed small enough16 that the linearizations are representative of the dynamics around the means. To better understand this assumption, consider the case where the dynamical system is not driven by noise, the nonlinear mapping is invertible, and we want to propagate our Gaussian sum for just one time step. Then the unnormalized density function18 after one iteration is given by:

p(t_{k+1}, x_{k+1}) = p(t_k, x_k) \, |\det A_k(x_k)|^{-1}  (10)
= \sum_{i=1}^{N} w_k^i \, \mathcal{N}(x_k - \mu_k^i, P_k^i) \, |\det A_k(x_k)|^{-1}  (11)

Assuming that all covariances are small enough that the linearization around each mean is representative of the dynamics in the vicinity of that mean, then

p(t_{k+1}, x_{k+1}) \approx \sum_{i=1}^{N} w_k^i \, \mathcal{N}(x_k - \mu_k^i, P_k^i) \, |\det A_k(\mu_k^i)|^{-1}  (12)
= \sum_{i=1}^{N} w_k^i \, \mathcal{N}(x_{k+1} - \mu_{k+1}^i, P_{k+1}^i)  (13)

In practice this assumption may easily be violated, resulting in a poor approximation of the forecast pdf. The dynamic system may exhibit strong nonlinearities, and the total number of Gaussian components needed to represent the pdf may be restricted by computational requirements. The existing literature provides no means of adapting the weights of the Gaussian components in the mixture model during the propagation of the state pdf; this lack of adaptation is a serious disadvantage of existing algorithms and provides the motivation for this paper.
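The time update step (6)-(8) for a single mixture component takes only a few lines. In the sketch below, the scalar map and its Jacobian are illustrative stand-ins chosen for the example, not the paper's test system:

```python
import numpy as np

def propagate_component(mu, P, f, jac_f, Q):
    """One linearized time-update step, Eqs. (6)-(8): the weight is left
    unchanged, the mean is pushed through the nonlinear map, and the
    covariance is propagated through the Jacobian evaluated at the mean."""
    mu_next = f(mu)                      # Eq. (7)
    A = jac_f(mu)                        # Eq. (9)
    P_next = A @ P @ A.T + Q             # Eq. (8)
    return mu_next, P_next

# Illustrative scalar map f(x) = x/2 (an assumption for this sketch)
f = lambda x: 0.5 * x
jac = lambda x: np.array([[0.5]])
mu1, P1 = propagate_component(np.array([1.0]), np.array([[0.2]]),
                              f, jac, np.array([[0.01]]))
```

Each mixture component is propagated independently in this fashion; only the weight update discussed next couples the components.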
Weight Update I

A better approximation of the forecast pdf may be obtained by updating the forecast weights. The new weights may be obtained in the least squares sense by minimizing the following integral square difference between the true density and its approximation:

\min_{w_{k+1}^i} \; \frac{1}{2} \int |p(t_{k+1}, x_{k+1}) - \hat{p}(t_{k+1}, x_{k+1})|^2 \, dx_{k+1}  (14)
\text{s.t.} \; \sum_{i=1}^{N} w_{k+1}^i = 1, \quad w_{k+1}^i \geq 0, \; i = 1, \ldots, N

Here, no a priori structure is assumed for the true density function p(t_{k+1}, x_{k+1}); it is completely unknown. The expression (10) is only used to motivate the introduction of an update scheme for the forecast weights. By substituting (5) in (14) and expanding and grouping the terms of the cost function with respect to the weights, the new cost function is given by

J = \frac{1}{2} w_{k+1}^T M w_{k+1} - w_{k+1}^T y  (15)

where w_{k+1} = [w_{k+1}^1 \; w_{k+1}^2 \; \ldots \; w_{k+1}^N]^T, and M is the N \times N symmetric matrix given by

M = \int \mathcal{M}(x_{k+1}) \, \mathcal{M}^T(x_{k+1}) \, dx_{k+1}  (16)

where \mathcal{M} is the N \times 1 vector that contains all the Gaussian components at time k+1:

\mathcal{M}(x_{k+1}) = \left[\mathcal{N}(x_{k+1} - \mu_{k+1}^1, P_{k+1}^1) \; \mathcal{N}(x_{k+1} - \mu_{k+1}^2, P_{k+1}^2) \; \ldots \; \mathcal{N}(x_{k+1} - \mu_{k+1}^N, P_{k+1}^N)\right]^T  (17)

Thus, the components of the matrix M are easily given by the product rule of two Gaussian density functions, which yields another Gaussian density function. By integrating the product we are left only with the
normalization constant:

m_{ij} = \int \mathcal{N}(x_{k+1} - \mu_{k+1}^i, P_{k+1}^i) \, \mathcal{N}(x_{k+1} - \mu_{k+1}^j, P_{k+1}^j) \, dx_{k+1}, \quad i \neq j  (18)
= \mathcal{N}(\mu_{k+1}^i - \mu_{k+1}^j, P_{k+1}^i + P_{k+1}^j) \int \mathcal{N}\left(x_{k+1} - P_{k+1}^{ij}\left[(P_{k+1}^i)^{-1}\mu_{k+1}^i + (P_{k+1}^j)^{-1}\mu_{k+1}^j\right], P_{k+1}^{ij}\right) dx_{k+1}
= \mathcal{N}(\mu_{k+1}^i - \mu_{k+1}^j, P_{k+1}^i + P_{k+1}^j)
= |2\pi(P_{k+1}^i + P_{k+1}^j)|^{-1/2} \exp\left[-\tfrac{1}{2}(\mu_{k+1}^i - \mu_{k+1}^j)^T (P_{k+1}^i + P_{k+1}^j)^{-1}(\mu_{k+1}^i - \mu_{k+1}^j)\right]  (19)
m_{ii} = \mathcal{N}(\mu_{k+1}^i - \mu_{k+1}^i, P_{k+1}^i + P_{k+1}^i) = |4\pi P_{k+1}^i|^{-1/2}

where

P_{k+1}^{ij} = \left[(P_{k+1}^i)^{-1} + (P_{k+1}^j)^{-1}\right]^{-1}  (20)

The components of the N \times 1 vector y are given by:

y_i = \int p(t_{k+1}, x_{k+1}) \, \mathcal{N}(x_{k+1} - \mu_{k+1}^i, P_{k+1}^i) \, dx_{k+1}, \quad i = 1 \ldots N  (21)
= \int p(t_k, x_k) \, \mathcal{N}(f(k, x_k) - \mu_{k+1}^i, P_{k+1}^i) \, dx_k  (22)
\approx \int \hat{p}(t_k, x_k) \, \mathcal{N}(f(k, x_k) - \mu_{k+1}^i, P_{k+1}^i) \, dx_k  (23)
= \sum_{j=1}^{N} w_k^j \int \mathcal{N}(x_k - \mu_k^j, P_k^j) \, \mathcal{N}(f(k, x_k) - \mu_{k+1}^i, P_{k+1}^i) \, dx_k  (24)

The transition from (21) to (22) is given by the Law of the Unconscious Statistician, thereby avoiding dealing with the unknown true forecast pdf at the cost of computing the expectation of a composite function (22). We assume that we have a good Gaussian sum approximation at time t_k, such that \hat{p}(t_k, x_k) \approx p(t_k, x_k). If this is not the case, we can apply the Law of the Unconscious Statistician (LOTUS)19 repeatedly until we reach a good approximation at an earlier time t_{k-p}:

y_i = \int p(t_k, x_k) \, \mathcal{N}(f(k, x_k) - \mu_{k+1}^i, P_{k+1}^i) \, dx_k  (25)
= \int p(t_{k-1}, x_{k-1}) \, \mathcal{N}(f(k, f(k-1, x_{k-1})) - \mu_{k+1}^i, P_{k+1}^i) \, dx_{k-1}  (26)
= \int p(t_{k-p}, x_{k-p}) \, \mathcal{N}((f \circ f \circ \ldots \circ f)(k-p, x_{k-p}) - \mu_{k+1}^i, P_{k+1}^i) \, dx_{k-p}  (27)
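The closed form (18)-(19) is easy to check numerically. A minimal sketch (the function name is ours):

```python
import numpy as np

def gauss_product_constant(mu_i, P_i, mu_j, P_j):
    """m_ij of Eqs. (18)-(19): the integral of the product of two Gaussian
    densities collapses to a single Gaussian density evaluated at the
    difference of the means, with covariance P_i + P_j."""
    d = np.asarray(mu_i) - np.asarray(mu_j)
    S = np.asarray(P_i) + np.asarray(P_j)
    norm = 1.0 / np.sqrt(np.linalg.det(2.0 * np.pi * S))
    return norm * np.exp(-0.5 * d @ np.linalg.solve(S, d))

# A diagonal entry m_ii reduces to |4*pi*P_i|^(-1/2)
m_ii = gauss_product_constant([0.0], [[0.5]], [0.0], [[0.5]])
```

The same helper fills the whole matrix M, since every entry is of this form.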
Using (24), the cost function may be rewritten with the new y as

J = \frac{1}{2} w_{k+1}^T M w_{k+1} - w_{k+1}^T N w_k  (28)

where w_k = [w_k^1 \; w_k^2 \; \ldots \; w_k^N]^T is the prior weight vector, and the N \times N matrix N has the components:

n_{ij} = \int \mathcal{N}(f(k, x_k) - \mu_{k+1}^i, P_{k+1}^i) \, \mathcal{N}(x_k - \mu_k^j, P_k^j) \, dx_k  (29)
= E_{\mathcal{N}(x_k - \mu_k^j, P_k^j)}\left[\mathcal{N}(f(k, x_k) - \mu_{k+1}^i, P_{k+1}^i)\right]  (30)

The expectations (30) may be computed using Gaussian quadrature, Monte Carlo integration, or the Unscented Transformation.20 While in lower dimensions the unscented transformation is essentially equivalent to Gaussian quadrature, in higher dimensions the unscented transformation is computationally more appealing for evaluating integrals, since the number of points grows only linearly with the number of dimensions. However, this comes with a loss in accuracy,21 hence the need for a larger number of points22 to capture additional information. In the case of a linear transformation, M = N, and hence no change will occur to the weights. However, the approximations made in computing the expectations (30) may not give a perfect match between the two matrices, thus deteriorating the overall results in this case. Hence, such an update scheme is recommended only for nonlinear systems. The final formulation of the optimization (14) may be posed in the quadratic programming framework and solved using readily available solvers:

\min_{w_{k+1}} \; J = \frac{1}{2} w_{k+1}^T M w_{k+1} - w_{k+1}^T N w_k  (31)
\text{s.t.} \; 1_{N \times 1}^T w_{k+1} = 1, \quad w_{k+1} \geq 0_{N \times 1}

Here 1_{N \times 1} is a vector of ones and 0_{N \times 1} is a vector of zeros. Now, if we show that the matrix M is positive semi-definite and the cost function J is lower bounded, then the aforementioned optimization problem can be posed as a convex optimization problem and we are guaranteed to have a unique solution.23 Notice that the integrand \mathcal{M}(x_{k+1}^i)\mathcal{M}^T(x_{k+1}^i) is a rank one symmetric positive semi-definite matrix with non-zero eigenvalue equal to \mathcal{M}^T(x_{k+1}^i)\mathcal{M}(x_{k+1}^i).
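The QP (31) can be handed to any off-the-shelf solver; the sketch below uses SciPy's SLSQP routine (the solver choice is ours, not prescribed by the paper):

```python
import numpy as np
from scipy.optimize import minimize

def update_weights(M, N_mat, w_k):
    """Solve (31): min 0.5 w^T M w - w^T N w_k  s.t.  sum(w) = 1, w >= 0."""
    y = N_mat @ w_k
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    res = minimize(lambda w: 0.5 * w @ M @ w - w @ y, w_k,
                   jac=lambda w: M @ w - y,
                   bounds=[(0.0, 1.0)] * len(w_k),
                   constraints=cons, method='SLSQP')
    return res.x

# For a linear map M = N, so the prior weights are already optimal
w = update_weights(np.eye(2), np.eye(2), np.array([0.3, 0.7]))
```

Warm-starting the solver at the prior weights w_k is natural here, since for mildly nonlinear dynamics the optimal weights stay close to the prior.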
Further, we write the integral (16) as an infinite sum as follows:

M \approx \sum_i \mathcal{M}(x_{k+1}^i) \, \mathcal{M}^T(x_{k+1}^i)  (32)
Making use of the fact that the sum of two positive semi-definite matrices is also positive semi-definite, we conclude that the matrix M is positive semi-definite, since M is a sum of positive semi-definite matrices. Further, notice that the linear term w_{k+1}^T N w_k in the cost function J is lower bounded, since N \geq 0 and both w_{k+1} and w_k have to lie between 0 and 1. Hence, we can conclude that the cost function J is always bounded below and we are guaranteed to have a unique solution.

Update Scheme for Continuous-Time Dynamic Systems

Problem Statement

Consider a general n-dimensional noise driven nonlinear dynamic system with uncertain initial conditions, given by the equation:

\dot{x} = f(t, x) + g(t, x)\Gamma(t)  (33)

where \Gamma(t) represents a Gaussian white noise process with correlation function Q\delta(t_1 - t_2), and the initial state uncertainty is captured by the pdf p(t_0, x). Then the time evolution of p(t, x) is described by the following FPE, a second order pde in p(t, x):

\frac{\partial}{\partial t} p(t, x) = \mathcal{L}_{FP} \, p(t, x)  (34)

where

\mathcal{L}_{FP} = -\sum_{i=1}^{n} \frac{\partial}{\partial x_i}\left[D_i^{(1)}(\cdot)\right] + \sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\partial^2}{\partial x_i \partial x_j}\left[D_{ij}^{(2)}(\cdot)\right]  (35)
D^{(1)}(t, x) = f(t, x) + \frac{1}{2} \frac{\partial}{\partial x}\left[g(t, x) \, Q \, g^T(t, x)\right]  (36)
D^{(2)}(t, x) = \frac{1}{2} g(t, x) \, Q \, g^T(t, x)  (37)

Despite its innocuous appearance, the FPE is a formidable problem to solve, because of the following issues:
1. Positivity of the pdf: p(t, x) \geq 0, \forall t, x
2. Normalization constraint of the pdf: \int p(t, x) \, dV = 1
3. No fixed solution domain: how to impose boundary conditions in a finite region and restrict numerical computation to regions where p > 0
Let us assume that the underlying pdf can be approximated by a finite sum of Gaussian pdfs:

\hat{p}(t, x) = \sum_{i=1}^{N} w_i \, p_{g_i}, \quad p_{g_i} = \mathcal{N}(x(t) - \mu_i, P_i)  (38)

where \mu_i and P_i represent the mean and covariance of the i-th Gaussian component, respectively, and w_i denotes the amplitude of the i-th Gaussian in the mixture. The positivity and normalization constraints on the mixture pdf \hat{p}(t, x) lead to the following constraints on the amplitude vector:

\sum_{i=1}^{N} w_i = 1, \quad w_i \geq 0  (39)

Since all the components of the mixture pdf (38) are Gaussian, only estimates of their mean and covariance need to be maintained, and these can be propagated using the celebrated Kalman filter time update equations:

\dot{\mu}_i = f(t, \mu_i)  (40)
\dot{P}_i = A_i P_i + P_i A_i^T + g(t, \mu_i) \, Q \, g^T(t, \mu_i)  (41)

where

A_i = \left.\frac{\partial f(t, x)}{\partial x}\right|_{x = \mu_i}  (42)

Notice that the weights w_i corresponding to each Gaussian component are unknown. In the conventional Gaussian sum filter the weights are initialized such that the initial mixture pdf approximates the given initial pdf, and it is assumed that the weights do not change over time. This assumption is valid if the underlying dynamics is linear or only marginally nonlinear. The same is not true for the general nonlinear case, where new estimates of the weights are required for accurate propagation of the state pdf.

Weight Update II

In this section, a novel method is described to update the weights of the components of the Gaussian mixture (38). The main idea is that the mixture pdf (38), \hat{p}(t, x), should satisfy the Fokker-Planck equation (34), and the Fokker-Planck equation error can be used as feedback to update the weights of the Gaussian components in the mixture pdf.24 In other words, we seek to minimize the Fokker-Planck equation error under the assumptions (38), (40) and (41).
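The moment equations (40)-(41) are ordinary differential equations, so any standard integrator applies. Below is a minimal fixed-step Euler sketch; the linear test system in the example is an assumption chosen for illustration:

```python
import numpy as np

def propagate_moments(mu, P, f, jac_f, g, Q, dt, steps):
    """Euler integration of Eqs. (40)-(41): mu_dot = f(t, mu) and
    P_dot = A P + P A^T + g Q g^T, with A the Jacobian at the mean.
    A fixed-step sketch; a higher-order integrator may be preferable."""
    for _ in range(steps):
        A = jac_f(mu)
        G = g(mu)
        P = P + dt * (A @ P + P @ A.T + G @ Q @ G.T)
        mu = mu + dt * f(mu)
    return mu, P

# Linear test case x_dot = -x + noise: the covariance settles near Q/2
mu, P = propagate_moments(np.array([1.0]), np.array([[0.0]]),
                          lambda x: -x, lambda x: np.array([[-1.0]]),
                          lambda x: np.array([[1.0]]), np.array([[1.0]]),
                          dt=0.01, steps=2000)
```

For the linear test case the Lyapunov fixed point of (41) is known in closed form, which makes it a convenient sanity check for the integrator.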
The substitution of (38) in (34) leads to the error

e(t, x) = \frac{\partial \hat{p}(t, x)}{\partial t} - \mathcal{L}_{FP}(\hat{p}(t, x))  (43)
= \frac{\partial \hat{p}(t, x)}{\partial t} + \sum_{i=1}^{n} \frac{\partial}{\partial x_i}\left[D_i^{(1)}(t, x) \, \hat{p}(t, x)\right] - \sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\partial^2}{\partial x_i \partial x_j}\left[D_{ij}^{(2)}(t, x) \, \hat{p}(t, x)\right]  (44)

where \mathcal{L}_{FP}(\cdot) is the so called Fokker-Planck operator, and

\frac{\partial \hat{p}(t, x)}{\partial t} = \sum_{i=1}^{N} w_i \left[\left(\frac{\partial p_{g_i}}{\partial \mu_i}\right)^T \dot{\mu}_i + \sum_{j=1}^{n}\sum_{k=1}^{n} \frac{\partial p_{g_i}}{\partial P_{i,jk}} \dot{P}_{i,jk}\right]  (45)

where P_{i,jk} is the jk-th element of the i-th covariance matrix P_i. Further, substitution of (36) and (37) along with (45) in (44) leads to

e(t, x) = \sum_{i=1}^{N} w_i L_i(t, x) = \mathcal{L}^T w  (46)

where w is the N \times 1 vector of Gaussian weights, \mathcal{L} is the N \times 1 vector with components L_i, and L_i is given by

L_i(t, x) = \left(\frac{\partial p_{g_i}}{\partial \mu_i}\right)^T f(t, \mu_i) + \sum_{j=1}^{n}\sum_{k=1}^{n} \frac{\partial p_{g_i}}{\partial P_{i,jk}} \dot{P}_{i,jk} + \sum_{j=1}^{n} \frac{\partial}{\partial x_j}\left[d_j^{(1)}(t, x) \, p_{g_i}\right] - \sum_{j=1}^{n}\sum_{k=1}^{n} \frac{\partial^2}{\partial x_j \partial x_k}\left[d_{jk}^{(2)}(t, x) \, p_{g_i}\right] + \sum_{j=1}^{n} \left(f_j(t, x) \frac{\partial p_{g_i}}{\partial x_j} + p_{g_i} \frac{\partial f_j(t, x)}{\partial x_j}\right)  (47)

d^{(1)}(t, x) and d^{(2)}(t, x) are given by:

d^{(1)}(t, x) = \frac{1}{2} \frac{\partial}{\partial x}\left[g(t, x) \, Q \, g^T(t, x)\right]  (48)
d^{(2)}(t, x) = \frac{1}{2} g(t, x) \, Q \, g^T(t, x)  (49)
Further, the different derivatives in the above equation can be computed using the following analytical formulas:

\frac{\partial p_{g_i}}{\partial \mu_i} = P_i^{-1}(x - \mu_i) \, p_{g_i}  (50)
\frac{\partial p_{g_i}}{\partial P_i} = -\frac{p_{g_i}}{2|P_i|} \frac{\partial |P_i|}{\partial P_i} - \frac{p_{g_i}}{2} \frac{\partial}{\partial P_i}\left[(x - \mu_i)^T P_i^{-1}(x - \mu_i)\right]  (51)
= -\frac{p_{g_i}}{2} P_i^{-1} - \frac{p_{g_i}}{2} \frac{\partial}{\partial P_i}\left[(x - \mu_i)^T P_i^{-1}(x - \mu_i)\right]  (52)
= -\frac{p_{g_i}}{2} P_i^{-1} + \frac{p_{g_i}}{2} P_i^{-1}(x - \mu_i)(x - \mu_i)^T P_i^{-1}  (53)
\frac{\partial p_{g_i}}{\partial x} = -P_i^{-1}(x - \mu_i) \, p_{g_i}  (54)
\frac{\partial^2 p_{g_i}}{\partial x \, \partial x^T} = -P_i^{-1}\left[p_{g_i} I + (x - \mu_i) \left(\frac{\partial p_{g_i}}{\partial x}\right)^T\right]  (55)

Now, at a given time instant, after propagating the mean \mu_i and covariance P_i of the individual Gaussian components using (40) and (41), we seek to update the weights by minimizing the FPE error over some volume of interest V:

\min_{w_i} \; \frac{1}{2} \int_V e^2(t, x) \, dV  (56)
\text{s.t.} \; \sum_{i=1}^{N} w_i = 1, \quad w_i \geq 0, \; i = 1, \ldots, N

Since the Fokker-Planck equation error (46) is linear in the Gaussian weights w_i, the aforementioned problem can be written as a quadratic programming problem:

\min_w \; \frac{1}{2} w^T \mathbf{L} w  (57)
\text{s.t.} \; 1_{N \times 1}^T w = 1, \quad w \geq 0_{N \times 1}

where 1_{N \times 1} is a vector of ones, 0_{N \times 1} is a vector of zeros, and \mathbf{L} is given by

\mathbf{L} = \int_V \mathcal{L}(x) \, \mathcal{L}^T(x) \, dV, \quad \mathbf{L}_{ij} = \int_V L_i(x) L_j(x) \, dV  (58)
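The matrix of (58) can be assembled on a grid exactly as suggested by the sum approximation that follows; a sketch (function name ours), shown with simple stand-in functions in place of the L_i:

```python
import numpy as np

def assemble_L(L_funcs, grid, dv):
    """Riemann-sum approximation of Eq. (58): L ~ sum_i L(x_i) L(x_i)^T dV.
    Each outer product is a rank-one PSD matrix, so the accumulated sum
    is positive semi-definite by construction."""
    N = len(L_funcs)
    Lmat = np.zeros((N, N))
    for x in grid:
        Lx = np.array([Lf(x) for Lf in L_funcs])
        Lmat += np.outer(Lx, Lx) * dv
    return Lmat

# Stand-in component functions on a 1-D volume of interest [-1, 1]
grid = np.linspace(-1.0, 1.0, 201)
Lmat = assemble_L([lambda x: x, lambda x: x * x], grid, dv=0.01)
```

In practice the L_i involve Gaussian pdfs, so the grid (or a Gaussian quadrature rule) should be concentrated where the mixture has appreciable mass.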
Now, if we show that the matrix \mathbf{L} is positive semi-definite, then the aforementioned optimization problem can be posed as a convex optimization problem and we are guaranteed to have a unique solution. Notice that the integrand \mathcal{L}(x)\mathcal{L}^T(x) is a rank one symmetric positive semi-definite matrix with non-zero eigenvalue equal to \mathcal{L}^T(x)\mathcal{L}(x). Further, the integral in the definition of the matrix \mathbf{L} can be approximated by an infinite sum of positive semi-definite matrices:

\int_V \mathcal{L}(x) \, \mathcal{L}^T(x) \, dV \approx \sum_i \mathcal{L}(x^i) \, \mathcal{L}^T(x^i)  (59)

Making use of the fact that the sum of two positive semi-definite matrices is also positive semi-definite, we conclude that the matrix \mathbf{L} is positive semi-definite, and thus the optimization problem (57) has a unique solution. Notice that, to carry out this minimization, we need to evaluate integrals involving Gaussian pdfs over the volume V; these can be computed exactly for polynomial nonlinearities and in general can be approximated by the Gaussian quadrature method.

Numerical Results

The two update schemes are compared with the usual procedure of not updating the weights on several dynamical systems for which either a closed form solution for the stationary pdf is available or a Monte Carlo simulation is easily carried out. For the examples where an analytical solution is available, the following integral of the absolute error has been used to compare the methods:

E = \int |p(t, x) - \hat{p}(t, x)| \, dx  (60)

The expectation integrals needed by the first approach have been computed using either the Unscented Transformation20 or the Continuous Square-Root Unscented Transformation25 with a 6n+1 sigma point selection scheme.22 For the second method, the expectations have been computed using the Gaussian quadrature method.

Example 1

The first update method has been applied to the following nonlinear discrete-time dynamical system, with uncertain initial condition given by (62):

x_{k+1} = \frac{1}{2} x_k + \frac{x_k}{1 + x_k^2} + \eta_k, \quad \eta_k \sim \mathcal{N}(0, 1)  (61)
x_0 \sim 0.1\,\mathcal{N}(0.5, 0.1) + 0.9\,\mathcal{N}(0.5, 1)  (62)
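Since the forecast pdf of a nonlinear map like (61) has no closed form, a Monte Carlo histogram serves as the reference. The sketch below uses an illustrative bistable map with stable fixed points at x = -1 and x = +1, together with a small process noise; both the map and the noise level are assumptions of the sketch, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative bistable map (assumption for this sketch): two stable fixed
# points at x = -1 and x = +1, so the forecast pdf becomes bimodal.
def f(x):
    return 0.5 * x + x / (1.0 + x * x)

# Draw initial samples from a two-component Gaussian mixture
n = 10_000
pick = rng.random(n) < 0.5
x = np.where(pick, rng.normal(0.5, 0.3, n), rng.normal(-0.5, 0.3, n))

# Propagate and collect a histogram as the reference "true" forecast pdf
for _ in range(20):
    x = f(x) + rng.normal(0.0, 0.1, n)     # small assumed process noise
hist, edges = np.histogram(x, bins=50, density=True)
```

A fixed mixture with frozen weights cannot track how the mass redistributes between the two basins of attraction, which is exactly what the weight update is meant to correct.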
The moments of the two Gaussian components are propagated for 20 time steps using (7) and (8). Since there is no analytical solution for the forecast pdf, we run a Monte Carlo simulation and compute the histogram of the samples at each time step, Fig. 1(a). Fig. 1(b) shows the pdf approximation without updating the weights, and Fig. 1(c) plots the pdf approximation with updated weights. By updating the weights, we are able to better capture the two modes seen in Fig. 1(a). In addition, the log probability of the Monte Carlo sample points was computed according to the following relationship:

L = \sum_{j=1}^{M} \log \sum_{i=1}^{N} w_i \, \mathcal{N}(x_j - \mu_i, P_i)  (63)

Here, M is the total number of samples used in the Monte Carlo approximation. A higher value of L represents a better pdf approximation. Fig. 1(d) shows the plot of the log probability of the samples with and without updating the weights of the Gaussian mixture. As expected, updating the weights of the Gaussian mixture leads to a higher log probability of the samples. Hence, we conclude that the adaptation of the weights during propagation leads to a more accurate pdf approximation than propagation without weight update.

Example 2

For the second example we consider the following continuous-time dynamical system, with uncertain initial condition given by (65):

\dot{x} = \sin(x) + \Gamma(t), \quad Q = 1  (64)
x_0 \sim 0.1\,\mathcal{N}(0.5, 0.1) + 0.9\,\mathcal{N}(0.2, 1)  (65)

The moments of the two Gaussian components are propagated for 15 sec using (40) and (41). Again, since there is no analytical solution for the transition pdf, we run a Monte Carlo simulation and compute the histogram of the samples at each time step (\Delta t = 1 sec), Fig. 2(a). Fig. 2(b) shows the pdf approximation without updating the weights, and Figs. 2(c) and 2(d) show the approximated pdf using the two methods introduced in this paper. It is clear from these plots that we are able to better capture the transition pdf with the incorporation of a weight update scheme.
Also, the shape of the approximated pdf is in accordance with the histograms generated by the Monte Carlo simulations and is consistent with the behavior of the dynamical system, which has two attractors, at -\pi and \pi.
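The score (63) is straightforward to evaluate. A scalar (1-D) sketch, with the helper name ours:

```python
import numpy as np

def log_probability(samples, w, mu, P):
    """Eq. (63): sum over samples of the log of the mixture density.
    Scalar version; w, mu, P are arrays over the N mixture components."""
    w, mu, P = map(np.asarray, (w, mu, P))
    total = 0.0
    for x in np.asarray(samples):
        dens = np.sum(w * np.exp(-0.5 * (x - mu) ** 2 / P)
                      / np.sqrt(2.0 * np.pi * P))
        total += np.log(dens)
    return total

# Single standard-normal component, single sample at the mean
L = log_probability([0.0], [1.0], [0.0], [1.0])
```

For many components or samples, evaluating the per-component log densities and combining them with a log-sum-exp is numerically safer than the direct product above.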
[Figure 1. Numerical Results, Example 1: (a) histogram of Monte Carlo samples (approximate true pdf); (b) transition pdf without weight update; (c) transition pdf with weight update; (d) log probability of the samples with and without weight update.]
[Figure 2. Numerical Results, Example 2: (a) histogram of Monte Carlo samples (approximate true pdf); (b) transition pdf without weight update; (c) transition pdf with Weight Update I; (d) transition pdf with Weight Update II.]
Example 3

We consider the following 2-D nonlinear dynamical system for analysis:

\ddot{x} + \eta\dot{x} + \alpha x + \beta x^3 = g(t)\Gamma(t)  (66)

Eq. (66) represents a noise driven Duffing oscillator with a soft spring (\alpha\beta < 0, \eta > 0) and damping, included to ensure the presence of a stationary solution, a bimodal pdf. For simulation purposes, we use g(t) = 1, Q = 1, \eta = 1, \alpha = -1, \beta = 3, and the stationary probability density function (67) is plotted in Fig. 3(b).

p(x, \dot{x}) \propto \exp\left(-\frac{2\eta}{g^2 Q}\left(\frac{\alpha}{2} x^2 + \frac{\beta}{4} x^4 + \frac{1}{2}\dot{x}^2\right)\right)  (67)

Notice that the stationary pdf is an exponential function of the steady state system Hamiltonian, scaled by the parameter 2\eta/(g^2 Q).1 To approximate the stationary pdf given by (67), we assume the initial pdf to be a mixture of 100 Gaussian pdfs with centers uniformly distributed between (-5, -5) and (5, 5). The covariance matrix for each Gaussian component is chosen to be P = 1 \cdot I_{2\times2}, where I_{2\times2} is the 2 \times 2 identity matrix, and the initial weights of the Gaussian components are randomly assigned a number between 0 and 1. The weights have been randomly generated in 20 different runs (see Fig. 3(i) for the weights of each run), and the initial pdf for the last run is plotted in Fig. 3(a). The centers and corresponding covariance matrices of the Gaussian components are linearly propagated for 10 seconds. Fig. 3(c) shows the plot of the approximated pdf without updating the weights of the Gaussian mixture, and Fig. 3(f) shows the corresponding approximation error plot for the weights of the last run. As expected, the final pdf is biased towards one of the modes of the true pdf, because the initial weights of those components were large. It is clear that although the approximated pdf did capture some of the non-Gaussian behavior, it fails to capture both modes accurately. Further, Figs. 3(d) and 3(e) show the approximated stationary pdfs after updating the weights of the components of the Gaussian mixture using weight update schemes I and II, respectively.
It is clear that both methods (Update I and Update II) yield approximately the same pdf, with the corresponding approximation errors given in Figs. 3(g) and 3(h). The integral absolute error (60) for the updated Gaussian sum is lower than the one without weight update. This result is consistent across all 20 runs, as shown in Fig. 3(j): for different initial weights, both methods yield approximately the same approximation error, considerably lower than without updating the weights. The first method has been applied recursively every second to update the weights, and the second method
has been applied only once, at the end of the simulation. The results of the 20 runs have been averaged and are tabulated at the end of the section, Table 1. As expected, with the adaptation of the weights, the approximation errors have been reduced considerably.

Example 4

For the fourth example we have considered the following 2-D noise driven quintic oscillator (68):

\ddot{x} + \eta\dot{x} + x(\epsilon_1 + \epsilon_2 x^2 + \epsilon_3 x^4) = g(t)\Gamma(t)  (68)

For simulation purposes, we choose the following parameters: g(t) = 1, Q = 1, \eta = 1, \epsilon_1 = 1, \epsilon_2 = -3.65, and \epsilon_3 > 0. The true stationary pdf (69), plotted in Fig. 4(b), is trimodal in this case, with each mode centered at one of three equilibrium points of the oscillator:

p(x, \dot{x}) \propto \exp\left(-\frac{2\eta}{g^2 Q}\left(\frac{\epsilon_1}{2} x^2 + \frac{\epsilon_2}{4} x^4 + \frac{\epsilon_3}{6} x^6 + \frac{1}{2}\dot{x}^2\right)\right)  (69)

To approximate the stationary pdf, we assume the initial pdf to be a mixture of 11 Gaussian pdfs with centers uniformly distributed between (-5, -5) and (5, 5). The covariance matrix for each Gaussian component is chosen to be P = 0.33 \cdot I_{2\times2}, and initially all Gaussian components are weighted equally, as shown in Fig. 4(a). The centers and corresponding covariance matrices of the Gaussian components are propagated for 10 seconds in accordance with (40) and (41). Fig. 4(c) shows the plot of the approximated pdf without updating the weights of the Gaussian mixture, and Fig. 4(f) shows the corresponding approximation error plot. As expected, the final pdf is not able to capture the mode centered at the unstable equilibrium point (0, 0). It is clear that although the approximated pdf did capture some of the non-Gaussian behavior, it fails to capture all three modes accurately. Figs. 4(d) and 4(e) show the approximated stationary pdf after updating the weights of the components of the Gaussian mixture model using the first and second method, respectively. Figs. 4(g) and 4(h) show the approximation errors of the two methods. The integral absolute errors of Eq.
(60), tabulated at the end of the section, show that with the adaptation of the weights the approximation errors have been reduced considerably. Both methods have been applied once, at the end of the simulation.
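The closed-form stationary pdfs such as (67) and (69) can be evaluated on a grid, which also gives a direct way to approximate the error metric (60). A sketch for a double-well density of the form (67); the parameter values (eta = 1, g^2 Q = 1, alpha = -1, beta = 3) are assumptions of this sketch:

```python
import numpy as np

# Grid evaluation of a double-well stationary pdf of the form (67) and the
# integral absolute error (60). Parameter values are assumptions of the sketch.
eta, g2Q, alpha, beta = 1.0, 1.0, -1.0, 3.0

x = np.linspace(-2.0, 2.0, 201)
v = np.linspace(-2.0, 2.0, 201)
X, V = np.meshgrid(x, v)
H = 0.5 * alpha * X**2 + 0.25 * beta * X**4 + 0.5 * V**2   # Hamiltonian
p_true = np.exp(-2.0 * eta / g2Q * H)
cell = (x[1] - x[0]) * (v[1] - v[0])
p_true /= p_true.sum() * cell            # normalize numerically

def integral_abs_error(p, p_hat, cell):
    """Grid approximation of E = integral |p - p_hat| dx, Eq. (60)."""
    return np.abs(p - p_hat).sum() * cell

# Against the trivial p_hat = 0 the error equals the total mass, i.e. 1
err = integral_abs_error(p_true, np.zeros_like(p_true), cell)
```

The same grid can be reused to evaluate the Gaussian sum approximation, so that true pdf, approximation, and error share one discretization.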
[Figure 3. Numerical Results, Example 3: (a) initial pdf; (b) true pdf; (c) final pdf without weight update (run #2); (d) final pdf with Weight Update I (run #2); (e) final pdf with Weight Update II (run #2); (f) error pdf without weight update (run #2); (g) error pdf with Weight Update I (run #2); (h) error pdf with Weight Update II (run #2); (i) random weights per run; (j) integral absolute error per run (No Update / Update I / Update II).]
[Figure 4. Numerical Results, Example 4: (a) initial pdf; (b) true pdf; (c) final pdf without weight update; (d) final pdf with Weight Update I; (e) final pdf with Weight Update II; (f) error pdf without weight update; (g) error pdf with Weight Update I; (h) error pdf with Weight Update II.]
Example 5

The last example involves state pdf propagation through the noise driven energy dependent damping oscillator given by the following equation:

\ddot{x} + \beta\dot{x} + x + \alpha(x^2 + \dot{x}^2)\dot{x} = g(t)\Gamma(t)  (70)

For simulation purposes, we choose the following parameters: g(t) = 1, Q = 1/\pi, \alpha = 0.125, \beta = -0.5, and the stationary probability density function is given by:

p(x, \dot{x}) \propto \exp\left(-\frac{\eta}{2g^2}\left(\beta(x^2 + \dot{x}^2) + \frac{\alpha}{2}(x^2 + \dot{x}^2)^2\right)\right)  (71)

Fig. 5(b) shows the plot of the true stationary pdf, with most of the probability mass centered on the boundary of the stable limit cycle in this case. To approximate the stationary pdf, we assume the initial pdf to be a mixture of 100 Gaussian pdfs with centers uniformly distributed between (-2, -2) and (2, 2). The covariance matrix for each Gaussian component is chosen to be P = 0.1347 \cdot I_{2\times2}. Initially, the weight of the first Gaussian component is w_1 = 0.5 and the remaining 99 Gaussian components have equal weights, w_i = 0.0051. The centers and corresponding covariance matrices of the Gaussian components are propagated for 10 seconds using (40) and (41). Fig. 5(c) shows the plot of the approximated pdf without updating the weights of the Gaussian mixture, and Fig. 5(f) shows the corresponding approximation error plot. As expected, the final pdf is not able to capture the non-Gaussian behavior in this case. Figs. 5(d) and 5(e) show the approximated stationary pdf after updating the weights of the components of the Gaussian mixture model using the first and second method, respectively. Figs. 5(g) and 5(h) show the approximation errors of the two methods. The integral absolute errors (60), tabulated at the end of the section, show that with the adaptation of the weights the approximation errors have been reduced considerably. The first method has been applied recursively every second to update the weights, and the second method has been applied only once, at the end of the simulation.
Finally, we note that the numerical results presented in this section reiterate our observations and support the conclusion that the improved performance of the proposed algorithms can be attributed to the adaptation of the amplitudes of the different components of the Gaussian mixture. The consistent advantages observed across five different problems provide compelling evidence for the merits of the proposed algorithms.
Figure 5. Numerical Results, Example 5: (a) initial pdf; (b) true pdf; (c) final pdf (without weight update); (d) final pdf (with weight update I); (e) final pdf (with weight update II); (f) error pdf (without weight update); (g) error pdf (with weight update I); (h) error pdf (with weight update II).
Table 1. Numerical approximation of the integral of the absolute error

                            No Update    Update I    Update II
  Example 3 (avg. 2 runs)
  Example 4 (1 run)
  Example 5 (1 run)

Conclusions

Two update schemes for the forecast weights are presented in order to obtain a better Gaussian sum approximation to the forecast pdf. The difference between the two methods comes from the particularities of their derivations. The first method updates the weights such that they minimize the integral square difference between the true probability density function and its approximation; its derivation indicates that it is more appropriate for discrete-time systems. When dealing with stochastic differential equations, the expectations (3) are computed using the Continuous Square-Root Unscented Transformation,25 and the weights are updated recursively so that the sigma points capture the effect of the process noise on the evolution of the pdf. The second method is derived such that the Gaussian sum satisfies the Fokker-Planck equation, which indicates that it is more appropriate for continuous-time systems; here the weights are updated such that they minimize the Fokker-Planck equation error. Both methods lead to convex quadratic minimization problems that are guaranteed to have a unique solution. Several benchmark problems have been used to compare the two methods with the usual procedure of not updating the weights in pure propagation settings. In all of these diverse test problems, the two proposed algorithms are found to produce considerably smaller errors than the existing approach. The results presented in this paper serve to illustrate the usefulness of adapting the weights of the different components of a Gaussian sum model. In filtering settings, where observations are available, the study of the impact of such an update scheme on Gaussian sum filtering is left as future work.
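The first weight-update scheme reduces to a convex quadratic program over the weight simplex (weights nonnegative and summing to one). A minimal numerical sketch of that idea follows; it discretizes the integral square difference on a grid and solves the resulting QP with a projected-gradient iteration. This is a hypothetical analogue for illustration, not the paper's closed-form formulation: the one-dimensional setup, the reference density `p_ref`, and the grid are all assumptions.

```python
import numpy as np

def gauss(x, mu, var):
    """1-D Gaussian density evaluated on grid x."""
    return np.exp(-0.5 * (x - mu)**2 / var) / np.sqrt(2.0 * np.pi * var)

def project_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1} (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def update_weights(Phi, p_ref, n_iter=2000):
    """Projected gradient on the convex objective ||Phi w - p_ref||^2,
    where column i of Phi holds the i-th Gaussian component on the grid."""
    n = Phi.shape[1]
    w = np.full(n, 1.0 / n)
    step = 1.0 / np.linalg.norm(Phi.T @ Phi, 2)  # conservative step from the Gram matrix norm
    for _ in range(n_iter):
        w = project_simplex(w - step * Phi.T @ (Phi @ w - p_ref))
    return w

# Toy check: the reference density is itself a mixture of the same components,
# so the QP should recover the generating weights.
x = np.linspace(-6.0, 6.0, 400)
mus, var = [-2.0, 0.0, 2.0], 0.5
Phi = np.column_stack([gauss(x, m, var) for m in mus])
w_true = np.array([0.7, 0.0, 0.3])
w = update_weights(Phi, Phi @ w_true)
```

Because the objective is convex and the simplex constraint set is convex and compact, the iteration converges to the unique minimizer, mirroring the uniqueness guarantee stated above for both update schemes.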
Finally, we fully appreciate that results from any test are difficult to extrapolate; however, testing the new algorithms on five benchmark problems does provide compelling evidence and a basis for optimism.
Acknowledgment

This work was supported by the Defense Threat Reduction Agency (DTRA) under Contract No. W911NF-06-C-0162. The authors gratefully acknowledge the support and constructive suggestions of Dr. John Hannan of DTRA.

References

1. Fuller, A. T., "Analysis of nonlinear stochastic systems by means of the Fokker-Planck equation," International Journal of Control, Vol. 9, 1969.
2. Risken, H., The Fokker-Planck Equation: Methods of Solution and Applications, Springer.
3. Polidori, D. C. and Beck, J. L., "Approximate Solutions for Nonlinear Vibration Problems," Probabilistic Engineering Mechanics, Vol. 11, 1996.
4. Chodas, P. W. and Yeomans, D. K., "Orbit Determination and Estimation of Impact Probability for Near Earth Objects," Proceedings of the Guidance and Control Conference, Vol. 11 of Advances in the Astronautical Sciences, 1999.
5. Wang, M. C. and Uhlenbeck, G., "On the Theory of Brownian Motion II," Reviews of Modern Physics, Vol. 17, No. 2-3, 1945.
6. Chakravorty, S., Kumar, M., and Singla, P., "A Quasi-Gaussian Kalman Filter," 2006 American Control Conference, Minneapolis, MN.
7. Kumar, M., Singla, P., Chakravorty, S., and Junkins, J. L., "A Multi-Resolution Approach for Steady State Uncertainty Determination in Nonlinear Dynamical Systems," 38th Southeastern Symposium on System Theory.
8. Kumar, M., Singla, P., Chakravorty, S., and Junkins, J. L., "The Partition of Unity Finite Element Approach to the Stationary Fokker-Planck Equation," 2006 AIAA/AAS Astrodynamics Specialist Conference and Exhibit, Keystone, CO, Aug. 2006.
9. Muscolino, G., Ricciardi, G., and Vasta, M., "Stationary and non-stationary probability density function for non-linear oscillators," International Journal of Non-Linear Mechanics, Vol. 32, 1997.
10. Di Paola, M. and Sofi, A., "Approximate solution of the Fokker-Planck-Kolmogorov equation," Probabilistic Engineering Mechanics, Vol. 17, 2002.
11. Doucet, A., de Freitas, N., and Gordon, N., Sequential Monte-Carlo Methods in Practice, Springer-Verlag, 2001.
12. Iyengar, R. N. and Dash, P. K., "Study of the random vibration of nonlinear systems by the Gaussian closure technique," Journal of Applied Mechanics, Vol. 45, 1978.
13. Roberts, J. B. and Spanos, P. D., Random Vibration and Statistical Linearization, Wiley.
14. Lefebvre, T., Bruyninckx, H., and De Schutter, J., "Kalman Filters for Non-Linear Systems: A Comparison of Performance," International Journal of Control, Vol. 77, No. 7, 2004.
15. Lefebvre, T., Bruyninckx, H., and De Schutter, J., "Comment on 'A New Method for the Nonlinear Transformations of Means and Covariances in Filters and Estimators'," IEEE Transactions on Automatic Control, Vol. 47, No. 8, 2002.
16. Alspach, D. and Sorenson, H., "Nonlinear Bayesian estimation using Gaussian sum approximations," IEEE Transactions on Automatic Control, Vol. 17, No. 4, 1972.
17. Ito, K. and Xiong, K., "Gaussian filters for nonlinear filtering problems," IEEE Transactions on Automatic Control, Vol. 45, No. 5, 2000.
18. Cvitanović, P., Artuso, R., Mainieri, R., Tanner, G., and Vattay, G., Chaos: Classical and Quantum, Niels Bohr Institute, Copenhagen.
19. Gubner, J. A., Probability and Random Processes for Electrical and Computer Engineers, Cambridge University Press.
20. Julier, S. and Uhlmann, J., "Unscented Filtering and Nonlinear Estimation," Proceedings of the IEEE, Vol. 92, No. 3, 2004.
21. Honkela, A., "Approximating nonlinear transformations of probability distributions for nonlinear independent component analysis," IEEE International Joint Conference on Neural Networks.
22. Wu, Y., Wu, M., Hu, D., and Hu, X., "An Improvement to Unscented Transformation," 17th Australian Joint Conference on Artificial Intelligence.
23. Boyd, S. and Vandenberghe, L., Convex Optimization, Cambridge University Press.
24. Singla, P. and Singh, T., "A Gaussian Function Network for Uncertainty Propagation through Nonlinear Dynamical System," 18th AAS/AIAA Spaceflight Mechanics Meeting, Galveston, TX, Jan. 2008.
25. Särkkä, S., "On Unscented Kalman Filtering for State Estimation of Continuous-Time Nonlinear Systems," IEEE Transactions on Automatic Control, Vol. 52, No. 9, 2007.
More informationSIGMA POINT GAUSSIAN SUM FILTER DESIGN USING SQUARE ROOT UNSCENTED FILTERS
SIGMA POINT GAUSSIAN SUM FILTER DESIGN USING SQUARE ROOT UNSCENTED FILTERS Miroslav Šimandl, Jindřich Duní Department of Cybernetics and Research Centre: Data - Algorithms - Decision University of West
More informationBayesian Inverse problem, Data assimilation and Localization
Bayesian Inverse problem, Data assimilation and Localization Xin T Tong National University of Singapore ICIP, Singapore 2018 X.Tong Localization 1 / 37 Content What is Bayesian inverse problem? What is
More informationGaussian Process Approximations of Stochastic Differential Equations
Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Dan Cawford Manfred Opper John Shawe-Taylor May, 2006 1 Introduction Some of the most complex models routinely run
More informationMobile Robot Localization
Mobile Robot Localization 1 The Problem of Robot Localization Given a map of the environment, how can a robot determine its pose (planar coordinates + orientation)? Two sources of uncertainty: - observations
More informationGaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008
Gaussian processes Chuong B Do (updated by Honglak Lee) November 22, 2008 Many of the classical machine learning algorithms that we talked about during the first half of this course fit the following pattern:
More information