Gaussian Filters for Nonlinear Filtering Problems


910 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 45, NO. 5, MAY 2000

Gaussian Filters for Nonlinear Filtering Problems

Kazufumi Ito and Kaiqi Xiong

Abstract—In this paper we develop and analyze real-time and accurate filters for nonlinear filtering problems based on Gaussian distributions. We present the systematic formulation of Gaussian filters and develop efficient and accurate numerical integration of the optimal filter. We also discuss the mixed Gaussian filters, in which the conditional probability density is approximated by a sum of Gaussian distributions. A new update rule of weights for Gaussian sum filters is proposed. Our numerical tests demonstrate that the new filters significantly improve the extended Kalman filter with no additional cost, and that the new Gaussian sum filter has a nearly optimal performance.

I. INTRODUCTION

THE NONLINEAR filtering problem consists of estimating the state of a nonlinear stochastic system from noisy observation data. The problem has been the subject of considerable research interest during the past several decades because it has many significant applications in science and engineering, such as navigational and guidance systems, radar tracking, sonar ranging, and satellite and airplane orbit determination [11], [14], [15]. As is well known, the most widely used filter for nonlinear filtering problems is the extended Kalman filter. It is derived from the Kalman filter based on the successive linearization of the signal process and the observation map. The extended Kalman filter has been successfully applied to numerous nonlinear filtering problems. If the nonlinearities are significant, however, its performance can be substantially improved upon. Such efforts have been reported in [1], [5], [7], [12], [13], [20]. In this paper our objective is to develop and analyze real-time and accurate filters for nonlinear filtering problems based on Gaussian distributions.
We present the systematic formulation of Gaussian filters and mixed Gaussian filters. We first develop the Gaussian filter. The proposed filter is based on the following two steps: 1) we assume the conditional probability density to be a Gaussian distribution (i.e., an assumed density), and 2) we obtain the Gaussian filter by equating the Bayesian formula with respect to the first moment (mean) and the second moment (covariance). Our approach is based on the efficient numerical integration of the Bayesian formula for optimal recursive filtering. The direct evaluation of the Jacobian matrix associated with the extended Kalman filter is avoided, which is similar to the approach recently reported in [12]. Secondly, we use Gaussian sum filters for the development of nearly optimal filters. The Gaussian sum filter has been studied in [1] and [20]. However, we adapt our Gaussian filter for the update of the Gaussian distributions, and we also suggest some new update rules of weights for Gaussian sum filters. Through our experimental study we found that the filters developed in this paper perform better than or as well as the filter of Julier and Uhlmann [12]. They provide a significant improvement over the extended Kalman filter with no additional cost. An outline of the paper is as follows. In Section II we develop the Gaussian filter based on a single Gaussian distribution. In Section III we discuss the efficient numerical integration of the Gaussian filter based on quadrature rules and introduce the Gauss–Hermite filter (GHF) and the central difference filter (CDF). In Section IV we formulate the Gauss–Hermite filter and the central difference filter as filter algorithms. In Section V we introduce the mixed Gaussian filter and the new update rules of weights. In Section VI we discuss the relation of the Gaussian filter for the discrete-time system to the continuous-time optimal filter governed by the Zakai equation.
In Section VII we analyze the stability and performance bound of the Gaussian filters and the mixed Gaussian filters developed in the paper. In Section VIII we report our numerical findings and comparison studies; we also demonstrate a nearly optimal performance of the mixed Gaussian filter. Finally, we conclude our results in Section IX.

II. GAUSSIAN FILTERS

We discuss the nonlinear filtering problem for the discrete-time signal system for the R^n-valued process

x_{k+1} = f(x_k) + w_k,  (2.1)

and the observation process given by

y_k = h(x_k) + v_k,  (2.2)

where w_k, v_k are white noises with covariances Q, R, respectively. We assume that the initial condition x_0 and the noises are independent random variables. The optimal nonlinear filtering problem is to find the conditional expectation of the process x_k given the observation data Y_k = {y_1, ..., y_k}. The probability density function of the conditional distribution is given by the Bayes formula, consisting of the prediction step

p(x_k | Y_{k-1}) = ∫ p(x_k | x_{k-1}) p(x_{k-1} | Y_{k-1}) dx_{k-1}  (2.3)

and the correction step

p(x_k | Y_k) = p(y_k | x_k) p(x_k | Y_{k-1}) / ∫ p(y_k | x_k) p(x_k | Y_{k-1}) dx_k.  (2.4)

Manuscript received November 7, 1997; revised November 30, 1998 and June 16. Recommended by Associate Editor T. E. Duncan. This work was supported in part by the Office of Naval Research and by MURI-AFOSR. The authors are with the Center for Research in Scientific Computation, North Carolina State University, Raleigh, NC USA (kito@eos.ncsu.edu; xiong@eos.ncsu.edu).
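The recursion (2.3), (2.4) can be illustrated directly on a grid for a scalar model. The following is a sketch in our own notation; the linear test model and noise levels are illustrative assumptions, not the paper's examples.

```python
import numpy as np

def bayes_step(grid, prior, y, f, h, q_var, r_var):
    """One step of the recursive Bayes filter: prediction (2.3), correction (2.4).

    grid  : equally spaced state points (1-D array)
    prior : density p(x_{k-1} | Y_{k-1}) evaluated on the grid
    """
    dx = grid[1] - grid[0]
    # Prediction (2.3): integrate the Gaussian transition density against the prior.
    trans = np.exp(-(grid[:, None] - f(grid)[None, :]) ** 2 / (2 * q_var)) \
        / np.sqrt(2 * np.pi * q_var)
    pred = trans @ prior * dx
    # Correction (2.4): multiply by the likelihood p(y_k | x_k) and renormalize.
    like = np.exp(-(y - h(grid)) ** 2 / (2 * r_var)) / np.sqrt(2 * np.pi * r_var)
    post = like * pred
    return post / (post.sum() * dx)
```

For a linear model the posterior mean recovered from the grid agrees with the Kalman filter, which makes a convenient sanity check of the recursion.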

where p(x_k | Y_{k-1}) is the one-step prediction and p(x_{k-1} | Y_{k-1}) is the probability density function of x_{k-1} conditioned on Y_{k-1}. That is, the recursive filter (2.3), (2.4) consists of the prediction step (2.3) and the correction step (2.4). We consider a Gaussian approximation of this recursive formula and develop Gaussian filters. We assume that p(x_{k-1} | Y_{k-1}) is a single Gaussian distribution with mean x̂_{k-1} and covariance P_{k-1}. Then we construct the Gaussian approximation of p(x_k | Y_k) in the following two steps.

First, we consider the predictor step. We approximate p(x_k | Y_{k-1}) by the Gaussian distribution that has the same mean and covariance as the prediction. By Fubini's theorem the mean and covariance of the prediction are given by

x̄_k = ∫ f(x) N(x; x̂_{k-1}, P_{k-1}) dx  (2.5)

P̄_k = ∫ (f(x) − x̄_k)(f(x) − x̄_k)ᵀ N(x; x̂_{k-1}, P_{k-1}) dx + Q.  (2.6)

Next, we discuss the corrector step (2.4), where the denominator is the normalization constant and we assume that p(x_k | Y_{k-1}) is given by the Gaussian approximation defined by (2.5) and (2.6). Define the innovation process ν_k = y_k − ŷ_k. Here we again approximate the conditional distribution of x_k given the observations up to time k by a Gaussian. That is, we approximate the distribution of the innovation by its Gaussian approximation; this means that the probability density function of ν_k is given by the Gaussian distribution with mean zero and covariance defined by

ŷ_k = ∫ h(x) N(x; x̄_k, P̄_k) dx,  P^{νν}_k = ∫ (h(x) − ŷ_k)(h(x) − ŷ_k)ᵀ N(x; x̄_k, P̄_k) dx + R.  (2.7), (2.8)

Remark 1.1: To derive the mean and covariance of the innovation we can also use (2.1) and the independence of the noises, which yields precisely the same result, though differently expressed. Thus, if p(x_k | Y_{k-1}) is a Gaussian with mean x̄_k and covariance P̄_k, then the Gaussian approximation of the innovation has mean zero and covariance P^{νν}_k as defined above.

Finally, we construct the Gaussian approximation of p(x_k | Y_k) with mean and covariance defined by

x̂_k = x̄_k + K_k ν_k  (2.9)

P_k = P̄_k − K_k P^{νν}_k K_kᵀ  (2.10)

where the Kalman-filter gain is defined by

K_k = P^{xν}_k (P^{νν}_k)^{−1}  (2.11)

and the cross covariance P^{xν}_k is defined by

P^{xν}_k = ∫ (x − x̄_k)(h(x) − ŷ_k)ᵀ N(x; x̄_k, P̄_k) dx.  (2.12)
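The moment integrals (2.5)–(2.12) must be evaluated numerically. A minimal sketch of the resulting filter step follows, using Gauss–Hermite quadrature as anticipated in the later sections; the function names, the three-point choice, and the linear test system are ours, not the paper's.

```python
import itertools
import numpy as np

def gh_points_weights(m, n):
    # 1-D probabilists' Gauss-Hermite rule via the tridiagonal eigenproblem,
    # then an n-dimensional tensor product (m**n points).
    J = np.diag(np.sqrt(np.arange(1.0, m)), 1)
    q, V = np.linalg.eigh(J + J.T)
    w1 = V[0] ** 2
    pts = np.array(list(itertools.product(q, repeat=n)))
    wts = np.prod(np.array(list(itertools.product(w1, repeat=n))), axis=1)
    return pts, wts

def ghf_step(x_hat, P, y, f, h, Q, R, m=3):
    """One predictor/corrector step of a Gauss-Hermite Gaussian filter."""
    n = x_hat.size
    pts, wts = gh_points_weights(m, n)
    # Predictor (2.5), (2.6): quadrature points from the Cholesky factor of P.
    S = np.linalg.cholesky(P)
    X = x_hat + pts @ S.T
    FX = np.array([f(x) for x in X])
    x_bar = wts @ FX
    P_bar = (FX - x_bar).T @ ((FX - x_bar) * wts[:, None]) + Q
    # Corrector (2.7)-(2.12): fresh points from the predicted moments.
    S = np.linalg.cholesky(P_bar)
    X = x_bar + pts @ S.T
    HX = np.array([h(x) for x in X])
    y_hat = wts @ HX
    P_vv = (HX - y_hat).T @ ((HX - y_hat) * wts[:, None]) + R
    P_xv = (X - x_bar).T @ ((HX - y_hat) * wts[:, None])
    K = P_xv @ np.linalg.inv(P_vv)                    # gain (2.11)
    return x_bar + K @ (y - y_hat), P_bar - K @ P_vv @ K.T
```

Because the quadrature is exact for low-degree polynomials, the step reproduces the ordinary Kalman filter exactly when f and h are linear, which is a useful correctness check.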

In order to implement the Gaussian filter we must develop approximation methods to evaluate the integrals (2.5)–(2.12). In Section III we discuss approximation methods for integrals of the form (3.1). If we factor the covariance as P = √P √Pᵀ and change the coordinate of integration by x = x̂ + √P s, then

∫ f(x) N(x; x̂, P) dx = ∫ f(x̂ + √P s) N(s; 0, I) ds,  (3.2)

where f is a given function. Our discussion includes the Gauss–Hermite quadrature rule and the finite difference approximation. Note that a different approximation to this integration results in a variant of the Gaussian filter. For example, if we approximate the integration by linearizing f at x̂, then we obtain the extended Kalman filter. In order to improve the performance of the Gaussian filter we also discuss the mixed Gaussian filter in Section V, in which we approximate the conditional density by a linear combination of multiple Gaussian distributions with weights summing to one. Each Gaussian distribution is then updated separately by the Gaussian filter (2.5)–(2.12), and in the corrector step we update the weights; for example, we can determine the weights by the L²-projection, i.e., by minimizing the distance to the density defined as in the corrector step (2.4).

We apply the Gauss–Hermite quadrature rule. In one dimension the rule is

∫ g(s) N(s; 0, 1) ds ≈ Σ_{i=1}^{m} w_i g(q_i),

where the equality holds for all polynomials of degree up to 2m − 1, and the quadrature points q_i and the weights w_i are determined as follows (e.g., see [9]). Let J be the symmetric tridiagonal matrix with zero diagonal and off-diagonal entries √1, …, √(m−1). Then the q_i are the eigenvalues of J, and w_i is equal to the square of the first element of the ith normalized eigenvector of J. Thus, the n-dimensional integral is approximated by the tensor-product rule (3.3), which is exact for all polynomials of the form s₁^{k₁} ⋯ s_n^{k_n} with each k_j ≤ 2m − 1. In order to evaluate (3.3) we need m^n-point function evaluations. For example, for m = 3 we have points (−√3, 0, √3) with weights (1/6, 2/3, 1/6), which requires nine-point function evaluations for n = 2.

In Julier and Uhlmann [12] the standard normal distribution is approximated by the discrete distribution over the 2n + 1 points 0 and ±√(n + κ) e_i, where κ is a constant and e_i is the ith unit vector in Rⁿ, with weight κ/(n + κ) at the origin and 1/(2(n + κ)) at each of the remaining points. This discrete distribution has the same first, second, and higher odd moments as the standard normal distribution.
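Both rules can be checked numerically. The sketch below (our function names) builds the one-dimensional Gauss–Hermite rule from the tridiagonal eigenproblem and the Julier–Uhlmann discrete distribution, and the test verifies the moment-matching properties claimed above.

```python
import numpy as np

def gauss_hermite_normal(m):
    """m-point rule for ∫ g(s) N(s; 0, 1) ds.

    Points are the eigenvalues of the symmetric tridiagonal matrix with zero
    diagonal and off-diagonals sqrt(1), ..., sqrt(m-1); the weight of each
    point is the squared first component of the normalized eigenvector.
    """
    J = np.diag(np.sqrt(np.arange(1.0, m)), 1)
    vals, vecs = np.linalg.eigh(J + J.T)
    return vals, vecs[0] ** 2

def ju_rule(n, kappa):
    """Julier-Uhlmann discrete approximation of N(0, I_n): 2n + 1 points."""
    pts = [np.zeros(n)]
    wts = [kappa / (n + kappa)]
    for i in range(n):
        e = np.zeros(n)
        e[i] = np.sqrt(n + kappa)
        pts += [e, -e]
        wts += [1.0 / (2 * (n + kappa))] * 2
    return np.array(pts), np.array(wts)
```

For κ = 2 and n = 1 the JU points and weights coincide with the three-point Gauss–Hermite rule, so the two rules share the same fourth moment.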
III. QUADRATURE RULES

In this section we discuss the approximation methods for the integral of the form (3.1). Julier and Uhlmann developed a Gaussian filter based on the following quadrature rule for (3.1):

∫ g(s) N(s; 0, I) ds ≈ (κ/(n + κ)) g(0) + Σ_{i=1}^{n} (1/(2(n + κ))) [g(√(n + κ) e_i) + g(−√(n + κ) e_i)].  (3.4)

The JU-rule requires (2n + 1)-point function evaluations and is exact for all quadratic polynomials. If we set κ = 2, then for n = 1 the JU-rule coincides with the three-point Gauss–Hermite rule. Finally, we consider the polynomial interpolation methods. We approximate f by the quadratic function that satisfies the interpolation conditions

at the points consisting of x̂ and x̂ ± h √P e_i, i = 1, …, n, where h is the stepsize. That is, the quadratic approximation is given by (3.5), where the vector and the symmetric matrix in (3.5) are defined by central differences of f. Thus, we approximate the integral (3.1) by (3.6), in which the quadratic approximation is integrated exactly. Next, we consider the approximation defined by (3.7), (3.8), which is based only on the function values and uses the diagonal second-order correction of the central difference approximation in (3.5). The integrals are then approximated by (3.9), (3.10). If we remove the second-order correction term in (3.9) and (3.10), then this coincides with the extended Kalman filter with the central difference approximation of the Jacobian of f.

IV. FILTER ALGORITHMS

In this section we describe the filter algorithms that are based on the Gaussian filter and the quadrature methods described in Section III. We first consider the Gauss–Hermite filter based on the Gauss–Hermite quadrature rule. Let (q_i, w_i) be the quadrature rule for the standard normal distribution; thus, we approximate the integral (3.1) by (4.1). Let the starting values for the mean and covariance of the random variable be given. In the Gauss–Hermite filter we apply the following predictor and corrector steps recursively.

A. Gauss–Hermite Filter

Predictor Step: Compute the Cholesky factorization of the covariance, set the quadrature points accordingly, and update the mean and covariance by the quadrature approximations of (2.5), (2.6).

Corrector Step: Compute the Cholesky factorization of the predicted covariance, set the quadrature points accordingly, and update the mean and covariance by the quadrature approximations of (2.7)–(2.12).

We use the Cholesky decomposition for finding the matrix square root [6] in our calculations. One of the advantages of the quadrature-

based filter is that we are not required to have the derivatives of f and h. Secondly, we present the filter algorithm based on the central difference approximation with the second-order diagonal correction. We summarize it as follows.

B. Central Difference Filter

Predictor Step: Compute the Cholesky factorization of the covariance and update the mean and covariance by (3.9), (3.10), where the central difference approximations of f are used.

Corrector Step: Compute the Cholesky factorization of the predicted covariance and update the mean and covariance, where the central difference approximations of h are used. Here we approximate the right-hand side by the Gaussian distribution as in (2.5) and (2.6).

In the algorithm we avoid calculating the derivatives of f and h; instead, we use the central differences. The following are the operation counts in terms of function evaluations. If we use the quadrature rule based on the m-point Gauss–Hermite rule, then the algorithm requires m^n function evaluations. If we use the Julier–Uhlmann discrete Gaussian rule, then the algorithm requires 2n + 1 function evaluations. If we use the central difference algorithm, the same order of function evaluations is required. As we will see, our numerical testings indicate that the algorithm based on the Gauss–Hermite rule performs better than the others.

V. MIXED GAUSSIAN FILTER

In this section we discuss the mixed Gaussian filter. We approximate the conditional probability density by the linear combination of multiple Gaussian distributions, i.e.,

p(x) = Σ_{i=1}^{N} α_i N(x; x̂_i, P_i),  Σ_i α_i = 1.  (5.1)

Here we apply the Gaussian filter (2.5)–(2.12) to each Gaussian distribution to obtain the update. Each update is independent from the others and can be performed in a parallel manner. Next, we update the weights α_i at the end of the corrector step. We discuss three update formulas in what follows. First, by equating the zeroth moment of each Gaussian distribution we obtain the update (5.2).
If we use the three-point rule, then we require 729 function evaluations for the case n = 6, which can still be done in real time. The zeroth-moment weight update is

α_i⁺ = α_i N(ν_i; 0, P^{νν}_i) / Σ_j α_j N(ν_j; 0, P^{νν}_j),  (5.2)

where ν_i and P^{νν}_i are defined by (2.7) and (2.8); this is the update formula discussed in [1]. Next, we apply the collocation condition at selected points

to obtain the update (5.3), where the regularization constant is chosen so that the singularity of the matrix is avoided and the matrices are defined accordingly. Finally, we discuss the simultaneous update of the weights. We determine the weights by the L²-projection, i.e., we minimize (5.4) over the weights, where the target density is defined as in the corrector step (2.4). In order to perform the minimization (5.4) we need to evaluate integrals of linear combinations of Gaussian distributions, which is relatively expensive. Hence we propose the minimization of the sum of collocation distances (5.5) over weights satisfying the simplex constraints; a positive lower bound is chosen so that the likelihood of each Gaussian distribution is nonzero. Problem (5.5) is formulated as the quadratic programming problem (5.6), and we solve (5.6) to obtain the weights at each corrector step by using an existing numerical optimization method (e.g., see [6]).

The theoretical foundation of the Gaussian sum approximation above is that any probability density function can be approximated as closely as desired by a Gaussian sum. More precisely, we state the following error estimate, the proof of which is found in [17].

Theorem 5.1: Let s be a nonnegative integer. For any ε > 0 there exists a mask such that for any density function the estimate (5.7) holds, where the approximant is a linear combination of Gaussian distributions given by (5.8) and the constant is independent of the density. Roughly speaking, the estimate (5.7) shows that any probability density function can be

approximated by a sum of Gaussian distributions.

VI. RELATION TO CONTINUOUS-TIME FILTER

In this section let us discuss the nonlinear filtering problem for the continuous-time signal process in Rⁿ generated by the stochastic differential equation (6.1), where the driving process is the standard Brownian motion, i.e., the signal is a diffusion process [3], [11], [19]. So, (6.1) holds in the sense of Ito. Now we consider the continuous observation process (6.2), where the observation noise is a standard Brownian process that is independent of the signal noise. Or, we consider the discrete observation process (6.3), where Δ is the stepsize and the noise is white with covariance R. We assume that the initial condition and the noises are independent. Then the conditional probability density satisfies the Zakai equation (6.4), where the generator is the Fokker–Planck operator [3], [19]. As shown in [8], [10], [15], the discrete-time filter (2.3), (2.4) applied to the time-discretized signal system (6.5) of (6.1), (6.2) provides an approximation method for the continuous-time optimal filter for (6.1), (6.2). We approximate the process by the discrete-time process (6.6): here the interval is subdivided into subintervals, the discrete state denotes the approximation of the process at each subdivision point, and (6.6) denotes the discretization of the underlying ordinary differential equation. We employ a family of Runge–Kutta methods in our calculation, including the Euler approximation. The predictor step (2.5), (2.6) is applied successively, i.e., once per subinterval, to obtain (6.7). Note that (6.8) denotes the finite difference approximation for the ordinary differential equation. The proposed Gaussian filter does not converge to the optimal filter (6.4) as Δ → 0 unless the conditional density is Gaussian. However, it converges to a Gaussian filter which approximates (6.4); this will be discussed elsewhere.
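The Euler time discretization of the signal process used above can be sketched as follows; the drift in the test is illustrative, not the paper's example.

```python
import numpy as np

def euler_maruyama(f, sigma, x0, dt, n_steps, rng=None):
    """Euler approximation of dx = f(x) dt + sigma dB on a uniform grid (cf. (6.6))."""
    rng = rng or np.random.default_rng(0)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for j in range(n_steps):
        # Deterministic Euler increment plus a sqrt(dt)-scaled Gaussian increment.
        x[j + 1] = x[j] + f(x[j]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x
```

With the diffusion coefficient set to zero the scheme reduces to the forward Euler method for the underlying ordinary differential equation, which gives a deterministic check.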
Next, we discuss the discrete-time filter for the continuous-time process (6.1) with the discrete-time observation (6.3). Hence, the mean and covariance of the prediction are given by (6.9). Then the corrector step (2.7)–(2.12) is applied as it is to obtain the update.

VII. STABILITY ANALYSIS OF FILTER ALGORITHMS

In this section we analyze the stability and performance bound of the Gaussian filters (2.5)–(2.12) for the signal process

(6.5), i.e., a discrete approximation of the continuous signal system (6.1), where the noise is white with covariance proportional to the stepsize. We assume the growth condition (7.1). Then we have the following theorem, which shows stability of the proposed Gaussian filter independent of the fineness of the time discretization on a fixed bounded time interval.

Theorem 7.1: The solution to (2.5)–(2.12) has the estimate (7.2).

Proof: From (2.5), (2.6) the Gaussian approximation of the prediction has the mean and covariance defined by (7.3) and (7.4). Thus by the Cauchy–Schwarz inequality and by assumption (7.1) we obtain (7.5). Note that for every step the covariance is nonnegative and satisfies the stated property; thus by (7.5) we obtain the estimate (7.2).

Remark 7.1: Using arguments similar to those that lead to (7.2), we can prove that the same result holds for the approximation methods discussed in Sections III and IV. That is, the estimate (7.2) is still valid for the Gauss–Hermite filter, provided that the corresponding assumption holds.

Next, we discuss the stability of the mixed Gaussian filter. The mixed Gaussian filter assumes the form (7.6), where the ith Gaussian distribution is computed by the Gaussian filters described in Sections III and IV in a parallel manner. The weights are determined at the end of the corrector step based on the likelihood of each Gaussian distribution. Under the assumption (7.1) each Gaussian filter in the Gaussian sum (7.6) is stable in the sense of (7.2).

Next we discuss the performance bound. In general, let the two operators denote the one-step update of conditional probability density functions based on the exact filter (2.3), (2.4) and the mixed Gaussian filter discussed in Section V, respectively. We estimate the difference between the exact filter and the mixed Gaussian filter (7.7), where one density is the probability density function of the random variable and the other is its mixed Gaussian approximation. Then we have the following result.
Theorem 7.2: Assume that (7.8) holds for any probability density function. Then the error estimate (7.9) holds, where the initial error is defined by

the error expression (7.10).

Proof: Note that the one-step error is given by (7.10). Thus we obtain a recursion in the error, which leads to (7.9).

Remark 7.2: Suppose the initial density is a mixed Gaussian distribution and assume that f and h are affine functions. Then we can show that the mixed Gaussian filter is exact, since the Gaussian filter is exact for a Gaussian distribution in this case. To understand the validity and usage of the error formula (7.9), we adopt the following filter (which is different from the one we implemented), i.e., the mixed Gaussian filter with fixed means and covariances. For simplicity of our discussion we consider fixed Gaussian elements with given nodal points and covariance, and we define the corresponding subspace of probability distributions together with the projection onto it. We consider the mixed Gaussian filter with fixed means and covariances; thus the update operator is fixed in this case. Hence the error can be estimated based on the error estimate formula (7.9), assuming the smoothness of the density.

VIII. NUMERICAL RESULTS AND COMPARISONS

In this section we demonstrate the feasibility of our proposed filter algorithms, the Gauss–Hermite filter (GHF) and the central difference filter (CDF), using the three test examples in Examples 8.1–8.3. We compare the performance of our proposed filters against that of the extended Kalman filter (EKF) and the filter of Julier and Uhlmann (JUF) [12]. We use the average root mean square error for our comparison of the methods. The average root mean square error is defined by

ē_i(k) = sqrt( (1/N) Σ_{s=1}^{N} (x_i^s(k) − x̂_i^s(k))² ),  (8.1)

which is based on N different simulation runs. Here the subscript i denotes the ith component of the state vector and its corresponding estimate, and the superscript s denotes the sth simulation run.
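The average root mean square error (8.1) can be computed as below; the array layout (runs × time × components) is our assumption about how the simulation data are stored.

```python
import numpy as np

def average_rmse(truth, estimates):
    """Average root mean square error (8.1) over simulation runs.

    truth, estimates : arrays of shape (runs, time, dim)
    Returns an array of shape (time, dim): per-component error at each time
    step, averaged in the root-mean-square sense over the runs.
    """
    return np.sqrt(np.mean((truth - estimates) ** 2, axis=0))
```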
Note that for a one-dimensional system the filter of Julier and Uhlmann [12] coincides with the three-point Gauss–Hermite filter when κ = 2. Also, when the stepsize is chosen suitably, the central difference filter has the same estimation algorithm as the filter of Julier and Uhlmann except for the covariance estimation. Hence, in this section the stepsize is always chosen as this value for our study. We also note that our proposed filter algorithms maintain the nonnegative definiteness of the covariance update. We carried out our comparison study using various examples with different starting states and different problem data, but we present only selected results in what follows.

Example 8.1: We consider the one-dimensional signal process (8.2) and the observation process (8.3). It is motivated by a continuous-time system (6.1), (6.2) of the form (8.4), as described in Section VI. This example is used as a benchmark for the following reasons. The deterministic equation (8.5) has two stable equilibria, 1 and −1, and one unstable equilibrium, 0, so the signal process is distributed around one of the stable equilibria. The observation function is the square of the shifted distance from the origin and distinguishes the two stable equilibria only marginally. The performance of the filter is thereby tested through the small margin with which the measurement function detects the signal process. We use the following problem data: the time elapse, the initial condition (i.e., the starting point), and the system

parameters. We consider the time from 0 to 4, and κ is chosen as the best value (see [12]).

Fig. 1. The average of root mean square error.

Fig. 1 shows the average root mean square errors committed by each filter across a simulation consisting of 50 runs. From Fig. 1 we see that the five-point Gauss–Hermite filter performs better than the others. The central difference filter has performance similar to that of the filter of Julier and Uhlmann [12] in this example. All three filters perform better than the extended Kalman filter.

Example 8.2: In this example we consider the Lorenz system (8.6), where the state is the R³-valued process and the driving processes are white noises with the same covariance. Here c is a constant vector and σ is a constant scalar. The observation function is chosen as the shifted distance from the origin. It is motivated by the Lorenz stochastic differential system of the form (8.4) as described in Section VI, with the three-dimensional deterministic Lorenz equations governing the drift of the system. Here we choose parameter values for which the problem is mathematically very interesting, because in this case there are three unstable equilibria and a strange attractor for the equations. We use the following problem data: the system parameters, the initial condition, and the time from 0 to 4, with the initial estimate and its covariance given. In Figs. 2–4 we show the average of the root mean square errors for each component of the system state committed by the algorithms of the Gauss–Hermite quadrature rule, the central difference approximation, and the JU-rule, respectively, across a simulation consisting of 50 runs. As shown in Figs. 2–4, the GHF and the CDF have smaller errors than the JUF and the EKF. From these figures we conclude that in this example the GHF has a substantially improved performance and performs a little better than the CDF.
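For concreteness, the deterministic Lorenz vector field and a simple Euler simulation can be sketched as follows. The classical parameter values (10, 28, 8/3) are assumed here for illustration, since the paper's exact choices are not recoverable from the extracted text; at these values there are indeed three unstable equilibria and a strange attractor.

```python
import numpy as np

# Classical Lorenz parameters (sigma, rho, beta) -- an illustrative assumption.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_rhs(x):
    """Right-hand side of the deterministic Lorenz equations."""
    return np.array([
        SIGMA * (x[1] - x[0]),
        x[0] * (RHO - x[2]) - x[1],
        x[0] * x[1] - BETA * x[2],
    ])

def simulate_lorenz(x0, dt, n_steps, q=0.0, rng=None):
    """Euler discretization of the Lorenz system with optional process noise."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        x = x + dt * lorenz_rhs(x) + np.sqrt(q * dt) * rng.standard_normal(3)
    return x
```

The nonzero fixed points (±sqrt(β(ρ−1)), ±sqrt(β(ρ−1)), ρ−1) give a quick check that the vector field is coded correctly.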
The drift of (8.6) is the Lorenz equation (8.7), in which the three coefficients are positive parameters; they have a great deal of impact on the behavior of the solutions.

Example 8.3: We further discuss the three-dimensional continuous signal process (8.8) and the discrete observation process (8.9), where the driving noises are zero-mean uncorrelated noises with given covariances. The observation function is a range measurement. The above system states represent altitude (in feet), velocity (in feet per second), and a constant ballistic

Fig. 2. The average of root mean square error.

Fig. 3. The average of root mean square error.

coefficient (in per second), respectively. The detailed physical meaning of the system and its parameters can be found in [2]. We chose this example as a benchmark because it contains significant nonlinearities in the signal and observation processes and has been discussed widely in the literature. In the previous literature, a fourth-order Runge–Kutta scheme with 64 steps between each observation is employed for the numerical integration of (8.8) in order to deal with the significant nonlinearity of the signal process [2], [12]. That is, in the predictor steps of the EKF and the JUF, the means are calculated using this numerical scheme. Then the covariances are propagated from one observation step to the next using

Fig. 4. The average of root mean square error.

Fig. 5. The absolute value of average altitude error.

the linearized dynamics, where the Jacobian is evaluated at the current estimate. In the comparison, we implement the proposed approximation schemes (6.7)–(6.9) to solve the filtering problem of (8.8), (8.9). In the predictor step of the GHF, we directly use the Euler approximation of the signal process (8.8) based on 32 or 64 steps between each observation (denoted by GHF32 and GHF64, respectively). However, in the NGHF the signal process system (8.8) is rewritten in the discrete form

Fig. 6. The absolute value of average velocity error.

Fig. 7. The absolute value of average x₃-error.

where the approximation of the process at each substep is propagated by the fourth-order Runge–Kutta scheme; specifically, 64 substeps between observations are used in NGHF64. We use the following data: the system parameters, the initial condition, the time from 0 to 30, and the initial estimate with its covariance.
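The fourth-order Runge–Kutta propagation between observations can be sketched as follows; the drift used in the test is illustrative, not the re-entry dynamics of (8.8).

```python
import numpy as np

def rk4_step(f, x, dt):
    """One step of the classical fourth-order Runge-Kutta scheme for dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(f, x, dt, n_sub):
    """Propagate the mean across one observation interval with n_sub substeps."""
    for _ in range(n_sub):
        x = rk4_step(f, x, dt / n_sub)
    return x
```

With 64 substeps the scheme resolves even stiff drift terms far below the observation noise level, which is why the literature adopts it for this benchmark.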

Fig. 8. The comparison of the signal and the estimates of the two mixed Gaussian filters and the optimal filter.

Fig. 9. The weights of the two mixed Gaussian filters.

We also choose the optimization parameter κ in the JUF for the three-dimensional system. In Figs. 5–7 we show the absolute value of the average error for each component committed by the algorithms of the Gauss–Hermite quadrature rule, the JU-rule, and the EKF, respectively, across a simulation consisting of 50 runs. These figures show that in this example the GHF has performance superior to the JUF and the EKF.

A. Mixed Gaussian Example

In this section we first demonstrate a nearly optimal performance of the mixed Gaussian method described in Section V. We consider the same example as in Example 8.1. We employ two Gaussian distributions starting from

Fig. 10. The comparison of the probability density functions of the mixed Gaussian filter and the optimal filter at t = 1.

Fig. 11. The comparison of the probability density functions of the mixed Gaussian filter and the optimal filter at t = 2.75.

respectively, with given initial weights. In order to demonstrate the effectiveness of the proposed update (5.5), (5.6), we jumped the process at a chosen time [which corresponds to a jump in the continuous-time process (8.4)]; this may represent an impulse force at that instant. We compare the performance of the mixed Gaussian filter against that of the optimal Zakai filter (6.4) in Figs. 8–11. Here GHF1 (GHF2) represents the three-point Gauss–Hermite filter that is the first (second) component of the two-component mixed Gaussian filter, with Weight 1 (Weight 2). We approximate the Zakai equation by the operator-splitting method as described in [10]. We observed that the mixed Gaussian filter performs nearly optimally up to the jump time. We also note that the Zakai filter is no longer optimal after the jump. As seen in Figs. 8 and 9, the update formula (5.5),

Fig. 12. The comparison of altitude errors of each Gaussian filter.

Fig. 13. The comparison of velocity errors of each Gaussian filter.

(5.6) quickly captures the phase change from the one stable equilibrium to the other. These observations are also supported by the comparison in Figs. 10 and 11.

Example 8.3 (Revisited): Next, we also apply our mixed Gaussian method to Example 8.3. We employ two Gaussian distributions starting from different means with the same covariance

Fig. 14. The comparison of x₃ errors of each Gaussian filter.

Fig. 15. The weights of the two mixed Gaussian filters.

respectively, with given initial weights. We demonstrate the performance of the mixed Gaussian filter in Figs. 12–15. We see that the mixed Gaussian filter based on the update formula (5.5), (5.6) captures the signal process very well. That is, the weights of the two mixed Gaussian filters shown in Fig. 15 effectively pick up the combined performance of the Gaussian filters GHF1 and GHF2. More detailed discussions and numerical testings of the mixed Gaussian filter will be presented in a forthcoming paper.

IX. CONCLUSIONS

This paper presents the systematic formulation of Gaussian filters and mixed Gaussian filters based on the efficient numerical integration of the Bayesian formula for the optimal recursive filter. Based on our formulation we develop two filter algorithms, namely, the Gauss–Hermite filter (GHF) and the central difference filter (CDF). We demonstrated the feasibility of our proposed filter algorithms on test nonlinear filtering prob-

lems. Our numerical results indicate that both the Gauss–Hermite filter and the central difference filter have performance superior to the filter of Julier and Uhlmann and the extended Kalman filter. We also proposed new update rules for the Gaussian sum filters and showed that they can perform nearly optimally.

ACKNOWLEDGMENT

The authors thank the referees for their careful reviews and helpful suggestions.

REFERENCES

[1] D. L. Alspach and H. W. Sorenson, "Nonlinear Bayesian estimation using Gaussian sum approximation," IEEE Trans. Automat. Contr., vol. 17, 1972.
[2] M. Athans, R. P. Wishner, and A. Bertolini, "Suboptimal state estimation for continuous-time nonlinear systems from discrete noise measurements," IEEE Trans. Automat. Contr., vol. 13, 1968.
[3] A. Bensoussan, "Nonlinear filtering theory," in Progress in Automation and Information Systems, Recent Advances in Stochastic Calculus, J. S. Baras and V. Mirelli, Eds. New York: Springer-Verlag.
[4] A. Bensoussan, R. Glowinski, and A. Rascanu, "Approximation of the Zakai equation by the splitting up method," SIAM J. Contr. Optim., vol. 28, 1990.
[5] Y. Bar-Shalom and X. R. Li, Estimation and Tracking: Principles, Techniques, and Software. New York: Artech.
[6] J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Englewood Cliffs, NJ: Prentice-Hall.
[7] C. P. Fang, "New algorithms of Gaussian assumed density filter and a convergence result," IEEE Trans. Automat. Contr.
[8] P. Florchinger and F. Le Gland, "Time-discretization of the Zakai equation for diffusion processes observed in correlated noise," in Proc. 9th Conf. Analysis and Optimization of Systems, A. Bensoussan and J. L. Lions, Eds. New York: Springer-Verlag, 1990 (Lecture Notes in Control and Inform. Sci.).
[9] G. H. Golub, "Some modified matrix eigenvalue problems," SIAM Rev., vol. 15, 1973.
[10] K. Ito, "Approximation of the Zakai equation for nonlinear filtering," SIAM J. Contr. Optim., vol. 34, 1996.
[11] A. H.
Jazwinski, Stochastic Process Filtering Theory. New York: Academic, [12] S. J. Julier J. K. Uhlmann. (1994) A general method for approximating nonlinear transformations of probability distributions. [Online]. Available: www [13] S. J. Julier, J. K. Uhlmann, H. F. Durrant-Whyte, A new approach for filtering nonlinear systems, in Proc. Amer. Contr. Conf., Seattle, WA, June 1995, pp [14] R. E. Kalman, A new approach to linear filtering prediction problems, Trans. ASME, J. Basic Eng., vol. 82D, pp , Mar [15] H. J. Kushner, Approximation to optimal nonlinear filters, IEEE Trans. Automat. Contr., vol. 12, pp , [16], private communication. [17] V. Mazya G. Schmidt, On approximate approximations using gaussian kernels, IMA J. Numer. Anal., vol. 16, pp , [18] M. Piccioni, Convergence of implicit discretization schemes for linear differential equations eith application to filtering, in Stochastic Partial Differential Equations Application, G. Da Prato L. Tabbaro, Eds. Trento, 1985, vol (Lecture Note in Mathematics). [19] B. L. Rozovskii, Stochastic Evolution Systems, Linear Theory Application to Nonlinear Filtering. Norwell, MA: Kluwer, [20] H. W. Sorenson D. L. Alspace, Recursive Bayesian estimation using Gaussian sums, Automatica, vol. 7, pp , Kazufumi Ito received the Ph.D. degree in system science mathematics from Washington University in Subsequently, he held the positions in ICASE, Brown University University of Southern California before joining the Department of Mathematics, North Carolina State University, where he is currently a Professor. His research interests include nonlinear filtering, stochastic process, control analysis of partial differential equations. He co-organized an AMS-IMS-SIAM Joint Summer Research Conference on Identification Control in Systems governed by PDE s, 1992 the workshop on Stochastic Control Nonlinear filtering, at NC State University, Kaigqi Xiong received the M.S. Ph.D. 
degrees in applied mathematics from the Claremont Graduate School in , respectively. In , he was also a Researcher of Electrical Engineering at the University of California, Riverside. From , he was a Visiting Assistant Professor in the Center for Research in Scientific Computation at the North Carolina State University. As a Researcher, he is visiting the Department of Mechanical Aerospace Engineering at the University of California, Irvine. His research interests include nonlinear systems, mathematical control theory, filtering algorithms, parameter estimation, flight control, neural networks applications.
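To make the quadrature idea behind the GHF and the Gaussian sum reweighting concrete, the following is a minimal scalar sketch, not the paper's exact algorithm: expectations under a Gaussian are computed with Gauss–Hermite quadrature, and mixture weights are updated in proportion to each component's predictive likelihood of the observation. Since formulas (5.5) and (5.6) are not reproduced here, the function names (`gh_expectation`, `gaussian_sum_weight_update`) and the scalar setting are illustrative assumptions.

```python
import numpy as np

def gh_expectation(f, mean, var, n=5):
    """Approximate E[f(X)] for X ~ N(mean, var) with n-point Gauss-Hermite
    quadrature (exact for polynomials f up to degree 2n - 1)."""
    x, w = np.polynomial.hermite.hermgauss(n)   # nodes/weights for exp(-x^2)
    pts = mean + np.sqrt(2.0 * var) * x         # change of variables to N(mean, var)
    wts = w / np.sqrt(np.pi)                    # normalized weights sum to 1
    return np.sum(wts * f(pts))

def normal_pdf(y, m, v):
    """Density of N(m, v) evaluated at y."""
    return np.exp(-0.5 * (y - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

def gaussian_sum_weight_update(weights, means, variances, y, h, r, n=5):
    """One illustrative reweighting step for a Gaussian sum filter:
    w_i <- w_i * p(y | component i), normalized, where the predictive
    likelihood p(y | i) = E[ N(y; h(x), r) ] is itself computed by
    Gauss-Hermite quadrature under the i-th predicted component."""
    liks = np.array([gh_expectation(lambda x: normal_pdf(y, h(x), r), m, v, n)
                     for m, v in zip(means, variances)])
    w = np.asarray(weights, dtype=float) * liks
    return w / w.sum()
```

With an observation near the mean of one component, the update shifts weight toward that component, which is the mechanism by which the mixed filter "picks up" the better-performing Gaussian filter in Fig. 15.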


More information

Nonlinear and/or Non-normal Filtering. Jesús Fernández-Villaverde University of Pennsylvania

Nonlinear and/or Non-normal Filtering. Jesús Fernández-Villaverde University of Pennsylvania Nonlinear and/or Non-normal Filtering Jesús Fernández-Villaverde University of Pennsylvania 1 Motivation Nonlinear and/or non-gaussian filtering, smoothing, and forecasting (NLGF) problems are pervasive

More information

New Recursive-Least-Squares Algorithms for Nonlinear Active Control of Sound and Vibration Using Neural Networks

New Recursive-Least-Squares Algorithms for Nonlinear Active Control of Sound and Vibration Using Neural Networks IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 1, JANUARY 2001 135 New Recursive-Least-Squares Algorithms for Nonlinear Active Control of Sound and Vibration Using Neural Networks Martin Bouchard,

More information

Lecture 7: Optimal Smoothing

Lecture 7: Optimal Smoothing Department of Biomedical Engineering and Computational Science Aalto University March 17, 2011 Contents 1 What is Optimal Smoothing? 2 Bayesian Optimal Smoothing Equations 3 Rauch-Tung-Striebel Smoother

More information

SIMULATION OF TURNING RATES IN TRAFFIC SYSTEMS

SIMULATION OF TURNING RATES IN TRAFFIC SYSTEMS SIMULATION OF TURNING RATES IN TRAFFIC SYSTEMS Balázs KULCSÁR István VARGA Department of Transport Automation, Budapest University of Technology and Economics Budapest, H-, Bertalan L. u. 2., Hungary e-mail:

More information

Lecture Note 1: Probability Theory and Statistics

Lecture Note 1: Probability Theory and Statistics Univ. of Michigan - NAME 568/EECS 568/ROB 530 Winter 2018 Lecture Note 1: Probability Theory and Statistics Lecturer: Maani Ghaffari Jadidi Date: April 6, 2018 For this and all future notes, if you would

More information

APPROXIMATE SOLUTION OF A SYSTEM OF LINEAR EQUATIONS WITH RANDOM PERTURBATIONS

APPROXIMATE SOLUTION OF A SYSTEM OF LINEAR EQUATIONS WITH RANDOM PERTURBATIONS APPROXIMATE SOLUTION OF A SYSTEM OF LINEAR EQUATIONS WITH RANDOM PERTURBATIONS P. Date paresh.date@brunel.ac.uk Center for Analysis of Risk and Optimisation Modelling Applications, Department of Mathematical

More information

covariance function, 174 probability structure of; Yule-Walker equations, 174 Moving average process, fluctuations, 5-6, 175 probability structure of

covariance function, 174 probability structure of; Yule-Walker equations, 174 Moving average process, fluctuations, 5-6, 175 probability structure of Index* The Statistical Analysis of Time Series by T. W. Anderson Copyright 1971 John Wiley & Sons, Inc. Aliasing, 387-388 Autoregressive {continued) Amplitude, 4, 94 case of first-order, 174 Associated

More information

Control Theory in Physics and other Fields of Science

Control Theory in Physics and other Fields of Science Michael Schulz Control Theory in Physics and other Fields of Science Concepts, Tools, and Applications With 46 Figures Sprin ger 1 Introduction 1 1.1 The Aim of Control Theory 1 1.2 Dynamic State of Classical

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing

More information

Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Approximations

Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Approximations Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Approximations Simo Särkkä Aalto University, Finland November 18, 2014 Simo Särkkä (Aalto) Lecture 4: Numerical Solution of SDEs November

More information

Design and Stability Analysis of Single-Input Fuzzy Logic Controller

Design and Stability Analysis of Single-Input Fuzzy Logic Controller IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART B: CYBERNETICS, VOL. 30, NO. 2, APRIL 2000 303 Design and Stability Analysis of Single-Input Fuzzy Logic Controller Byung-Jae Choi, Seong-Woo Kwak,

More information

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 58, NO. 10, OCTOBER

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 58, NO. 10, OCTOBER IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 58, NO. 10, OCTOBER 2010 4977 Cubature Kalman Filtering for Continuous-Discrete Systems: Theory and Simulations Ienkaran Arasaratnam, Simon Haykin, Life Fellow,

More information