Nonlinear Measurement Update and Prediction: Prior Density Splitting Mixture Estimator


Andreas Rauh, Kai Briechle, and Uwe D. Hanebeck, Member, IEEE

Abstract — In this paper, the Prior Density Splitting Mixture Estimator (PDSME), a new Gaussian mixture filtering algorithm for nonlinear dynamical systems and nonlinear measurement equations, is introduced. This filter reduces the linearization error which typically arises if nonlinear state and measurement equations are linearized in order to apply linear filtering techniques. For that purpose, the PDSME splits the prior probability density into several components of a Gaussian mixture with smaller covariances. The PDSME is applicable to both prediction and filter steps. A measure for the linearization error similar to the Kullback-Leibler distance is introduced, allowing the user to specify the desired estimation quality. An upper bound for the computational effort can be given by limiting the maximum number of Gaussian mixture components.

I. INTRODUCTION

Common techniques for state and parameter estimation for nonlinear systems with stochastic uncertainties rely on the discretization of the state space using fixed or variable grids and on particle filters, see e.g. [1]-[4]. These techniques are only applicable to low-dimensional systems, since the computational effort increases exponentially with the dimension of the state space. Major problems are the necessity for adaptive grid sizes as well as variable numbers of grid points and particles to ensure successful recursive filtering. Since the estimated probability density function (PDF) is represented by small numbers of particles in domains with small probability, resampling techniques are unavoidable to guarantee sufficient estimation quality for all possible states. Other methods for recursive nonlinear filtering rely on the approximation of PDFs by parameterizable functions. Typically, Gaussian PDFs defined by mean and covariance are assumed. Furthermore, algorithms using exponential PDFs have been
investigated in related work [5]. For nonlinear systems, the assumption of a Gaussian PDF is always suboptimal if multi-modal or asymmetric PDFs which differ significantly from a Gaussian have to be represented. One possibility to calculate the mean and covariance of Gaussian approximations of the posterior PDF is the numerical integration of the exact non-Gaussian posterior PDF, which is defined by the formula of Bayes [6], [7]. Usually, this involves integration of non-Gaussian PDFs over infinite intervals of a multi-dimensional state space.

[Affiliations: Andreas Rauh is with the Chair of Mechatronics, University of Rostock, D-18059 Rostock, Germany, Andreas.Rauh@uni-rostock.de. Kai Briechle is with Dr. Johannes Heidenhain GmbH, D-83301 Traunreut, Germany, Briechle@heidenhain.de. Uwe D. Hanebeck is with the Institute of Computer Science and Engineering, Universität Karlsruhe (TH), D-76128 Karlsruhe, Germany, Uwe.Hanebeck@ieee.org. Parts of this work were performed while all authors were with the Institute of Automatic Control Engineering, Technische Universität München, Munich, Germany.]

To avoid numerical integration, Gaussian filters often rely on one of the following approximations:

(i) It is assumed that the prior PDFs of the state variables, their nonlinear transformations by the measurement or state equations, and the noise densities are jointly Gaussian distributed [8].

(ii) Nonlinear system models are replaced by linearized measurement and state equations in order to apply the so-called Extended Kalman Filter (EKF) and to approximate the posterior density. The well-known Kalman filter [9] corresponds to the exact solution of Bayesian state estimation for linear systems with white Gaussian noise.

In this paper, a new approach to improving the estimation quality of Gaussian mixture filtering algorithms [10], which are based on the linearization of the measurement and state equations, is introduced. For each component of the prior PDF, represented by a Gaussian mixture density, the corresponding posterior is
approximated after linearization of both the measurement and state equations. If a measure quantifying the approximation quality exceeds a user-defined bound, the number of prior density components is increased by a novel splitting procedure. During splitting, selected terms of the Gaussian mixture are subdivided into components with smaller covariances. For other algorithms based on splitting or merging of PDFs, see e.g. [11], [12].

In Section II, a precise problem formulation is given. In Section III, an overview of the proposed filtering algorithm is presented. Some measures for the distance between PDFs are reviewed in Section IV to determine the estimation quality of the proposed algorithm. In Section V, the new Prior Density Splitting Mixture Estimator (PDSME) is presented. The filter step for nonlinear models of measurement processes is derived in the Subsections V-A to V-D. In Subsection V-E, the prediction step of the PDSME is described for nonlinear state equations. In Section VI, selected examples are presented to demonstrate the performance of the PDSME. Finally, in Section VII, the paper is concluded.

II. PROBLEM FORMULATION

In this paper, Bayesian state estimation for multidimensional nonlinear discrete-time systems is considered. These systems consist of nonlinear state equations

x_{k+1} = a_k(x_k) + w_k    (1)

and nonlinear models of measurement processes

ŷ_k = h_k(x_k) + v_k    (2)

with a_k : D → R^n, a_k ∈ C^1(D, R^n), h_k : D → R^z, h_k ∈ C^1(D, R^z), and D ⊆ R^n open.
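The consequence of such a nonlinear measurement model for exact Bayesian estimation can be illustrated numerically. The following sketch (not part of the original paper; the quadratic measurement model and all numerical values are illustrative, chosen to match the scalar example of Section VI) evaluates the exact measurement update on a dense grid and shows that the posterior becomes bimodal, i.e., clearly non-Gaussian:

```python
import numpy as np

# Exact Bayes measurement update on a grid for y = h(x) + v with h(x) = x^2
# (scalar sketch; all numerical values are illustrative).
x = np.linspace(-4.0, 4.0, 4001)
dx = x[1] - x[0]

def gauss(u, mu, sigma):
    return np.exp(-0.5 * ((u - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

prior = gauss(x, 0.0, 1.0)                  # Gaussian prior density
likelihood = gauss(0.75 - x**2, 0.0, 0.5)   # f_v(y - h(x)) for y = 0.75
posterior = prior * likelihood              # numerator of the Bayes formula
posterior /= (posterior * dx).sum()         # normalization integral

# The posterior has two modes near x = +/- sqrt(y), so a single Gaussian
# (one EKF update) cannot represent it well.
mean = (x * posterior * dx).sum()
print(round(mean, 3))   # close to 0 by symmetry
```

This is exactly the situation the PDSME addresses: a single linearization of h(x) around the prior mean misses both modes, whereas a mixture of components with smaller covariances can follow them.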

Throughout this paper, the system noise w_k and the measurement noise v_k are normally distributed with f_{w,k}(w_k) = N(µ_{w,k}, C_{w,k}) and f_{v,k}(v_k) = N(µ_{v,k}, C_{v,k}), resp., where N(µ, C) denotes a Gaussian PDF with mean µ and covariance C.

The estimation process can be subdivided into filter and prediction steps. In the filter step, the state estimate is updated according to the formula of Bayes

f_k^e(x_k | ŷ_k) = f_k^p(x_k) f_{v,k}(ŷ_k − h_k(x_k)) / ∫ f_k^p(x_k) f_{v,k}(ŷ_k − h_k(x_k)) dx_k    (3)

by considering the measured data ŷ_k [6], [7]. Prior knowledge about uncertainties of the state vector x_k is described by the density f_k^p(x_k). Equation (3) provides the exact solution for the posterior PDF f_k^e(x_k | ŷ_k).

According to formula (3), the exact posterior PDF f_k^e(x_k | ŷ_k) does not remain Gaussian for nonlinear measurement equations h_k(x_k), even if both f_k^p and f_{v,k} are Gaussian. The analytic expression for the posterior PDF becomes more and more complicated with each additional measured value used for the recursive update of the state estimate. Furthermore, the theoretically exact solution of the prediction step, handling the dynamics of nonlinear state equations, involves the evaluation of multi-dimensional convolution integrals

f_{k+1}^p(x_{k+1}) = ∫ f_k^e(x_k) f_{w,k}(x_{k+1} − a_k(x_k)) dx_k.    (4)

In general, the predicted PDF f_{k+1}^p(x_{k+1}) can only be computed numerically. Analytic solutions only exist for a few special cases such as Gaussian or Gaussian mixture densities f_k^e(x_k) with linear functions a_k(x_k). To avoid numerical evaluation of multi-dimensional convolution integrals, efficient methods for the approximation of the PDFs (3) and (4) are derived in the following. Although only the case of Gaussian system and measurement noise is considered, the generalization to non-Gaussian noise described by Gaussian mixture representations is straightforward, yielding a weighted superposition of the results obtained for Gaussian noise.

To describe the posterior PDF f_k^e(x_k | ŷ_k), approximations using Gaussian mixture densities

f̃_k^e(x_k | ŷ_k) = Σ_{j=1}^{L_k} f̃_k^{e,j}(x_k)  with
f̃_k^{e,j}(x_k) = ω_k^{e,j} / sqrt((2π)^n |C_k^{e,j}|) · exp( −(1/2) (x_k − µ_k^{e,j})^T (C_k^{e,j})^{−1} (x_k − µ_k^{e,j}) )    (5)

with a variable number L_k of components are
determined. Analogously, the predicted PDF f̃_{k+1}^p(x_{k+1}) is approximated by a Gaussian mixture defined by the weighting factors ω_{k+1}^{p,j}, the means µ_{k+1}^{p,j}, and the covariances C_{k+1}^{p,j}.

III. THE PROPOSED FILTERING ALGORITHM: PDSME

In this Section, an overview of the proposed PDSME is given. For both the filter and the prediction step, three components of the algorithm are defined. These components are a splitting procedure for the prior density, the calculation of a filter step for Gaussian mixtures using a bank of EKFs, and the merging of posterior Gaussian mixture components. To simplify the introduction of these components, only the filter step (upper part of Fig. 1) is described in this Section. The prediction step (lower part of Fig. 1) consists of the same three components.

[Fig. 1. Overview of the PDSME algorithm: linearized filter step (upper part) and linearized prediction step (lower part). Each part consists of the calculation of the linearization error with splitting of the prior density function, a bank of EKFs processing the individual mixture components (ω^{p,j}, µ^{p,j}, C^{p,j} → ω^{e,j}, µ^{e,j}, C^{e,j}), and a merging step reducing the number of Gaussian mixture components; the estimated value is output as mean and covariance, with a unit delay closing the recursive loop.]

First, for each component of the prior PDF, described by a Gaussian mixture with L_k components, a measure quantifying the linearization error is computed. This measure compares the approximated posterior PDF to the exact result of the estimation step. The approximation is calculated after replacement of the nonlinear measurement equation h_k(x_k) by its linearization h̃_k(x_k), evaluated at the means µ_k^{p,j}, j = 1, ..., L_k, of the prior PDF. The measure for the linearization error, which is similar to the Kullback-Leibler distance [13], is minimized by splitting components of the prior PDF to achieve a user-defined estimation quality. Second, the filter step is evaluated for each Gaussian mixture component using a bank of EKFs. This leads to a Gaussian mixture representation of the density after the measurement update. For nonlinear measurement models, the number of mixture components is usually increased compared to the prior PDF. Hence, a merging step is introduced as the third component of the PDSME to reduce redundancy in the density representation as well as the number of mixture components. The merging step is the prerequisite for using the PDSME recursively: it prevents an unlimited growth of the number of Gaussian mixture components. An upper bound for the computational effort can be specified by limiting the maximum number of components approximating the posterior PDFs. For the purpose of recursive state estimation, this filter step is followed by a sequence of further filter and prediction steps, resp. As shown in the lower part of Fig. 1, the prediction step consists of the same components as the filter step. In Section V, further details on the calculation of both the filter and the prediction step of the PDSME are given.

IV. DISTANCE BETWEEN PROBABILITY DENSITIES

Calculating a measure for the distance between the exact PDF f_1(x) and its Gaussian approximation f_2(x) is one of the main aspects of the PDSME. This section gives a brief summary of some measures similar to the Kullback-Leibler distance [13] that can be used for this purpose. The Kullback-Leibler distance between two PDFs f_1(x) and f_2(x) is defined by

D_{KL}^{12} := ∫ f_1(x) ln( f_1(x) / f_2(x) ) dx    (6)

or, alternatively, by D_{KL}^{21}. Obviously, the Kullback-Leibler distance is not symmetric, i.e., D_{KL}^{12} ≠ D_{KL}^{21}, because this distance measure depends upon the support of the two PDFs that are compared. Symmetry can e.g. be achieved by the arithmetic average

J(f_1, f_2) := (1/2) ( D_{KL}^{12} + D_{KL}^{21} )    (7)

of D_{KL}^{12} and D_{KL}^{21}, see Jeffreys' distance in [14], or by the resistor-average
distance [15]

1 / R(f_1, f_2) := 1 / D_{KL}^{12} + 1 / D_{KL}^{21}.    (8)

For identically normalized PDFs f_1(x) and f_2(x), the Kullback-Leibler distance is always non-negative, i.e., D_{KL}^{12} ≥ 0 and D_{KL}^{21} ≥ 0 hold. Then, D_{KL}^{12} = D_{KL}^{21} = 0 also only holds for f_1(x) = f_2(x). In this paper, identical normalization of the exact and approximated posterior mixture components is not guaranteed. Thus, alternative non-negative definitions corresponding to cost functions to be minimized are required which provide information about the components of the prior PDF which are mapped worst onto the posterior. The definition (6), and therefore also (7) and (8), involve the integration of the exact non-Gaussian posterior PDF f_1(x) defined by the formula of Bayes. Since these integrals can only be evaluated analytically for a few special cases, and since symmetry is of minor importance in this application, the definition

D_i := ∫ f_2(x) ln^{2i}( f_1(x) / f_2(x) ) dx    (9)

with i ∈ N, obtained by modification of (6), is introduced. It allows detection of the mixture component with the largest deviation between f_1(x) and f_2(x). In the following, only the case i = 1 is considered. As shown in Subsection VI-A, the value of this integral can be calculated analytically for polynomial and trigonometric measurement equations involving sine and cosine as a linear combination of the moments of the Gaussian density f_2(x).

V. PRIOR DENSITY SPLITTING MIXTURE ESTIMATOR

In the Subsections V-A to V-D, the filter step of the PDSME is described. Then, in Subsection V-E, the prediction step of the PDSME is presented.

A. Linearization Error in the Filter Step

At the beginning of the filter step, the linearization error D_1(f_k^{e,j} ‖ f̃_k^{e,j}) according to (9) is calculated for each component f_k^{p,j}(x_k), j = 1, ..., L_k, of the prior density f_k^p(x_k). The Gaussian mixture component

j_max := argmax_{j = 1,...,L_k} { D_1(f_k^{e,j} ‖ f̃_k^{e,j}) }    (10)

is identified and split into R components with smaller covariances to reduce the linearization error. Splitting is applied as long as the total linearization error

D_{1,sum} := Σ_{j=1}^{L_k} D_1(f_k^{e,j} ‖ f̃_k^{e,j})    (11)

and the maximum linearization error D_{1,max} of the Gaussian mixture with
the increased number of L_k + R components exceed the bounds ε_1 and ε_2 representing the approximation quality specified by the user.

B. Splitting of the Prior Density Function

To reduce the on-line computational effort, the splitting library required for the replacement of f_k^{p,j_max}(x_k) by R components with smaller covariances has been optimized off-line, such that a standard normal distribution of a scalar variable can be replaced by a Gaussian mixture approximation with R components. This splitting library is defined by the weighting factors ω_{R,ζ}, means µ_{R,ζ}, and standard deviations σ_{R,ζ} for all ζ = 1, ..., R.

Example V.1: For R = 4, a set of parameters of a splitting library is given in Tab. I. In Section VI, these parameters are used to demonstrate the performance of the PDSME.
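The idea of such a library can be sketched numerically. The snippet below replaces a standard normal density by an R = 4 component mixture; the parameter values are re-typeset from Table I, whose decimal digits are partly corrupted in this transcription, so they should be treated as approximate and illustrative rather than authoritative:

```python
import numpy as np

# Illustrative scalar splitting library for R = 4 components. The values are
# reconstructed from Table I (decimal points corrupted in the transcription),
# so treat them as approximate.
w_R  = np.array([0.093, 0.407, 0.407, 0.093])    # weights omega_{R,zeta}
mu_R = np.array([-1.47, -0.447, 0.447, 1.47])    # means mu_{R,zeta}
s_R  = np.full(4, 0.675)                         # std. deviations sigma_{R,zeta}

def split_standard_normal(x):
    """Mixture density replacing the standard normal N(0, 1)."""
    comps = w_R * np.exp(-0.5 * ((x[:, None] - mu_R) / s_R) ** 2) / (
        np.sqrt(2.0 * np.pi) * s_R)
    return comps.sum(axis=1)

# The library is symmetric and approximately moment-preserving:
mix_mean = float(np.sum(w_R * mu_R))               # exactly 0 by symmetry
mix_var = float(np.sum(w_R * (mu_R**2 + s_R**2)))  # close to 1
print(mix_mean, round(mix_var, 3))
```

Each original mixture component is then mapped onto these R components via the Cholesky-based transformation described below, and splitting is applied only to the worst component j_max.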

TABLE I
SPLITTING LIBRARY WITH R = 4 MIXTURE COMPONENTS

ζ        |    1   |    2   |   3   |   4
ω_{R,ζ}  |  0.093 |  0.407 | 0.407 | 0.093
µ_{R,ζ}  | -1.47  | -0.447 | 0.447 | 1.47
σ_{R,ζ}  |  0.675 |  0.675 | 0.675 | 0.675

[Fig. 2. Reduction of the linearization error by prior density splitting. Top: nonlinear measurement equation h(x) and its linearizations h̃(x) for L = 1 and L = 4. Middle: prior density function with splitting of the prior density into 4 mixture components. Bottom: linearization errors D for L = 1 and for L = 4.]

Example V.2: In Fig. 2, the key idea of the PDSME, the reduction of the linearization error by splitting the prior PDF into a Gaussian mixture with smaller covariances, is shown. First, a Gaussian PDF with µ^p = 0 and σ^p = 1 is replaced by R = 4 Gaussian mixture components with the parameters from Tab. I. Second, the linearization of the measurement equation ŷ = x² + v at the means µ^{p,j}, j = 1, ..., L, is depicted for both representations of the PDF, i.e., for L = 1 and L = 4. Third, the corresponding linearization errors are shown for each component of the PDF with µ_v = 0, σ_v = 0.5, and ŷ = 0.75. In this example, the maximum linearization error D_{1,max} is reduced from 0.696 to 0.15. Furthermore, the total error D_{1,sum} is reduced from 0.696 to 0.35 by splitting of the prior PDF into 4 components.

For a non-axis-parallel Gaussian mixture component, characterized by ω_k^{p,j_max}, µ_k^{p,j_max}, and C_k^{p,j_max}, splitting is performed according to

µ_k^{p,α_1,...,α_n} = µ_k^{p,j_max} + P_k^{p,j_max} [µ_{R,α_1}, ..., µ_{R,α_n}]^T,    (12)

ω_k^{p,α_1,...,α_n} = ω_k^{p,j_max} · Π_{i=1}^n ω_{R,α_i},    (13)

C_k^{p,α_1,...,α_n} = P_k^{p,j_max} · diag(σ_{R,α_1}², ..., σ_{R,α_n}²) · (P_k^{p,j_max})^T,    (14)

where P_k^{p,j_max} denotes the Cholesky decomposition of C_k^{p,j_max}. The original component f_k^{p,j_max}(x_k) of the prior density is replaced by R^n components denoted by the indices α_i = 1, ..., R with i = 1, ..., n. The increase in the number of mixture components can be limited if splitting is restricted to selected components of the state vector.

C. Calculation of the Filter Step

In the filter step (upper part of Fig. 1), the measurement equation h_k(x_k) is replaced by its linearization

h_k(x_k) ≈ h̃_k(x_k) := H̃_k^j + H_k^j x_k    (15)

at
the means µ_k^{p,j}, j = 1, ..., L_k, of all components of the prior density f_k^p(x_k), with

H_k^j := ∂h_k(x_k)/∂x_k evaluated at x_k = µ_k^{p,j}    (16)

and

H̃_k^j := h_k(µ_k^{p,j}) − H_k^j µ_k^{p,j}.    (17)

Using a bank of EKFs, the approximation N(µ_k^{e,j}, C_k^{e,j}) of each component of the posterior PDF is given by the mean

µ_k^{e,j} = µ_k^{p,j} + K_k^j ( ŷ_k − H̃_k^j − H_k^j µ_k^{p,j} − µ_{v,k} )    (18)

and the covariance

C_k^{e,j} = C_k^{p,j} − K_k^j H_k^j C_k^{p,j}    (19)

with

K_k^j := C_k^{p,j} (H_k^j)^T [ H_k^j C_k^{p,j} (H_k^j)^T + C_{v,k} ]^{−1}.    (20)

The result of the filter step is therefore a Gaussian mixture

f̃_k^e(x_k) = Σ_{j=1}^{L_k} ω_k^{e,j} N(µ_k^{e,j}, C_k^{e,j}).    (21)
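The per-component measurement update performed by the bank of EKFs is an ordinary extended Kalman filter step. A minimal sketch for one mixture component (illustrative code, not from the paper; function and variable names are my own, and the scalar quadratic model mirrors Section VI):

```python
import numpy as np

# One EKF measurement update for a single Gaussian mixture component,
# following the linearized update of Subsection V-C; names are illustrative.
def ekf_component_update(mu_p, C_p, h, H_jac, y, mu_v, C_v):
    """Update the prior component N(mu_p, C_p) with measurement y = h(x) + v."""
    H = H_jac(mu_p)                       # Jacobian at the component mean
    H_tilde = h(mu_p) - H @ mu_p          # affine offset of the linearization
    S = H @ C_p @ H.T + C_v               # innovation covariance
    K = C_p @ H.T @ np.linalg.inv(S)      # Kalman gain
    mu_e = mu_p + K @ (y - H_tilde - H @ mu_p - mu_v)   # updated mean
    C_e = C_p - K @ H @ C_p               # updated covariance
    return mu_e, C_e

# Scalar quadratic measurement y = x^2 + v (values illustrative).
h = lambda x: np.array([x[0] ** 2])
H_jac = lambda x: np.array([[2.0 * x[0]]])
mu_e, C_e = ekf_component_update(
    mu_p=np.array([0.5]), C_p=np.array([[1.0]]),
    h=h, H_jac=H_jac,
    y=np.array([0.75]), mu_v=np.array([0.0]), C_v=np.array([[0.25]]),
)
print(mu_e, C_e)  # the component mean moves toward the measurement,
                  # the component covariance shrinks
```

In the PDSME, this update is applied to every component of the (possibly split) prior mixture; the posterior weights follow from the innovation likelihood of each component.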

The weighting factors are given by

ω_k^{e,j} = ω_k^{p,j} · sqrt( |C_k^{e,j}| / |C_k^{p,j}| )
            · exp( −(1/2) (µ_k^{e,j} − µ_k^{p,j})^T (C_k^{p,j})^{−1} (µ_k^{e,j} − µ_k^{p,j}) )
            · 1 / sqrt( (2π)^z |C_{v,k}| )
            · exp( −(1/2) (ŷ_k − H̃_k^j − H_k^j µ_k^{e,j} − µ_{v,k})^T C_{v,k}^{−1} (ŷ_k − H̃_k^j − H_k^j µ_k^{e,j} − µ_{v,k}) ).    (22)

D. Merging of Posterior Mixture Components

In the merging step, groups of two or more overlapping Gaussian mixture components of the posterior PDF are replaced by a single Gaussian component if the additional error introduced by this merging step is negligible. For that purpose, all possible candidates for replacement are determined: After interpreting the components f̃_k^{e,j}(x_k), j = 1, ..., L_k, of the Gaussian mixture f̃_k^e(x_k) as the vertices of a complete graph G, the Mahalanobis distance [16]

M_{i,j} = (µ_k^{e,i} − µ_k^{e,j})^T ( C_k^{e,i} + C_k^{e,j} )^{−1} (µ_k^{e,i} − µ_k^{e,j})    (23)

between two different Gaussian mixture components f̃_k^{e,i}(x_k) and f̃_k^{e,j}(x_k), i ≠ j, is assigned to the corresponding edge of G. All edges with M_{i,j} > ε_M are deleted from G. The Gaussian mixture components belonging to each maximal connected subgraph C_ξ of the remaining graph G are replaced by a Gaussian approximation f̃_k^{e,ξ}(x_k) with the same mean and covariance [17], [18]. To find out whether these replacements are acceptable, the integral quadratic distance

D_{quad,ξ} = ∫ ( Σ_{j ∈ C_ξ} f̃_k^{e,j}(x_k) − f̃_k^{e,ξ}(x_k) )² dx_k    (24)

is calculated for each subgraph C_ξ. This distance measure is preferred over the Kullback-Leibler-like distances discussed in Section IV because the integrand of (24) can be rewritten analytically as a Gaussian mixture if the approximation error between a Gaussian and a Gaussian mixture density is calculated. Hence, this integral can always be solved analytically. In contrast to (24), all Kullback-Leibler-like distances contain the logarithm of a Gaussian; in the merging step, they would therefore always have to be integrated numerically.

E. Calculation of the Prediction Step

Up to now, only the filter step of the PDSME, consisting of the calculation of a measure for the linearization error, the splitting of the prior density, the calculation of the measurement update, and the merging step, has been discussed. In this subsection, the
prediction step of the PDSME (see the lower part of Fig. 1) is described. Analogously to the filter step, the state equation is approximated by a linearization

a_k(x_k) ≈ ã_k(x_k) := Ã_k^j + A_k^j x_k    (25)

at the means µ_k^{e,j}. The linearization error corresponding to (9) is

D_1( f_{k+1}^{p,j} ‖ f̃_{k+1}^{p,j} ) = ∫ f̃_{k+1}^{p,j}(x_{k+1}) ln²( f_{k+1}^{p,j}(x_{k+1}) / f̃_{k+1}^{p,j}(x_{k+1}) ) dx_{k+1}    (26)

with the approximated component

f̃_{k+1}^{p,j}(x_{k+1}) = ∫ f̃_k^{e,j}(x_k) f_{w,k}( x_{k+1} − ã_k(x_k) ) dx_k =: ∫ f̃_{k,k+1}^{p,j}(x_k, x_{k+1}) dx_k    (27)

of the predicted PDF and the exact component

f_{k+1}^{p,j}(x_{k+1}) = ∫ f̃_k^{e,j}(x_k) f_{w,k}( x_{k+1} − a_k(x_k) ) dx_k =: ∫ f_{k,k+1}^{p,j}(x_k, x_{k+1}) dx_k.    (28)

Due to the convolution integral, the exact predicted density can only be evaluated numerically for nonlinear state equations a_k(x_k). Therefore, the linearization error

D_1 = ∫∫ f̃_{k,k+1}^{p,j}(x_k, x_{k+1}) ln²( f_{k,k+1}^{p,j}(x_k, x_{k+1}) / f̃_{k,k+1}^{p,j}(x_k, x_{k+1}) ) dx_{k+1} dx_k    (29)

in the prediction step is defined for the approximation of the exact joint PDF component f_{k,k+1}^{p,j}(x_k, x_{k+1}) in (28) by the Gaussian mixture component f̃_{k,k+1}^{p,j}(x_k, x_{k+1}) in (27). After this simplification, there is almost no difference to the filter step, except for the fact that the joint density depends upon the state variables X_k = [x_k^T, x_{k+1}^T]^T with dim(X_k) = 2n. Again, the component j_max of f̃_k^e(x_k) with the largest linearization error D_{1,max} is replaced by R^n Gaussian mixture components with smaller covariances. The parameters of each component of the predicted density are

ω_{k+1}^{p,j} = ω_k^{e,j},    (30)

µ_{k+1}^{p,j} = Ã_k^j + A_k^j µ_k^{e,j} + µ_{w,k},    (31)

C_{k+1}^{p,j} = A_k^j C_k^{e,j} (A_k^j)^T + C_{w,k}.    (32)

Afterwards, a merging step as in Subsection V-D is applied.

VI. SIMULATION RESULTS

A. Scalar Filter Step

The filter step of the PDSME, splitting the prior PDF to reduce the linearization error (9), is demonstrated for the scalar quadratic measurement equation

ŷ = x² + v    (33)

with Gaussian measurement noise v. A Gaussian prior density with µ^p = 0 and σ^p = 1 is assumed. In Fig. 3, the result of the filter step is shown for a fixed measured value ŷ = 0.75 and Gaussian measurement noise with µ_v = 0 and σ_v = 0.5. In the diagrams, the comparison of the approximated filter step to the exact posterior density is shown, after splitting the prior density into 4 mixture components. For L = 1, the result of the PDSME is equivalent to a standard EKF.

[Fig. 3. Comparison of the exact posterior density (dashed lines) to a Gaussian mixture approximation (solid lines) in the filter step, for L = 1 and L = 4 mixture components.]

As mentioned in Section IV, for polynomial measurement equations such as (33), the linearization error

D_1(f_k^{e,j} ‖ f̃_k^{e,j}) = ∫ f̃_k^{e,j}(x) ln²( f_v(ŷ − x²) / f_v(ŷ − h̃_k(x)) ) dx
                          = ∫ f̃_k^{e,j}(x) ( ((ŷ − h̃_k(x))² − (ŷ − x²)²) / (2σ_v²) )² dx

(up to the normalization of the posterior components) is expressed in terms of moments of the Gaussian PDF f̃_k^{e,j}(x).

B. Scalar Prediction Step

In the prediction step, the nonlinear transformation

x_{k+1} = x_k² + w_k    (34)

of a random variable x_k with additive Gaussian system noise w_k is considered. In this example, a Gaussian PDF f_k^e(x_k) with µ_x^e = 0.4 and σ_x^e = 0.8 has been assumed. The noise w_k is zero-mean with standard deviation σ_w. In Fig. 4, the result of the prediction step of the PDSME shows that already for L = 5 the numerically calculated exact predicted PDF and its Gaussian mixture approximation are almost identical.

[Fig. 4. Comparison of the exact predicted density (dashed lines) to approximations with different numbers of mixture components (solid lines).]

VII. CONCLUSIONS

In this paper, an efficient Gaussian mixture filtering algorithm has been proposed. This algorithm provides a solution for both the prediction and the filter step. In contrast to other Gaussian mixture filtering algorithms, in which nonlinear measurement and state equations are replaced by linearizations at the means of the prior density components, the PDSME is characterized by systematically splitting the prior density function using a pre-calculated, optimized library of replacement densities. The components to be split into terms with smaller covariances are selected by a Kullback-Leibler-like measure for the linearization error. Re-approximation of the prior density by a Gaussian mixture with an increased number of components with smaller covariances reduces the
linearization error and therefore improves the approximation quality, which can be specified by the user. By limiting the maximum number of mixture components, the maximum computational effort can also be specified. If the number of mixture components used by this algorithm is not limited, the result of the PDSME converges to the exact Bayesian state estimate. For finite numbers of mixture components, the quality of the PDSME is always superior to the Extended Kalman Filter, which only uses a single Gaussian component for the approximation of the posterior density.

REFERENCES

[1] A. Doucet, S. Godsill, and C. Andrieu, "On Sequential Monte Carlo Sampling Methods for Bayesian Filtering," Statistics and Computing, vol. 10, pp. 197-208, 2000.
[2] H. Tanizaki, "Nonlinear and Non-Gaussian State Space Modeling Using Sampling Techniques," Annals of the Institute of Statistical Mathematics, vol. 53, no. 1, pp. 63-81, 2001.
[3] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174-188, February 2002.
[4] E. Bølviken, P. J. Acklam, N. Christophersen, and J.-M. Størdal, "Monte Carlo Filters for Non-linear State Estimation," Automatica, vol. 37, no. 2, pp. 177-183, 2001.
[5] A. Rauh and U. D. Hanebeck, "Nonlinear Moment-Based Prediction Step for Exponential Densities," in Proc. of the 44th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC), Seville, Spain, 2005.
[6] A. Papoulis, Probability, Random Variables, and Stochastic Processes. Tokyo: McGraw-Hill, 1965.
[7] F. C. Schweppe, Uncertain Dynamic Systems. New York: Prentice-Hall, 1973.
[8] K. Ito and K. Xiong, "Gaussian Filters for Nonlinear Filtering Problems," IEEE Transactions on Automatic Control, vol. 45, no. 5, pp. 910-927, 2000.
[9] R. E. Kalman, "A New Approach to Linear Filtering and Prediction Problems," Trans. ASME, J. Basic Eng., vol. 82, Series D, pp. 35-45, 1960.
[10] D. Alspach and H. Sorenson, "Nonlinear Bayesian Estimation Using Gaussian Sum Approximations," IEEE
Transactions on Automatic Control, vol. 17, pp. 439-448, 1972.
[11] N. Ueda, R. Nakano, Z. Ghahramani, and G. E. Hinton, "SMEM Algorithm for Mixture Models," Advances in Neural Information Processing Systems, vol. 11 (NIPS), pp. 599-605, 1999.
[12] ——, "SMEM Algorithm for Mixture Models," Neural Computation, vol. 12, no. 9, pp. 2109-2128, 2000.
[13] S. Kullback and R. A. Leibler, "On Information and Sufficiency," Annals of Mathematical Statistics, vol. 22, pp. 79-86, 1951.
[14] H. Jeffreys, "An Invariant Form of the Prior Probability in Estimation Problems," Proc. Roy. Soc. A, vol. 186, pp. 453-461, 1946.
[15] D. H. Johnson and S. Sinanović, "Symmetrizing the Kullback-Leibler Distance," Rice University, Tech. Rep., 2001.
[16] P. C. Mahalanobis, "On the Generalized Distance in Statistics," Proc. Nat. Inst. of Sciences of India, vol. 2, no. 1, 1936.
[17] M. West, "Mixture Models, Monte Carlo, Bayesian Updating and Dynamic Models," Computing Science and Statistics, vol. 24, pp. 325-333, 1993.
[18] M. West and P. J. Harrison, Bayesian Forecasting and Dynamic Models. New York: Springer-Verlag, 1997.