A Rao-Blackwellised Unscented Kalman Filter

Mark Briers, QinetiQ Ltd, Malvern Technology Centre, Malvern, UK. m.briers@signal.qinetiq.com
Simon R. Maskell, QinetiQ Ltd, Malvern Technology Centre, Malvern, UK. s.maskell@signal.qinetiq.com
Robert Wright, QinetiQ Ltd, Malvern Technology Centre, Malvern, UK. r.wright@signal.qinetiq.com

Abstract: The Unscented Kalman Filter offers significant improvements in the estimation of non-linear discrete-time models in comparison to the Extended Kalman Filter [12]. In this paper we use a technique introduced by Casella and Robert [2], known as Rao-Blackwellisation, to calculate the tractable integrations that are found in the Unscented Kalman Filter. By considering a common tracking problem, we show that this leads to a reduction in the quasi-Monte Carlo variance and a decrease in computational complexity.

Keywords: Tracking, filtering, estimation, unscented, Rao-Blackwellisation.

1 Introduction and Motivation

The Unscented Kalman Filter (UKF) has been shown to offer significant improvements over the Extended Kalman Filter (EKF) [12] in the estimation of non-linear discrete-time models in moderately non-linear environments. The UKF deterministically samples sigma points and propagates these points through the true models. In this paper we propose a method that reduces the quasi-Monte Carlo variance by resorting to this sampling scheme only for the intractable integrals that are present. Approaches have previously been proposed to reduce the number of sigma points that need to be sampled in a given dimensional space, and so reduce computation and storage expense (see [5], for example). This paper complements that research primarily by reducing the dimensionality of the state vector to be sampled; we thereby reduce the quasi-Monte Carlo variance and increase the efficiency of the algorithm. The outline of this paper is as follows.
In the following section, we introduce the general tracking problem formulated in a Rao-Blackwellised Unscented Kalman Filter (RB-UKF) framework. In Section 3, we show that exploiting certain structures in the models allows us to reduce the amount of approximation necessary. In Section 4, we motivate the ideas through a simple example and compare computational time and the variances of the state estimates. Section 5 concludes the discussion.

2 Sequential Bayesian inference and the UKF

2.1 Introduction

It is convenient to define k to index time and x_k as the state of the system at time k. The measurement received at time k is y_k, and the set of measurements received up to time k is then y_{1:k}. We consider three kinds of model for the evolution of the state x_k:

        { F_k x_{k-1} + ε_{k-1}      Linear Gaussian
  x_k = { f_1(x_{k-1}) + ε_{k-1}     Non-Linear Gaussian        (1)
        { f_2(x_{k-1}, ε_{k-1})      Non-Linear Non-Gaussian

where ε_{k-1} ~ N(ε_{k-1}; 0, Q_{k-1}), and N(x; m, P) is a Gaussian density with argument x, mean m and covariance P. We also consider three kinds of measurement model describing the relationship between y_k and x_k:

        { H_k x_k + v_k              Linear Gaussian
  y_k = { h_1(x_k) + v_k             Non-Linear Gaussian        (2)
        { h_2(x_k, v_k)              Non-Linear Non-Gaussian

where v_k ~ N(v_k; 0, R_k) is Gaussian noise that is independent of ε_{k-1}.

2.2 The Kalman filter

Should both models be linear Gaussian, the Kalman filter is an optimal recursive technique that exactly calculates the parameters of p(x_k | y_{1:k}) [4].

The Kalman filter algorithm can then be viewed as follows:

  p(x_{k-1} | y_{1:k-1}) = N(x_{k-1}; M, C)                     (3)
  p(x_k, y_k | y_{1:k-1}) = N([x_k; y_k]; M', C')               (4)
  p(x_k | y_{1:k}) = N(x_k; m_{k|k}, P_{k|k})                   (5)

with:

  M  = m_{k-1|k-1}                                              (6)
  C  = P_{k-1|k-1}                                              (7)
  M' = [μ_x; μ_y]                                               (8)
  C' = [P_xx, P_xy; P_xy^T, P_yy]                               (9)
  m_{k|k} = μ_x + P_xy P_yy^{-1} (y_k − μ_y)                    (10)
  P_{k|k} = P_xx − P_xy P_yy^{-1} P_xy^T                        (11)

where:

  μ_x  = F_k m_{k-1|k-1}                                        (12)
  P_xx = Q_{k-1} + F_k P_{k-1|k-1} F_k^T                        (13)
  μ_y  = H_k μ_x                                                (14)
  P_yy = H_k P_xx H_k^T + R_k                                   (15)
  P_xy = P_xx H_k^T.                                            (16)

In the above equations, the transpose of a matrix M is denoted by M^T. The assumptions of the Kalman filter restrict its use in real situations. Often one approximates (1) and (2) to both be (locally) linear Gaussian and so obtains the Extended Kalman Filter (EKF). As an alternative to the EKF, one can use a sample-based approximation to p(x_{k-1} | y_{1:k-1}) and then a Gaussian approximation to p(x_k, y_k | y_{1:k-1}). In so doing, one never has to approximate the models, but only p(x_k, y_k | y_{1:k-1}).

2.3 The Unscented Kalman filter

The unscented transform has been used to conduct this approximate scheme [7, 12]. The resulting filter, the Unscented Kalman Filter (UKF), considers a set of points that are deterministically selected from the Gaussian approximation to p(x_{k-1} | y_{1:k-1}). These points are all propagated through the true models, and the parameters of p(x_k, y_k | y_{1:k-1}) are then estimated from the transformed samples. The algorithm begins by drawing samples:

  [x_{k-1}^(i); ε_{k-1}^(i); v_k^(i)] ~ N([x_{k-1}; ε_{k-1}; v_k]; M, C)   (17)

where:

  M = [m_{k-1|k-1}; 0; 0]                                       (18)
  C = blockdiag(P_{k-1|k-1}, Q_{k-1}, R_k)                      (19)

with associated weights w^(i). The sampling is chosen according to an appropriate deterministic procedure [11], which is elaborated on in Section 2.4.
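To make the recursion concrete, the prediction and update equations (10)–(16) can be sketched in Python with NumPy. This is an illustrative sketch, not part of the paper; the function name and the toy one-dimensional model at the end are assumptions made purely for demonstration.

```python
import numpy as np

def kalman_step(m_prev, P_prev, y, F, H, Q, R):
    """One Kalman filter iteration following equations (12)-(16) and (10)-(11).

    m_prev, P_prev : posterior mean and covariance at time k-1
    y              : measurement received at time k
    """
    # Prediction: equations (12)-(13)
    mu_x = F @ m_prev
    P_xx = Q + F @ P_prev @ F.T
    # Predicted measurement moments: equations (14)-(16)
    mu_y = H @ mu_x
    P_yy = H @ P_xx @ H.T + R
    P_xy = P_xx @ H.T
    # Update: equations (10)-(11)
    K = P_xy @ np.linalg.inv(P_yy)      # Kalman gain
    m = mu_x + K @ (y - mu_y)
    P = P_xx - K @ P_xy.T
    return m, P

# Toy 1D constant-position model (illustrative values only)
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[1.0]])
m, P = kalman_step(np.array([0.0]), np.array([[1.0]]), np.array([2.0]), F, H, Q, R)
```

Note that (10)–(11) are just the Gaussian conditioning formulae applied to the joint density (4); the RB-UKF algorithms in Section 3 reuse them unchanged, replacing only the way the moments (12)–(16) are computed.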
The update stage is the same as in the Kalman filter, equations (10) and (11), but now with:

  μ_x  ≈ Σ_i w^(i) A^(i)                                                        (20)
  P_xx ≈ Σ_i w^(i) [A^(i) − μ_x][A^(i) − μ_x]^T                                 (21)
  μ_y  ≈ Σ_i w^(i) h_2(A^(i), v_k^(i))                                          (22)
  P_yy ≈ Σ_i w^(i) [h_2(A^(i), v_k^(i)) − μ_y][h_2(A^(i), v_k^(i)) − μ_y]^T     (23)
  P_xy ≈ Σ_i w^(i) [A^(i) − μ_x][h_2(A^(i), v_k^(i)) − μ_y]^T                   (24)

where:

  A^(i) = f_2(x_{k-1}^(i), ε_{k-1}^(i)).                                        (25)

2.4 The choice of sigma points

The examples and notation presented in this paper use the standard sampling and weighting procedures associated with the unscented transform, as presented in [7, 12]. However, this is not the only sampling scheme available; here we summarise several possible ways of choosing sigma points. Julier and Uhlmann [7] originally proposed placing sigma points at the mean, and at the mean plus or minus one standard deviation in each dimension (using 2n + 1 points). This is often referred to as the symmetric sigma point solution. Using the same order of calculations as the Extended Kalman filter, the symmetric sigma point solution matches the first two moments (the mean and covariance). The scaled unscented transform [6] scales the effect of the higher-order moments by giving flexibility as to how far out the sigma points are placed, whilst still matching the mean and covariance. By placing sigma points on the faces of a hypercube, one can achieve a second-order unscented transform. However,

to achieve a fourth-order unscented transform, and therefore match the kurtosis, further sigma points must be added at the corners of the hypercube [8]. Van Zandt [15] suggests that greater robustness can be achieved by choosing points in a higher-dimensional space and projecting them down, the greater number of points yielding improved accuracy. An alternative to this is the so-called minimal skew or simplex sigma point solution: by placing the sigma points on a simplex [9], one can match the mean and covariance whilst minimising the skew. This method only requires n + 1 sigma points (which can be calculated recursively) for an n-dimensional random variable, as opposed to the 2n + 1 used in the symmetric and scaled schemes. Intuition tells us that if more sigma points are used then a more accurate estimate is achievable. However, the position of these sigma points is clearly important: if the same accuracy of estimate can be achieved with fewer but better-placed sigma points, then computational expense is kept to a minimum. The choice of sigma point locations is an active area of research [9, 5], with all methods applicable to both the UKF and the RB-UKF.

3 Rao-Blackwellisation and the UKF

3.1 Overview

As explained earlier, the Kalman filter is optimal when the models are linear and Gaussian. When the Kalman filter assumptions do not hold, the UKF is an appropriate sub-optimal technique that can improve performance over the EKF. When tracking with linear state or measurement evolution models, one can conceptually use the UKF for the non-linear non-Gaussian (hard) portion of the problem and a Kalman filter for the remainder. Wan [13] proposes a method for Rao-Blackwellisation in the presence of additive Gaussian noise. Although using the same concept, the approach presented here differs somewhat in that it only samples once, and so reduces the Monte Carlo variance.
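As an illustration of the symmetric scheme of Julier and Uhlmann [7] discussed above, the following Python/NumPy sketch generates the 2n + 1 sigma points and applies the unscented transform of equations (20)–(21). The κ parameterisation and the Cholesky square root are the standard textbook choices and are assumptions here, not choices made in the paper.

```python
import numpy as np

def symmetric_sigma_points(m, P, kappa=0.0):
    """Generate the 2n+1 symmetric sigma points, with weights chosen so
    that the sample mean and covariance match m and P exactly."""
    n = len(m)
    S = np.linalg.cholesky((n + kappa) * P)   # matrix square root of (n+kappa)P
    pts = [m] + [m + S[:, i] for i in range(n)] + [m - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_transform(pts, w, g):
    """Propagate sigma points through g and recover the mean and
    covariance of the transformed variable, as in equations (20)-(21)."""
    Y = np.array([g(p) for p in pts])
    mu = w @ Y
    d = Y - mu
    return mu, (w[:, None] * d).T @ d
```

For a linear g the transform is exact, which is precisely the property the Rao-Blackwellised algorithms of Section 3 exploit: wherever a model is linear, the corresponding moments are computed analytically instead.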
Table 1 displays the different combinations of models and their associated sampling procedures; references to the appropriate sections of this paper appear in brackets after the state that needs to be sampled in each case. We advocate the use of particle filtering (PF) in scenarios for which the models are entirely non-linear and non-Gaussian [3]. The following sections define the equations used in the prediction and update stages in the Kalman filter framework, thereby replacing equations (12) to (16). We label each section with the definition of the system being considered in Table 1.

3.2 Algorithm 1: x_k = F_k x_{k-1} + ε_{k-1}, y_k = h_1(x_k) + v_k

μ_x and P_xx can be calculated exactly using (12) and (13). Then:

  x_k^(i) ~ N(x_k; μ_x, P_xx)                                                   (26)
  μ_y  ≈ Σ_i w^(i) h_1(x_k^(i))                                                 (27)
  P_yy ≈ Σ_i w^(i) [h_1(x_k^(i)) − μ_y][h_1(x_k^(i)) − μ_y]^T + R_k             (28)
  P_xy ≈ Σ_i w^(i) [x_k^(i) − μ_x][h_1(x_k^(i)) − μ_y]^T                        (29)

3.3 Algorithm 2: x_k = F_k x_{k-1} + ε_{k-1}, y_k = h_2(x_k, v_k)

Again, μ_x and P_xx can be calculated exactly using (12) and (13). One then uses the following scheme:

  [x_k^(i); v_k^(i)] ~ N([x_k; v_k]; [μ_x; 0], blockdiag(P_xx, R_k))            (30)
  μ_y  ≈ Σ_i w^(i) h_2(x_k^(i), v_k^(i))                                        (31)
  P_yy ≈ Σ_i w^(i) [h_2(x_k^(i), v_k^(i)) − μ_y][h_2(x_k^(i), v_k^(i)) − μ_y]^T (32)
  P_xy ≈ Σ_i w^(i) [x_k^(i) − μ_x][h_2(x_k^(i), v_k^(i)) − μ_y]^T               (33)

3.4 Algorithm 3: x_k = f_1(x_{k-1}) + ε_{k-1}, y_k = H_k x_k + v_k

In this case:

  x_{k-1}^(i) ~ N(x_{k-1}; m_{k-1|k-1}, P_{k-1|k-1})                            (34)
  μ_x  ≈ Σ_i w^(i) f_1(x_{k-1}^(i))                                             (35)
  P_xx ≈ Σ_i w^(i) [f_1(x_{k-1}^(i)) − μ_x][f_1(x_{k-1}^(i)) − μ_x]^T + Q_{k-1} (36)

μ_y, P_yy and P_xy can then be calculated exactly using (14) to (16).
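Algorithm 1 (Section 3.2) can be sketched as follows. This is an illustrative Python/NumPy sketch under the assumption of symmetric sigma points; the function and variable names are ours, not the paper's. The prediction (12)–(13) is exact, and sigma points are drawn only in the state space for the measurement moments (27)–(29).

```python
import numpy as np

def rbukf_algorithm1_step(m_prev, P_prev, y, F, Q, h1, R, kappa=0.0):
    """RB-UKF sketch for x_k = F x_{k-1} + eps, y_k = h1(x_k) + v.
    Prediction is exact (eqs 12-13); deterministic sigma points from
    N(mu_x, P_xx) approximate the measurement moments (eqs 26-29)."""
    # Exact prediction, equations (12)-(13)
    mu_x = F @ m_prev
    P_xx = Q + F @ P_prev @ F.T
    # Symmetric sigma points from N(mu_x, P_xx), equation (26)
    n = len(mu_x)
    S = np.linalg.cholesky((n + kappa) * P_xx)
    pts = np.vstack([mu_x, mu_x + S.T, mu_x - S.T])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Approximate measurement moments, equations (27)-(29)
    Y = np.array([h1(p) for p in pts])
    mu_y = w @ Y
    P_yy = (w[:, None] * (Y - mu_y)).T @ (Y - mu_y) + R
    P_xy = (w[:, None] * (pts - mu_x)).T @ (Y - mu_y)
    # Kalman-style update, equations (10)-(11)
    K = P_xy @ np.linalg.inv(P_yy)
    return mu_x + K @ (y - mu_y), P_xx - K @ P_xy.T
```

With a linear h1 this reduces exactly to the Kalman filter, which is a useful sanity check on an implementation.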

Table 1: Sampled variables and section references

                  | F_k x_{k-1} + ε_{k-1} | f(x_{k-1}) + ε_{k-1}        | f(x_{k-1}, ε_{k-1})
  H_k x_k + v_k   | Kalman Filter (2.2)   | x_{k-1} (3.4)               | x_{k-1}, ε_{k-1} (3.6)
  h(x_k) + v_k    | x_k (3.2)             | x_{k-1}, ε_{k-1} (3.5)      | x_{k-1}, ε_{k-1} (3.7)
  h(x_k, v_k)     | x_k, v_k (3.3)        | x_{k-1}, ε_{k-1}, v_k (3.8) | UKF (2.3) / Particle Filter [3]

3.5 Algorithm 4: x_k = f_1(x_{k-1}) + ε_{k-1}, y_k = h_1(x_k) + v_k

The advantage in this case is that some of the integrals can be calculated exactly:

  [x_{k-1}^(i); ε_{k-1}^(i)] ~ N([x_{k-1}; ε_{k-1}]; M, C)                      (37)
  M = [m_{k-1|k-1}; 0]                                                          (38)
  C = blockdiag(P_{k-1|k-1}, Q_{k-1})                                           (39)
  μ_y  ≈ Σ_i w^(i) h_1(B^(i))                                                   (40)
  P_yy ≈ Σ_i w^(i) [h_1(B^(i)) − μ_y][h_1(B^(i)) − μ_y]^T + R_k                 (41)
  P_xy ≈ Σ_i w^(i) [B^(i) − μ_x][h_1(B^(i)) − μ_y]^T                            (42)

where:

  B^(i) = f_1(x_{k-1}^(i)) + ε_{k-1}^(i)                                        (43)

μ_x and P_xx are calculated approximately as in (35) and (36).

3.6 Algorithm 5: x_k = f_2(x_{k-1}, ε_{k-1}), y_k = H_k x_k + v_k

Here the samples are drawn as in (37), and then the (normal UKF) approximations in (20) and (21) are used to calculate μ_x and P_xx. μ_y, P_yy and P_xy can then be calculated exactly using (14) to (16).

3.7 Algorithm 6: x_k = f_2(x_{k-1}, ε_{k-1}), y_k = h_1(x_k) + v_k

Again the samples are drawn as in (37); μ_x and P_xx are calculated approximately using (20) and (21). The remaining parameters are calculated as follows:

  μ_y  ≈ Σ_i w^(i) h_1(D^(i))                                                   (44)
  P_yy ≈ Σ_i w^(i) [h_1(D^(i)) − μ_y][h_1(D^(i)) − μ_y]^T + R_k                 (45)
  P_xy ≈ Σ_i w^(i) [D^(i) − μ_x][h_1(D^(i)) − μ_y]^T                            (46)

where:

  D^(i) = f_2(x_{k-1}^(i), ε_{k-1}^(i))                                         (47)

3.8 Algorithm 7: x_k = f_1(x_{k-1}) + ε_{k-1}, y_k = h_2(x_k, v_k)

Samples are drawn as in (17). μ_x and P_xx are calculated approximately as in (35) and (36). The remaining relations are then:

  μ_y  ≈ Σ_i w^(i) h_2(E^(i), v_k^(i))                                          (48)
  P_yy ≈ Σ_i w^(i) [h_2(E^(i), v_k^(i)) − μ_y][h_2(E^(i), v_k^(i)) − μ_y]^T     (49)
  P_xy ≈ Σ_i w^(i) [E^(i) − μ_x][h_2(E^(i), v_k^(i)) − μ_y]^T                   (50)

where:

  E^(i) = f_1(x_{k-1}^(i)) + ε_{k-1}^(i)                                        (51)

3.9 Comment

It is worth considering how the Rao-Blackwellised Unscented Kalman filter algorithms fit into the wide range of filters readily available in the tracking literature. Clearly the RB-UKF is closely related to the Kalman filter and the traditional Unscented Kalman filter.
The UKF has been likened to the Linear Regression Kalman filter [10], and we believe that the non-linear Gaussian Rao-Blackwellised algorithms are indeed special cases of the Linear Regression Kalman filter. Any of the various methods of choosing sigma points (Section 2.4) can be applied to the RB-UKF in the same way as they would be applied to the UKF.
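As a further illustration, Algorithm 4 (Section 3.5), which samples the augmented vector [x_{k-1}; ε_{k-1}] of equations (37)–(39) while handling the additive measurement noise exactly through the + R_k term, might be sketched as follows. The symmetric sigma point choice and all names here are assumptions for illustration.

```python
import numpy as np

def rbukf_algorithm4_step(m_prev, P_prev, y, f1, Q, h1, R, kappa=0.0):
    """RB-UKF sketch for x_k = f1(x_{k-1}) + eps, y_k = h1(x_k) + v.
    Sigma points cover the augmented vector [x_{k-1}; eps] (eqs 37-39);
    the additive measurement noise is handled exactly via the + R term."""
    nx = len(m_prev)
    # Augmented mean and block-diagonal covariance, equations (38)-(39)
    M = np.concatenate([m_prev, np.zeros(nx)])
    C = np.block([[P_prev, np.zeros((nx, nx))],
                  [np.zeros((nx, nx)), Q]])
    n = 2 * nx
    S = np.linalg.cholesky((n + kappa) * C)
    pts = np.vstack([M, M + S.T, M - S.T])          # equation (37)
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Predicted state moments, equations (35)-(36)
    Fx = np.array([f1(p[:nx]) for p in pts])
    mu_x = w @ Fx
    P_xx = (w[:, None] * (Fx - mu_x)).T @ (Fx - mu_x) + Q
    # Predicted measurement moments via B = f1(x) + eps, equations (40)-(43)
    B = Fx + pts[:, nx:]
    Y = np.array([h1(b) for b in B])
    mu_y = w @ Y
    P_yy = (w[:, None] * (Y - mu_y)).T @ (Y - mu_y) + R
    P_xy = (w[:, None] * (B - mu_x)).T @ (Y - mu_y)
    # Kalman-style update, equations (10)-(11)
    K = P_xy @ np.linalg.inv(P_yy)
    return mu_x + K @ (y - mu_y), P_xx - K @ P_xy.T
```

The other algorithms differ only in which blocks enter the augmented covariance and which moments fall back to the exact expressions (12)–(16).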

4 Example

A single target moves in two dimensions and measurements of range and bearing are generated. The true range of the target varies between approximately 1m and 4m. The standard deviation of the range errors is 1m. Two cases were considered: a low noise scenario, for which the bearing errors had a 0.5° standard deviation, and a high noise case, for which the errors had a standard deviation of 3° (and for which the non-linearity is more pronounced). The target exhibits almost straight-line motion for long periods before performing sharp turns.

[Figure 1: Exemplar converted measurements (low noise and high noise)]

The evolution is modelled using an Interacting Multiple Model algorithm with two (linear Gaussian) constant velocity models with low and high process noises, which respectively model the straight and cornering segments of the trajectory [1].

The Root Mean Squared error (RMS) statistic and the Normalised Estimation Error Squared (NEES) statistic [1] were calculated to compare the filters in terms of their estimates of position; [1] advocates the NEES statistic as a test for the consistency of the covariance matrix with respect to the size of the estimation errors (this statistic is related to the chi-squared test). The RMS and NEES statistics are defined as follows:

  ε_RMS,k  = sqrt( (1/N_r) Σ_{i=1}^{N_r} || x_k^(i) − m_k^(i) ||^2 )                          (52)
  ε_NEES,k = (1/N_r) Σ_{i=1}^{N_r} (x_k^(i) − m_k^(i))^T [P_k^(i)]^{-1} (x_k^(i) − m_k^(i))   (53)

where N_r is the number of Monte Carlo runs, and x_k^(i), m_k^(i) and P_k^(i) are respectively the true state, the estimated state and the associated covariance at time k in run i. In this example, N_r = 100 runs were used for each noise level for each of the Extended Kalman filter, the Unscented Kalman filter and the first Rao-Blackwellised Unscented Kalman filter algorithm.¹

The measurements for one exemplar Monte Carlo run at both noise levels are shown, projected down into the positional space, in Figure 1. Average RMS statistics for the two cases are shown in Figures 2(a) and 2(b); average NEES statistics are shown in Figures 3(a) and 3(b). Note that the lower the RMS errors, and the closer the NEES statistic is to the dimensionality of the measurement space (2 here), the better the performance.

[Figure 2: RMS errors against time: (a) low noise, (b) high noise]

¹ Since the measurements are a non-linear Gaussian function of the state and the state evolution is linear Gaussian, the appropriate algorithm is the first, described in Section 3.2.
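The statistics (52) and (53) can be computed per time step as in the following sketch (illustrative code, not from the paper; the array layout holding the truth, estimates and covariances for all runs at one time step is an assumption).

```python
import numpy as np

def rms_and_nees(x_true, m_est, P_est):
    """Compute the RMS (52) and NEES (53) statistics at one time step.

    x_true, m_est : arrays of shape (N_r, d)   -- truth and estimates per run
    P_est         : array of shape (N_r, d, d) -- estimated covariances per run
    """
    err = x_true - m_est
    rms = np.sqrt(np.mean(np.sum(err ** 2, axis=1)))       # equation (52)
    nees = np.mean([e @ np.linalg.solve(P, e)              # equation (53)
                    for e, P in zip(err, P_est)])
    return rms, nees
```

For a consistent filter the NEES averages to the dimensionality d of the quantity being assessed (2 here, since the statistics are computed on position); values below d indicate an overestimated covariance, values above d an underestimated one.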

[Figure 3: NEES metric against time: (a) low noise, (b) high noise; the line y = 2 marks the dimensionality of the measurement space]

In terms of RMS errors, only at the higher noise level is the EKF out-performed by both the UKF and the RB-UKF (as observed in [12, 14]). There is no discernible difference between the UKF and the RB-UKF in terms of RMS errors. The NEES statistic for the three filters is similar in the low noise case, except during the turns, when the EKF's performance is slightly inferior. In the high noise case, the NEES statistic indicates that the performance of the EKF deteriorates significantly; it drastically underestimates the covariance. The RB-UKF then generally offers an improvement in NEES over that of the UKF. Averaging the NEES value over time (as well as over the Monte Carlo runs) gives values of 2.1 for the UKF and 1.55 for the RB-UKF; the RB-UKF overestimates the covariance. In this example, the non-linearity is evaluated 21 times per iteration by the UKF, whereas the Rao-Blackwellised UKF only has to evaluate the same function 9 times per iteration. The improvement in computational efficiency is clear.

5 Conclusions

In this paper we have developed a technique whose advantages are twofold: namely, a reduction in quasi-Monte Carlo variance, and a decrease in computational expense for high-dimensional problems. A Rao-Blackwellised UKF (RB-UKF) was developed and the algorithmic detail was outlined. The algorithm was demonstrated on a common tracking problem: non-linear (polar) Gaussian measurements of a system with linear Gaussian dynamics. The RB-UKF takes advantage of the linear system model, reducing the sampling requirements of the UKF and offering the potential for an improvement in performance. We demonstrate via the NEES metric that a consistent covariance estimate is achieved; this offers a marked improvement over both the Extended Kalman filter and the Unscented Kalman filter.
5.1 Applications

It has been shown that improvements to the tracking performance of the Unscented Kalman filter can be achieved by using the sampling scheme only for the intractable integrals that are present. The authors believe that the impact of this approach will be most pronounced in scenarios where the reduction in the dimensionality of the space inhabited by the sigma points is significant.

Acknowledgments

This research was sponsored by the UK MOD Corporate Research Programme.

References

[1] Y. Bar-Shalom and X.-R. Li. Multitarget-Multisensor Tracking: Principles and Techniques. YBS, 1995.
[2] G. Casella and C. P. Robert. Rao-Blackwellisation of sampling schemes. Biometrika, 83(1):81–94, 1996.
[3] A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer, 2001.
[4] Y. C. Ho and R. C. K. Lee. A Bayesian approach to problems in stochastic estimation and control. IEEE Transactions on Automatic Control, AC-9:333–339, 1964.
[5] S. Julier. Reduced sigma point filters for the propagation of means and covariances through nonlinear transforms. Preprint submitted to Elsevier Preprint, 1998.
[6] S. Julier. The scaled unscented transformation. To appear in Automatica, 2000.

[7] S. Julier and J. K. Uhlmann. A new extension of the Kalman filter to nonlinear systems. SPIE Proceedings of AeroSense: Aerospace/Defence Sensing, Simulation and Controls, 1997.
[8] S. J. Julier and J. K. Uhlmann. A consistent, debiased method for converting between polar and Cartesian coordinate systems. SPIE Proceedings of AeroSense: Acquisition, Tracking and Pointing XI, vol. 3086, pages 110–121, 1997.
[9] C. V. Kumar, R. Rajagopal, and R. Kiran. Passive DOA tracking using unscented Kalman filter. CRL Technical Journal, Vol. 3, No. 3, December 2001.
[10] T. Lefebvre, H. Bruyninckx, and J. De Schutter. Comment on "A new method for the nonlinear transformation of means and covariances in filters and estimators". IEEE Transactions on Automatic Control, 2002.
[11] H. Niederreiter and P. J.-S. Shiue (eds). Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing. Springer-Verlag New York Inc., 1995.
[12] E. A. Wan and R. van der Merwe. The unscented Kalman filter for nonlinear estimation. Proceedings of the Symposium on Adaptive Systems for Signal Processing, Communication and Control (AS-SPCC), 2000.
[13] E. A. Wan and R. van der Merwe. The Unscented Kalman Filter. To appear in Kalman Filtering and Neural Networks (S. Haykin, ed.), Chapter 7. Wiley Publishing, 2001.
[14] E. A. Wan, R. van der Merwe, and A. T. Nelson. Dual estimation and the unscented transformation. Advances in Neural Information Processing Systems 12, pages 666–672, 2000.
[15] J. R. Van Zandt. A more robust unscented transform. SPIE Proceedings of AeroSense: Signal and Data Processing of Small Targets, 2001.