A Comparison of the Extended and Unscented Kalman Filters for Discrete-Time Systems with Nondifferentiable Dynamics

Proceedings of the 2007 American Control Conference, Marriott Marquis Hotel at Times Square, New York City, USA, July 11-13, 2007

J. Chandrasekar, A. J. Ridley, and D. S. Bernstein

This research was supported by the National Science Foundation through Grant ATM-0325332 to the University of Michigan, Ann Arbor, USA. The authors are with the University of Michigan, Ann Arbor, MI 48109, dsbaero@umich.edu.

Abstract: We compare the performance of the extended Kalman filter, the unscented Kalman filter, and two extensions of the H∞ filter when applied to discrete-time nonlinear state estimation problems with nondifferentiable dynamics. We compare the performance of all the estimation techniques on simple nonlinear examples and finally consider state estimation of one-dimensional hydrodynamic flow based on a finite volume model that contains nondifferentiable nonlinearities.

I. INTRODUCTION

Because of the widespread need for nonlinear observers and estimators, this area of research remains one of the most active [1, 2]. One of the main drivers of research in this area is applications to distributed, large-scale systems, the most visible of which is weather forecasting [3]. This area is often referred to as data assimilation.

The classical Kalman filter for linear systems is often applied to nonlinear systems in the form of the extended Kalman filter (EKF) [4, 5]. A variation of the EKF is the state-dependent Riccati equation (SDRE) approach, in which, in place of the Jacobians, the dynamics and output map are exactly factored, and the factors are used for the pseudo-covariance update [6, 7]. Another approach to state estimation of linear systems is the H∞ filter [8]. Unlike the classical Kalman filter, these filters do not require the stringent Gaussian distribution assumption on the process and sensor noise affecting the system, and they guarantee a performance bound. Estimation with uncertainty in the model has also been performed using the H∞ filter [9]. We apply the H∞ filter to nonlinear systems using the Jacobians of the dynamics and measurement maps and call the resulting filter the extended H∞ filter (EH∞F).

Yet another approach to nonlinear estimation involves particle filters. Among the various techniques that have been developed is the unscented Kalman filter (UKF) [10, 11], which deterministically constructs the collection of state estimates. Although particle filters do not require the propagation of a covariance (or pseudo-covariance) in the usual (Riccati) way, the size of the collection determines the computational requirements. Finally, we combine the H∞ filter gain expression with the particle filter framework to obtain the unscented H∞ filter (UH∞F).

The present paper focuses on discrete-time systems with dynamics that are not differentiable. The main motivation is state estimation based on computational fluid dynamics (CFD) models for space weather forecasting [12]. In particular, we focus on CFD models for hydrodynamics (HD) and magnetohydrodynamics (MHD), in which the equations of fluid motion are approximated by finite volume schemes. In [6] we have considered SDRE and EKF methods for state estimation of one-dimensional hydrodynamic flow. In HD and MHD, the CFD models involve nondifferentiable functions as part of the discretization of the underlying partial differential equations [13]. In the present paper, we consider an alternative approach in which we apply the EKF and UKF despite the lack of differentiability.
In particular, we compute the Jacobian at all points at which it exists, and we employ an averaged value at points at which the dynamics are not differentiable. To demonstrate the accuracy of the EKF, EH∞F, UKF, and UH∞F when the dynamics are not differentiable, we consider several examples. We are interested in both the accuracy and the computational requirements of each approach.

II. THE KALMAN FILTER

Consider the discrete-time linear system with dynamics and measurements

x_{k+1} = A_k x_k + B_k u_k + w_k,  (2.1)
y_k = C_k x_k + v_k,  (2.2)

where x_k ∈ R^n, u_k ∈ R^m, and y_k ∈ R^p. The input u_k and output y_k are assumed to be measured, and w_k ∈ R^n and v_k ∈ R^p are uncorrelated zero-mean white noise processes with covariances Q and R, respectively. We assume that R is positive definite. For the system (2.1) and (2.2), the Kalman filter provides optimal estimates of the state x_k using measurements y_k [4]. The Kalman filter equations can be expressed in two steps, namely, the data assimilation step

K_k = P^f_k C_k^T (R^f_k)^{-1},  (2.3)
P^da_k = P^f_k - P^f_k C_k^T (R^f_k)^{-1} C_k P^f_k,  (2.4)
x^da_k = x^f_k + K_k (y_k - y^f_k),  (data update)  (2.5)
y^f_k = C_k x^f_k,  (2.6)

where R^f_k ≜ C_k P^f_k C_k^T + R, and the forecast step

x^f_{k+1} = A_k x^da_k + B_k u_k,  (physics update)  (2.7)
P^f_{k+1} = A_k P^da_k A_k^T + Q,  (2.8)

where the data assimilation error covariance P^da_k ∈ R^{n×n} and the forecast error covariance P^f_k ∈ R^{n×n} are defined by P^f_k ≜ E[e^f_k (e^f_k)^T] and P^da_k ≜ E[e^da_k (e^da_k)^T], and the data assimilation error e^da_k and the forecast error e^f_k are defined by e^da_k ≜ x_k - x^da_k and e^f_k ≜ x_k - x^f_k. Note that the Kalman filter gain K_k in (2.3) minimizes the cost function J(K_k) = tr(P^f_{k+1}).

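As a minimal sketch (not the authors' code), the two-step filter (2.3)-(2.8) can be written in NumPy as follows; the array names mirror the equations above, and the function names are illustrative.

```python
import numpy as np

def kf_data_assimilation(xf, Pf, y, C, R):
    """Data update (2.3)-(2.6): map the forecast (xf, Pf) and measurement y
    to the data-assimilation estimate (xda, Pda)."""
    Rf = C @ Pf @ C.T + R                                 # innovation covariance R^f_k
    Rf_inv = np.linalg.inv(Rf)
    K = Pf @ C.T @ Rf_inv                                 # Kalman gain (2.3)
    Pda = Pf - Pf @ C.T @ Rf_inv @ C @ Pf                 # covariance update (2.4)
    xda = xf + K @ (y - C @ xf)                           # state update (2.5)-(2.6)
    return xda, Pda

def kf_forecast(xda, Pda, u, A, B, Q):
    """Physics update (2.7)-(2.8): propagate the estimate and covariance."""
    xf_next = A @ xda + B @ u                             # (2.7)
    Pf_next = A @ Pda @ A.T + Q                           # (2.8)
    return xf_next, Pf_next
```
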
III. THE H∞ FILTER

Consider the cost function

J(K_k) = ( Σ_{i=0}^{N} (e^f_i)^T M e^f_i ) / ( (e^f_0)^T (P^f_0)^{-1} e^f_0 + Σ_{i=0}^{N} w_i^T Q^{-1} w_i + Σ_{i=0}^{N} v_i^T R^{-1} v_i ).  (3.1)

The H∞ filter ensures that, in spite of the worst possible process and sensor noise, the cost J(K_k) satisfies

J(K_k) ≤ γ.  (3.2)

The data assimilation step of the robust H∞ filter is given by

x^da_k = x^f_k + K_k (y_k - y^f_k),  (3.3)
y^f_k = C x^f_k,  (3.4)
P^da_k = (I - K_k C) P̄^f_k (I - K_k C)^T + K_k R K_k^T,  (3.5)

where

K_k = P̄^f_k C^T (C P̄^f_k C^T + R)^{-1}  (3.6)

and

P̄^f_k ≜ P^f_k (I - γ M P^f_k)^{-1}.  (3.7)

The forecast step of the H∞ filter is given by

x^f_{k+1} = A x^da_k,  (3.8)
P^f_{k+1} = A P^da_k A^T + Q.  (3.9)

Note that, unlike the Kalman filter, w_k and v_k need not be white noise processes, and hence Q and R are not their covariances but rather weightings on the uncertainty associated with the process and sensor noise. Moreover, P^f_k and P^da_k in (3.3)-(3.9) are not the error covariances.

IV. THE EXTENDED KALMAN FILTER

Next, we consider the discrete-time nonlinear system with dynamics and measurements

x_{k+1} = f(x_k, u_k, k) + w_k,  (4.1)
y_k = h(x_k, k) + v_k.  (4.2)

The two-step EKF is given by

x^f_{k+1} = f(x^da_k, u_k, k),  (4.3)
x^da_k = x^f_k + K_k (y_k - y^f_k),  (4.4)
y^f_k = h(x^f_k, k),  (4.5)

where K_k, P^da_k, and P^f_{k+1} are given by (2.3), (2.4), and (2.8), respectively, with

A_k ≜ ∂f(x, u_k, k)/∂x |_{x = x^da_k},  C_k ≜ ∂h(x, k)/∂x |_{x = x^f_k}.  (4.6)

If f(x, u_k, k) and h(x, k) are not differentiable with respect to x, the EKF in (4.3)-(4.6) cannot be used, because A_k and C_k defined in (4.6) may not exist for all k. However, we assume that the first-order symmetric partial derivatives [16] of f(x, u_k, k) and h(x, k) exist everywhere, that is, for all x ∈ R^n,

s f(ξ, u_k, k)/s ξ_i |_{ξ=x} ≜ lim_{δ→0} [ f(x + δ e_i, u_k, k) - f(x - δ e_i, u_k, k) ] / (2δ),  (4.7)
s h(ξ, k)/s ξ_i |_{ξ=x} ≜ lim_{δ→0} [ h(x + δ e_i, k) - h(x - δ e_i, k) ] / (2δ)  (4.8)

exist, where ξ ∈ R^n has scalar entries ξ = [ξ_1 ⋯ ξ_n]^T and e_i ∈ R^n is the ith column of the n × n identity matrix. Hence, for example, although f(x) = |x| does not have a derivative at x = 0, it follows from (4.7) that (sf/sx)(0) = 0. Furthermore, if g : R^n → R is a differentiable function, then the symmetric partial derivative and the partial derivative are equal. Next, we define the (i, j) entries of the averaged Jacobians F_s(x, u_k, k) ∈ R^{n×n} and H_s(x, k) ∈ R^{p×n} of f(·) and h(·), respectively, by

F_{s,i,j}(x, u_k, k) ≜ s f_i(ξ, u_k, k)/s ξ_j |_{ξ=x},  H_{s,i,j}(x, k) ≜ s h_i(ξ, k)/s ξ_j |_{ξ=x},  (4.9)

where f_i(x, u_k, k) and h_i(x, k) are the scalar entries of f(x, u_k, k) ∈ R^n and h(x, k) ∈ R^p, respectively. Note that if f(·) and h(·) are differentiable, then, for all x ∈ R^n, the averaged Jacobians F_s and H_s are equal to the true Jacobians. Hence, the EKF for (4.1)-(4.2) when f(·) and h(·) satisfy (4.7) and (4.8) is given by (4.3)-(4.5), where K_k, P^da_k, and P^f_{k+1} are given by (2.3), (2.4), and (2.8), respectively, with

A_k = F_s(x^da_k, u_k, k),  C_k = H_s(x^f_k, k).  (4.10)

V. THE EXTENDED H∞ FILTER

An alternative approach to state estimation of (4.1)-(4.2) is based on the H∞ filter. Although the H∞ filter is derived for linear time-invariant systems, like the extended Kalman filter, the Jacobians of the dynamics and measurement maps can be used in the filter equations. However, the performance bounds guaranteed in the linear case are no longer valid. The extended H∞ filter (EH∞F) is given by (4.3)-(4.5), where K_k, P^da_k, and P^f_{k+1} are given by (3.5)-(3.7) and (3.9), with A_k and C_k defined by (4.10). Note that, since the Jacobians are based on the symmetric derivatives, the EH∞F that uses the averaged Jacobians can be applied to nonlinear systems with nondifferentiable dynamics. Finally, we use γ, Q, and R in the EH∞F as tuning parameters to improve the estimates. Note that the EH∞F may not be stable for all values of γ, and hence γ must be tuned carefully.
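
A minimal sketch of the robust data-assimilation step (3.3)-(3.7), which the EH∞F uses with A_k and C_k replaced by the (averaged) Jacobians, is given below; NumPy and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hinf_data_assimilation(xf, Pf, y, C, R, M, gamma):
    """Robust data update (3.3)-(3.7): returns (xda, Pda) from (xf, Pf, y).
    M and gamma are the weighting and attenuation parameters of (3.1)-(3.2);
    gamma is assumed small enough that I - gamma*M*Pf is invertible."""
    n = Pf.shape[0]
    Pbar = Pf @ np.linalg.inv(np.eye(n) - gamma * M @ Pf)     # eq. (3.7)
    K = Pbar @ C.T @ np.linalg.inv(C @ Pbar @ C.T + R)        # gain (3.6)
    IKC = np.eye(n) - K @ C
    Pda = IKC @ Pbar @ IKC.T + K @ R @ K.T                    # Joseph-like form (3.5)
    xda = xf + K @ (y - C @ xf)                               # (3.3)-(3.4)
    return xda, Pda
```
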
VI. THE UNSCENTED KALMAN FILTER

Another approach to state estimation of nonlinear systems is the unscented Kalman filter (UKF). The starting point for the UKF is a set of sample points, that is, a collection of state estimates that capture the initial probability distribution of the state [10].

Assume that x ∈ R^n, P ∈ R^{n×n} is positive semidefinite, and λ > 0. The unscented transformation is used to obtain 2n+1 sample points X_i ∈ R^n and corresponding weights γ_{x,i} and γ_{P,i}, for i = 0,...,2n, so that the weighted mean and the weighted variance of the sample points are x and P, respectively. The unscented transformation X = Ψ(x, P, λ) of x with covariance P is defined by

X_i = x, if i = 0,
X_i = x + P̄_i, if i = 1,...,n,  (6.1)
X_i = x - P̄_{i-n}, if i = n+1,...,2n,

where P̄ ≜ (λP)^{1/2}, for i = 1,...,n, P̄_i is the ith column of P̄, X ∈ R^{n×(2n+1)} has entries X = [X_0 ⋯ X_{2n}], and λ determines the spread of the sample points around x. Note that Σ_{i=0}^{2n} γ_{x,i} X_i = x and Σ_{i=0}^{2n} γ_{P,i} (X_i - x)(X_i - x)^T = P, where the weights are defined by

γ_{x,0} ≜ (λ - n)/λ,  γ_{P,0} ≜ (λ - n)/λ + (1 - λ/n + β),

where β > 0, and, for i = 1,...,2n, γ_{x,i} = γ_{P,i} ≜ 1/(2λ).

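Before continuing with the filter equations, a minimal sketch of the transformation Ψ and the weights defined above is given (an illustrative helper, not the authors' code; the matrix square root is taken via Cholesky under the assumption that P is positive definite, and β = 2 is a typical default).

```python
import numpy as np

def unscented_transformation(x, P, lam, beta=2.0):
    """Sigma points (6.1) and weights for the unscented transformation Psi(x, P, lam)."""
    n = x.size
    Pbar = np.linalg.cholesky(lam * P)                 # Pbar = (lam * P)^(1/2)
    X = np.tile(x.reshape(-1, 1), (1, 2 * n + 1))
    X[:, 1:n + 1] += Pbar                              # X_i = x + Pbar_i,      i = 1,...,n
    X[:, n + 1:] -= Pbar                               # X_i = x - Pbar_{i-n},  i = n+1,...,2n
    gx = np.full(2 * n + 1, 1.0 / (2.0 * lam))         # gamma_{x,i}, i >= 1
    gP = gx.copy()                                     # gamma_{P,i}, i >= 1
    gx[0] = (lam - n) / lam                            # gamma_{x,0}
    gP[0] = (lam - n) / lam + (1.0 - lam / n + beta)   # gamma_{P,0}
    return X, gx, gP
```

The choice of square root (Cholesky versus a symmetric square root) is an implementation detail that the development above does not fix.
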
The analysis step of the unscented Kalman filter is given by

x^da_k = x^f_k + K_k (y_k - y^f_k),  (6.2)
y^f_k = h(x^f_k, k),  (6.3)
X^da_k = Ψ(x^da_k, P^da_k, λ),  (6.4)

where

P^da_k = P^f_k - K_k P_{yy,k} K_k^T,  (6.5)
K_k = P_{xy,k} P_{yy,k}^{-1},  (6.6)
P_{xy,k} = Σ_{i=0}^{2n} γ_{P,i} (X^f_{i,k} - x^f_k)(Y^f_{i,k} - y^f_k)^T,  (6.7)
P_{yy,k} = Σ_{i=0}^{2n} γ_{P,i} (Y^f_{i,k} - y^f_k)(Y^f_{i,k} - y^f_k)^T + R,  (6.8)
Y^f_{i,k} = h(X^f_{i,k}, k),  (6.9)

and the forecast step of the unscented Kalman filter is given by

X̂^f_{i,k+1} = f(X^da_{i,k}, u_k, k),  (6.10)
x^f_{k+1} = Σ_{i=0}^{2n} γ_{x,i} X̂^f_{i,k+1},  (6.11)
P^f_{k+1} = Σ_{i=0}^{2n} γ_{P,i} (X̂^f_{i,k+1} - x^f_{k+1})(X̂^f_{i,k+1} - x^f_{k+1})^T + Q,  (6.12)
X^f_{k+1} = Ψ(x^f_{k+1}, P^f_{k+1}, λ).  (6.13)

Since the UKF involves 2n+1 model updates, the computational burden of the UKF is of the order (2n+1)n² = 2n³ + n². On the other hand, the EKF involves a single model update and covariance propagation using the Riccati equation, and hence the computational burden of the EKF is of the order n³ + n². Hence, when n is large, the computational burden of the UKF is approximately twice that of the EKF. The performance of the EKF and UKF is compared in [11].

VII. THE UNSCENTED H∞ FILTER

Finally, we consider an extension of the UKF that is based on the H∞ filter. The analysis step of the unscented H∞ filter (UH∞F) is given by (6.2)-(6.4) with

P^da_k = P^f_k - K_k P̃_{yy,k} K_k^T,  (7.1)
K_k = P̃_{xy,k} P̃_{yy,k}^{-1},  (7.2)

where

P̃_{xy,k} = Σ_{i=0}^{2n} γ_{P,i} (X̃^f_{i,k} - x^f_k)(Ỹ^f_{i,k} - y^f_k)^T,  (7.3)
P̃_{yy,k} = Σ_{i=0}^{2n} γ_{P,i} (Ỹ^f_{i,k} - y^f_k)(Ỹ^f_{i,k} - y^f_k)^T + R,  (7.4)
Ỹ^f_{i,k} = h(X̃^f_{i,k}, k),  (7.5)

and the forecast step of the UH∞F is given by (6.10)-(6.12), where X̃^f_k is obtained using

X̃^f_{k+1} = Ψ(x^f_{k+1}, P̄^f_{k+1}, λ),  (7.6)

and P̄^f_k is defined by (3.7). Note that, when the dynamics are linear, the UH∞F is equivalent to the H∞ filter presented in Section III.

VIII. EXAMPLES

Next, we use the EKF, EH∞F, UKF, and UH∞F for state estimation of low-dimensional discrete-time systems with nondifferentiable nonlinearities. Specifically, we consider nonlinearities that are not differentiable but have symmetric derivatives everywhere.

A. Absolute Value Function

First, we consider nonlinearities that commonly occur in finite volume discretizations of hyperbolic partial differential equations [13]. For example, the absolute value function appears in the first-order upwind discretization of an advection equation [13]. Let x_k ∈ R^4 and

x_{k+1} = abs(sin(M x_k)) + w_k,  y_k = C x_k + v_k,  (8.1)

where M ∈ R^{4×4}, C ∈ R^{2×4} is a constant output matrix (8.2), and w_k and v_k are zero-mean white noise processes with covariances Q = 0.01 I_4 and R = 0.01 I_2, respectively. Note that, for all x ∈ R,

s abs(ξ)/s ξ |_{ξ=x} = 1, if x > 0,
                      -1, if x < 0,  (8.3)
                       0, if x = 0.

Hence, it follows from (4.9), (8.1), and (8.3) that, for i, j = 1,...,n, the (i, j) entry of F_s(x) is given by

F_{s,i,j}(x) = cos(row_i(M) x) M_{i,j}, if sin(row_i(M) x) > 0,
              -cos(row_i(M) x) M_{i,j}, if sin(row_i(M) x) < 0,  (8.4)
               0, if sin(row_i(M) x) = 0,

where row_i(M) is the ith row of M, and H_s(x) = C.

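The symmetric derivatives in (4.7)-(4.9) can also be approximated numerically by a central difference at a small but finite δ; the sketch below (an illustrative helper, not the authors' code) builds the averaged Jacobian F_s column by column and can be checked against the closed-form expression (8.4) for the map abs(sin(Mx)). The matrix M and the step δ = 1e-6 are arbitrary choices for illustration.

```python
import numpy as np

def averaged_jacobian(f, x, delta=1e-6):
    """Approximate the averaged Jacobian F_s(x) of f: R^n -> R^n using the
    symmetric (central) difference in (4.7)-(4.9), one basis direction at a time."""
    n = x.size
    Fs = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = delta
        Fs[:, j] = (f(x + e) - f(x - e)) / (2.0 * delta)   # finite-delta stand-in for (4.7)
    return Fs

# Example: the nondifferentiable map in (8.1), with an illustrative M.
M = 0.5 * np.eye(4) + 0.1 * np.ones((4, 4))
f = lambda x: np.abs(np.sin(M @ x))
Fs = averaged_jacobian(f, np.ones(4))   # compare with the closed form (8.4)
```
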
Figure 1 shows a plot of abs(sin(mx)), and it can be seen that, as m increases, the nonlinearities become more prominent. The logarithm of the sum of the Euclidean norms of the errors in the state estimates for 50 different choices of M with sprad(M) = 0.5 is shown in Figure 2. Numerical simulations suggest that the performance of the EKF, EH∞F, UKF, and UH∞F is almost indistinguishable for all choices of M. The error in the state estimates when no data assimilation is performed, that is, K_k = 0 for all k in the EKF, is also plotted for comparison. Next, the performance of all the estimators for 50 different choices of M with sprad(M) = 1 is shown in Figure 3. It can be seen that, in the case of more severe nonlinearities, the performance of the UKF and UH∞F is better than the performance of the EKF and EH∞F. The values of γ in all the cases were chosen such that the EH∞F and UH∞F are both stable.

B. Minmod Function

Next, we consider discrete-time systems involving the minmod function, which is used in second-order upwind finite volume schemes as a slope limiter to reduce the diffusion effects [13]. For α, β ∈ R, define

minmod(α, β) ≜ (1/2)(sign(α) + sign(β)) min{|α|, |β|}  (8.5)

(see Figure 4). Let x_k ∈ R^{10} and

x_{k+1} = sin(M x_k) + minmod(M_L x_k, M_R x_k) + w_k,  y_k = C x_k + v_k.  (8.6)

We choose M ∈ R^{10×10} so that sprad(M) < 1, and, for i, j = 1,...,10, the entries of M_L ∈ R^{10×10} are given by

(M_L)_{i,i} = 1,  (M_L)_{i,i-1} = -1,  (M_L)_{i,j} = 0 if j ∉ {i, i-1},  (8.7)

M_R = M_L^T, and, for all k, C ∈ R^{2×10} is a constant output matrix (8.8). We assume that w_k and v_k are zero-mean white processes with covariances Q = 0.01 I_10 and R = 0.01 I_2, respectively. Note that, for all u, v ∈ R,

s minmod(α, β)/s α |_{(u,v)} = 0, if uv < 0 or u = v = 0,
                               0, if uv > 0 and |u| > |v|,
                               0, if u ≠ 0 and v = 0,
                               0.5, if uv > 0 and |u| = |v|,  (8.9)
                               0.5, if u = 0 and v ≠ 0,
                               1, if uv > 0 and |u| < |v|.

Furthermore, (8.6) implies that H_s(x) = C. The sum of the Euclidean norms of the errors in the state estimates obtained from the EKF, EH∞F, UKF, and UH∞F for 50 different choices of M with sprad(M) = 0.5 is shown in Figure 5. The performance of the four estimators for 50 different choices of M with sprad(M) = 0.1 is shown in Figure 6.
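
A minimal sketch of the minmod limiter (8.5) and of its symmetric partial derivative (8.9), written elementwise for vector arguments, is given below (an illustrative helper, not the authors' code).

```python
import numpy as np

def minmod(a, b):
    """Elementwise minmod limiter, eq. (8.5)."""
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def s_minmod_da(u, v):
    """Elementwise symmetric partial derivative of minmod with respect to its
    first argument, evaluated at (u, v); the cases follow eq. (8.9)."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    v = np.atleast_1d(np.asarray(v, dtype=float))
    d = np.zeros_like(u)
    same_sign = u * v > 0
    d[same_sign & (np.abs(u) < np.abs(v))] = 1.0
    d[same_sign & (np.abs(u) == np.abs(v))] = 0.5
    d[(u == 0) & (v != 0)] = 0.5
    # Remaining cases in (8.9) (uv < 0, uv > 0 with |u| > |v|, v = 0) give 0.
    return d
```

Since minmod is symmetric in its arguments, the corresponding derivative with respect to the second argument at (u, v) is s_minmod_da(v, u).
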
IX. SIMULATION EXAMPLE: ONE-DIMENSIONAL HYDRODYNAMICS

Finally, we consider state estimation of one-dimensional hydrodynamic flow based on a finite volume model. The flow of an inviscid, compressible fluid along a one-dimensional channel is governed by Euler's equations. A discrete-time model of hydrodynamic flow can be obtained using a finite-volume-based spatial and temporal discretization. Assume that the channel consists of n identical cells. For all i = 1,...,n, define U^[i]_k ∈ R^3 by

U^[i]_k = [ ρ^[i]_k  m^[i]_k  E^[i]_k ]^T,

where ρ^[i]_k is the density, m^[i]_k is the momentum, and E^[i]_k is the energy in the ith cell. We use a second-order Rusanov scheme [13] to discretize Euler's equations and obtain a discrete-time model that enables us to update the flow variables at the center of each cell. Define the state vector x_k ∈ R^{3(n-4)} by

x_k = [ (U^[3]_k)^T ⋯ (U^[n-2]_k)^T ]^T.  (9.1)

For all k, let u_k ∈ R^3 denote the boundary condition for the first two cells, so that u_k^T = (U^[1]_k)^T = (U^[2]_k)^T. Furthermore, we assume Neumann boundary conditions at the cells with indices n-1 and n. The second-order Rusanov scheme yields a nonlinear discrete-time update model of the form (4.1), where w_k ∈ R^{3(n-4)} represents unmodeled drivers and is assumed to be zero-mean white Gaussian process noise with covariance matrix Q ∈ R^{3(n-4)×3(n-4)} such that only the flow variables in the 10th, 25th, and 40th cells are directly affected by w_k.

We assume that measurements y_k ∈ R^{15} of the density, momentum, and energy at the cells with indices 6, 16, 26, 35, and 42 are available, and v_k is zero-mean white Gaussian noise with covariance matrix R = 0.01 I_{15×15}. Let n = 54, so that x_k ∈ R^{150}. For all k, let ρ^[1]_k = ρ^[2]_k = 1 g/m³, m^[1]_k = m^[2]_k = v_in + (v_in/4) sin(k) m/s, and E^[1]_k = E^[2]_k = (1/2)(m^[1]_k)²/ρ^[1]_k + 3/2 N/m², where v_in is the inlet velocity. We simulate the truth model from an arbitrary initial condition x_0 ∈ R^{3(n-4)} and obtain measurements y_k for various choices of v_in ∈ {0.1, 1.0, 2.0, ..., 10.0} m/s. Note that if v_in > 1.29 m/s, then the flow is supersonic. The objective is to estimate the density, momentum, and energy at the cells where measurements of the flow variables are unavailable using the EKF and UKF.

It follows from (3.7) that the EH∞F and UH∞F involve inverting an n × n matrix, which is computationally intensive when n is large, as is the case in finite volume discretizations of partial differential equations. Moreover, in the previous examples, no significant improvement in performance was noticed when the EH∞F and UH∞F were used instead of the EKF and UKF, respectively. Hence, we do not use the EH∞F or UH∞F for state estimation in the one-dimensional hydrodynamic flow example.

The error in the estimates of the energy E^[13] in cell 13, when measurements y_k are used in the EKF and UKF with v_in = 1 m/s, is shown in Figure 7. The error in the estimates of the energy E^[13] in cell 13 when v_in = 10 m/s is shown in Figure 8. The sum of the Euclidean norms of the errors in the state estimates for different values of v_in is shown in Figure 9. Note that, at low inlet velocities v_in, the performance of the EKF and UKF is very similar. However, at higher inlet velocities, the nonlinearities are more severe, and the performance of the UKF is better than that of the EKF.
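
To make the source of the nondifferentiability concrete, the sketch below shows a first-order Rusanov (local Lax-Friedrichs) update for the scalar Burgers equation; this is a simplified stand-in for the second-order Rusanov discretization of Euler's equations used above, and the periodic grid, time step, and flux choice are illustrative assumptions rather than the paper's model.

```python
import numpy as np

def rusanov_step_burgers(u, dt, dx):
    """One first-order Rusanov step for u_t + (u^2/2)_x = 0 on a periodic grid.
    The local wave speed max(|u_i|, |u_{i+1}|) makes the update nondifferentiable
    in the state, which is the situation the averaged Jacobian addresses."""
    f = 0.5 * u**2                                        # physical flux
    up = np.roll(u, -1)                                   # right neighbor u_{i+1}
    s = np.maximum(np.abs(u), np.abs(up))                 # local wave speed estimate
    F = 0.5 * (f + np.roll(f, -1)) - 0.5 * s * (up - u)   # interface flux F_{i+1/2}
    return u - (dt / dx) * (F - np.roll(F, 1))            # conservative update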

X. CONCLUSION

In this paper we compared the performance of the extended Kalman filter, the extended H∞ filter, the unscented Kalman filter, and the unscented H∞ filter for nonlinear systems with nondifferentiable nonlinearities. Whenever the Jacobian fails to exist, we use an averaged Jacobian based on the symmetric derivatives in the extended Kalman filter. For all the examples that we considered, whenever the nonlinearities are not severe, the performance of the EKF with the averaged Jacobian and the UKF is similar. However, when the nonlinearities become severe, the UKF performs better than the EKF. No significant improvement in the performance was noticed when either the extended H∞ filter or the unscented H∞ filter was used over the extended Kalman filter and the unscented Kalman filter, respectively.

REFERENCES

[1] M. Athans, R. P. Wishner, and A. Bertolini, "Suboptimal State Estimation for Continuous-Time Nonlinear Systems from Discrete Noisy Measurements," IEEE Trans. Auto. Ctrl., vol. 13, pp. 504-514, 1968.
[2] K. Ito and K. Xiong, "Gaussian Filters for Nonlinear Filtering Problems," IEEE Trans. Auto. Ctrl., vol. 45, pp. 910-927, 2000.
[3] J. M. Lewis, S. Lakshmivarahan, and S. Dhall, Dynamic Data Assimilation: A Least Squares Approach. Cambridge University Press, 2006.
[4] A. Jazwinski, Stochastic Processes and Filtering Theory. Academic Press, 1970.
[5] A. Gelb, Applied Optimal Estimation. Cambridge: MIT Press, 1974.
[6] J. Chandrasekar, A. J. Ridley, and D. S. Bernstein, "A SDRE-Based Asymptotic Observer for Nonlinear Discrete-Time Systems," in Proc. Amer. Contr. Conf., Portland, OR, June 2005, pp. 3630-3635.
[7] C. P. Mracek, J. R. Cloutier, and C. A. D'Souza, "A New Technique for Nonlinear Estimation," in Proc. Int. Conf. Contr. App., Dearborn, MI, June 1996, pp. 338-343.
[8] W. Sun, K. M. Nagpal, and P. P. Khargonekar, "H∞ Control and Filtering for Sampled-Data Systems," IEEE Trans. Auto. Ctrl., vol. 38, pp. 1162-1175, 1993.
[9] E. G. Collins Jr. and T. Song, "Robust H∞ Estimation and Fault Detection of Uncertain Dynamic Systems," AIAA J. Guid. Contr. Dyn., vol. 23, no. 5, pp. 857-864, 2000.
[10] S. Julier, J. Uhlmann, and H. F. Durrant-Whyte, "A New Method for the Nonlinear Transformation of Means and Covariances in Filters and Estimators," IEEE Trans. Auto. Ctrl., vol. 45, pp. 477-482, 2000.
[11] R. van der Merwe and E. A. Wan, "The Square-Root Unscented Kalman Filter for State and Parameter Estimation," in Proc. Int. Conf. Acoust. Speech Sig. Process., May 2001, pp. 3461-3464.
[12] C. Groth, D. De Zeeuw, T. Gombosi, and K. Powell, "Global 3D MHD Simulation of a Space Weather Event: CME Formation, Interplanetary Propagation, and Interaction with the Magnetosphere," J. Geophys. Res., vol. 105, pp. 25053-25078, 2000.
[13] C. Hirsch, Numerical Computation of Internal and External Flows. John Wiley and Sons, 1990.
[14] B. D. O. Anderson and J. B. Moore, Optimal Filtering. Dover Publications Inc., Mineola, NY, 1979.
[15] L. Scherliess, R. W. Schunk, J. J. Sojka, and D. C. Thompson, "Development of a Physics-Based Reduced State Kalman Filter for the Ionosphere," Radio Science, vol. 39, 2004.
[16] L. Larson, "The Symmetric Derivative," Trans. Amer. Math. Soc., vol. 277, pp. 589-599, 1983.

Fig. 1. Plot of abs(sin(mx)) for m = 0.5 and m = 2.
Fig. 2. Logarithm of the sum of the Euclidean norms of the errors in the state estimates obtained using the EKF, EH∞F, UKF, and UH∞F for the system (8.1). The performance is compared for 50 different choices of M with sprad(M) = 0.5. The error in the estimates when no data assimilation is performed, that is, K_k = 0 for all k, is also shown for comparison.

Fig. 3. Logarithm of the sum of the Euclidean norms of the errors in the state estimates obtained using the EKF, EH∞F, UKF, and UH∞F for the system (8.1). The performance is compared for 50 different choices of M with sprad(M) = 1. The performance of the UKF and UH∞F is much better than the performance of the EKF or EH∞F. However, the performances of the EH∞F and UH∞F are very similar to those of the EKF and UKF, respectively.

Fig. 4. Plot of minmod(α, β) for -5 ≤ α, β < 5.

Fig. 5. Logarithm of the sum of the Euclidean norms of the errors in the state estimates obtained using the EKF, EH∞F, UKF, and UH∞F for the system (8.6). The performance of the four estimators is compared for different choices of M with sprad(M) = 0.5. We choose the largest possible γ (= 0.5) such that both the EH∞F and UH∞F are stable for all choices of M.

Fig. 6. Logarithm of the sum of the Euclidean norms of the errors in the state estimates obtained using the EKF, EH∞F, UKF, and UH∞F for the system (8.6). The performance is compared for 50 different choices of M with sprad(M) = 0.1. There seems to be no significant improvement in the performance when the H∞ filters (EH∞F and UH∞F) are used over the EKF and UKF, respectively.

Fig. 7. The error in the estimates of energy at cell 13 obtained using the EKF and UKF when v_in = 1 m/s and the flow is subsonic.

Fig. 8. The error in the estimates of energy at cell 13 obtained using the EKF and UKF when v_in = 10 m/s and the flow is supersonic with Mach number 7.75.

Fig. 9. The square root of the sum of the Euclidean norms of the errors in the state estimates, obtained using the EKF and UKF for different choices of the inlet velocity v_in. The performance of the UKF is better than the performance of the EKF for high inlet velocities, with a computational burden that is twice that of the EKF.