Kalman Filters with Uncompensated Biases


Renato Zanetti
The Charles Stark Draper Laboratory, Houston, Texas 77058

Robert H. Bishop
Marquette University, Milwaukee, WI 53201

I. INTRODUCTION

An underlying assumption of the Kalman filter is that the measurement and process disturbances can be accurately modeled as random white noise. Various mitigation strategies are available when this assumption is invalid. In practice, sensor errors are often modeled more accurately as the sum of a white noise component and a strongly correlated component. The correlated components can, for example, be random constant biases. In this case, a standard technique is to augment the Kalman filter state vector and estimate the random biases. In an attempt to decouple the bias estimation from the state estimation, Friedland [1] estimated the state as though the bias were not present and then added the contribution of the bias. Friedland showed that this approach is equivalent to augmenting the state vector. This technique, known as two-stage Kalman filtering or separate-bias Kalman estimation, was later extended to incorporate a random walk in the bias driven by white noise [2]. To account for the bias walk, the process noise covariance was increased heuristically, and optimality conditions were derived [3, 4].

In this work, the effects of the noises and biases are treated as sources of uncertainty and not as elements of the state vector. This approach is applicable in situations where the biases are not observable or where there is not enough information to discern the biases from the measurements. A common approach would be to tune the filter's process and measurement noise such that the sample covariance obtained through Monte Carlo analysis matches the filter state estimation error covariance. The technique presented here takes

* Senior Member of the Technical Staff, Vehicle Dynamics and Control, 17629 El Camino Real, Suite 470, rzanetti@draper.com, Member AIAA.
† Dean of Engineering, Marquette University, P.O. Box 1881, Milwaukee, WI 53201-1881, robert.bishop@marquette.edu, Fellow AIAA.

advantage of the structure of the biases to obtain a more precise representation of their contributions to the state estimation uncertainty. The resulting algorithms are useful for quantifying the uncertainty of a single simulation along the nominal state trajectory. This process can aid in tuning the filter, and it can also be employed onboard to obtain an accurate measure of the uncertainty of the state estimates.

The approach taken is similar to that of the consider filter proposed by Schmidt [5]. The consider filter can be applied to our problem, and the two solution approaches, although different in form, are functionally the same. The goal of this paper is to introduce the uncompensated bias Kalman filter as presented by the authors in [6] and [7]. More recently, Hough [8] independently derived a similar algorithm and applied it to orbit determination. This technical note shows the relation between these two recent techniques as well as their equivalence to the Schmidt consider filter [5].

II. DISCRETE KALMAN FILTER WITH UNCOMPENSATED BIAS

Consider the stochastic dynamical system model

x_{k+1} = \Phi_k x_k + \Upsilon_k b_\nu + J_k \nu_k,    (1)

where x_k \in \mathbb{R}^n is the state vector at time t_k, \Phi_k = \Phi(t_{k+1}, t_k) \in \mathbb{R}^{n \times n} is the deterministic state transition matrix, and \nu_k \in \mathbb{R}^r is the process noise, assumed to be a zero-mean white noise vector sequence with

E[\nu_k] = 0 \;\forall k,  \qquad  E[\nu_k \nu_j^T] = V_k \delta_{kj},

where V_k \in \mathbb{R}^{r \times r}, V_k \geq 0 for all k, and \delta_{kj} = 1 if k = j, \delta_{kj} = 0 if k \neq j. A random constant vector bias b_\nu \in \mathbb{R}^m is also considered, with the assumed properties

E[b_\nu] = 0,  \qquad  E[b_\nu b_\nu^T] = B_\nu,  \qquad  E[\nu_k b_\nu^T] = 0 \;\forall k,

where B_\nu \in \mathbb{R}^{m \times m} and B_\nu > 0. The shape matrices \Upsilon_k \in \mathbb{R}^{n \times m} and J_k \in \mathbb{R}^{n \times r} are deterministic.

From Eq. (1) and the fact that \nu_k is modeled as a zero-mean random vector sequence and b_\nu is modeled as a zero-mean random constant vector, an unbiased estimate of the state \hat{x}_{k-1} can be propagated forward in time to obtain an unbiased estimate at time t_k via

\hat{x}_k^- = \Phi_{k-1} \hat{x}_{k-1}^+,

where \hat{x}_k^- is the state estimate at t_k before a measurement update and \hat{x}_{k-1}^+ is the state estimate after the measurement update at t_{k-1}. The estimation error at t_k before the measurement update is defined as

e_k^- = x_k - \hat{x}_k^-,    (2)

and the estimation error at t_k after the measurement update is defined as

e_k^+ = x_k - \hat{x}_k^+.    (3)

At t_k, it is assumed that measurements are available in the form

y_k = H_k x_k + \Lambda_k b_\eta + \eta_k,    (4)

where y_k \in \mathbb{R}^p, H_k is the measurement mapping matrix, and \eta_k \in \mathbb{R}^p is the measurement noise, assumed to be a zero-mean white noise vector sequence with

E[\eta_k] = 0 \;\forall k,  \qquad  E[\eta_k \eta_j^T] = R_k \delta_{kj},

where R_k \in \mathbb{R}^{p \times p} and R_k > 0 for all k. A random constant vector bias b_\eta \in \mathbb{R}^q is also considered, with the assumed properties

E[b_\eta] = 0,  \qquad  E[b_\eta b_\eta^T] = B_\eta,  \qquad  E[\eta_k b_\eta^T] = 0 \;\forall k,

where B_\eta \in \mathbb{R}^{q \times q} and B_\eta > 0. The shape matrix \Lambda_k \in \mathbb{R}^{p \times q} is deterministic. We also assume that E[b_\nu b_\eta^T] = 0, E[\eta_j \nu_k^T] = 0, E[\nu_k b_\eta^T] = 0, and E[\eta_k b_\nu^T] = 0 for all k, j.

The estimation error propagated from after the measurement update at t_{k-1} to before the next measurement update at t_k is

e_k^- = \Phi_{k-1} e_{k-1}^+ + \Upsilon_{k-1} b_\nu + J_{k-1} \nu_{k-1},    (5)

and the associated covariance propagation, P_k^- = E[e_k^- (e_k^-)^T], is given by

P_k^- = \Phi_{k-1} P_{k-1}^+ \Phi_{k-1}^T + \Upsilon_{k-1} B_\nu \Upsilon_{k-1}^T + J_{k-1} V_{k-1} J_{k-1}^T + \Phi_{k-1} E[e_{k-1}^+ b_\nu^T] \Upsilon_{k-1}^T + \Upsilon_{k-1} E[b_\nu (e_{k-1}^+)^T] \Phi_{k-1}^T,    (6)

where we assume that \nu_{k-1} is uncorrelated with b_\nu and e_{k-1}^+. The state update at t_k is assumed to be the linear update

\hat{x}_k^+ = \hat{x}_k^- + K_k (y_k - \hat{y}_k),    (7)

where \hat{y}_k = H_k \hat{x}_k^-, which follows from the measurement model in Eq. (4) and the fact that \eta_k is modeled as a zero-mean random vector sequence and b_\eta is modeled as a zero-mean random constant vector. The update in Eq. (7) provides an unbiased a posteriori estimate when the a priori estimate is unbiased. With the estimation error at t_k after the measurement update defined as in Eq. (3), we obtain the estimation error after the update as

e_k^+ = (I - K_k H_k) e_k^- - K_k \Lambda_k b_\eta - K_k \eta_k.    (8)

The update of the state estimation error covariance, P_k^+ = E[e_k^+ (e_k^+)^T], is given by

P_k^+ = (I - K_k H_k) P_k^- (I - K_k H_k)^T + K_k \Lambda_k B_\eta \Lambda_k^T K_k^T + K_k R_k K_k^T - (I - K_k H_k) E[e_k^- b_\eta^T] \Lambda_k^T K_k^T - K_k \Lambda_k E[b_\eta (e_k^-)^T] (I - K_k H_k)^T,    (9)

where we assume that \eta_k is uncorrelated with b_\eta and e_k^-. Substituting Eq. (5) into Eq. (8) yields

e_k^+ = (I - K_k H_k)(\Phi_{k-1} e_{k-1}^+ + \Upsilon_{k-1} b_\nu + J_{k-1} \nu_{k-1}) - K_k \Lambda_k b_\eta - K_k \eta_k.

Forming e_k^+ b_\nu^T and taking the expectation yields

E[e_k^+ b_\nu^T] = (I - K_k H_k)[\Phi_{k-1} E[e_{k-1}^+ b_\nu^T] + \Upsilon_{k-1} B_\nu],    (10)

where we assume that b_\nu is uncorrelated with b_\eta, \nu_{k-1}, and \eta_k for all k. Factor E[e_{k-1}^+ b_\nu^T] as

E[e_{k-1}^+ b_\nu^T] = L_{k-1} B_\nu,    (11)

where we note that B_\nu > 0. Substituting Eq. (11) into Eq. (6) yields

P_k^- = \Phi_{k-1} P_{k-1}^+ \Phi_{k-1}^T + \Upsilon_{k-1} B_\nu \Upsilon_{k-1}^T + J_{k-1} V_{k-1} J_{k-1}^T + \Phi_{k-1} L_{k-1} B_\nu \Upsilon_{k-1}^T + \Upsilon_{k-1} B_\nu L_{k-1}^T \Phi_{k-1}^T,    (12)

where L_k is found via a recursion. Substituting Eq. (11) into Eq. (10) yields

E[e_k^+ b_\nu^T] = (I - K_k H_k)[\Phi_{k-1} L_{k-1} + \Upsilon_{k-1}] B_\nu = L_k B_\nu.

Recalling that B_\nu > 0, we obtain the recursion for L_k as

L_k = (I - K_k H_k)(\Phi_{k-1} L_{k-1} + \Upsilon_{k-1}).    (13)

The initial condition, L_0, for the L_k recursion is found by considering the state estimation error after the first update,

e_1^+ = (I - K_1 H_1)(\Phi_0 e_0 + \Upsilon_0 b_\nu + J_0 \nu_0) - K_1 \Lambda_1 b_\eta - K_1 \eta_1.

Computing e_1^+ b_\nu^T and taking the expectation yields

E[e_1^+ b_\nu^T] = (I - K_1 H_1) \Upsilon_0 B_\nu,

with the assumption that b_\nu is not correlated with the initial state estimation error, e_0. We find that L_1 = (I - K_1 H_1) \Upsilon_0, which is obtained from the recursion of Eq. (13) for k = 1 if we set L_0 = 0. Therefore, we start the recursion for L_k with L_0 = 0.

The estimation error at t_{k+1} before the measurement update is found from Eq. (5) to be

e_{k+1}^- = \Phi_k e_k^+ + \Upsilon_k b_\nu + J_k \nu_k.    (14)

Substituting Eq. (8) into Eq. (14) yields the recurrence relation

e_{k+1}^- = \Phi_k[(I - K_k H_k) e_k^- - K_k \eta_k - K_k \Lambda_k b_\eta] + \Upsilon_k b_\nu + J_k \nu_k.

Forming e_{k+1}^- b_\eta^T and taking the expectation, it follows that

E[e_{k+1}^- b_\eta^T] = \Phi_k (I - K_k H_k) E[e_k^- b_\eta^T] - \Phi_k K_k \Lambda_k B_\eta,    (15)

where we assume that b_\eta is uncorrelated with b_\nu, \nu_k, and \eta_k for all k. Factoring E[e_k^- b_\eta^T] as

E[e_k^- b_\eta^T] = M_k B_\eta,    (16)

it follows that

E[e_{k+1}^- b_\eta^T] = M_{k+1} B_\eta,    (17)

and using Eqs. (15)-(17), the matrix M_k can be found recursively as

M_{k+1} = \Phi_k[(I - K_k H_k) M_k - K_k \Lambda_k].    (18)

If, at the initial time, a propagation occurs such that e_1^- = \Phi_0 e_0 + \Upsilon_0 b_\nu + J_0 \nu_0, then with the assumption that b_\eta is not correlated with the initial state estimation error, e_0, it follows from Eq. (16) that E[e_1^- b_\eta^T] = 0, which in turn implies that M_1 = 0 since B_\eta > 0. We assume here that a propagation occurs at t_0 before the first measurement update at t_1; hence, we require the starting values L_0 = 0 and M_1 = 0. If an update occurs at time t_0 before the first propagation, the same algorithm can be used by setting M_0 = 0 and L_0 = 0.

Substituting Eq. (16) into Eq. (9), after some rearrangement, we obtain

P_k^+ = P_k^- - K_k(H_k P_k^- + \Lambda_k B_\eta M_k^T) - (P_k^- H_k^T + M_k B_\eta \Lambda_k^T) K_k^T + K_k(H_k P_k^- H_k^T + R_k + \Lambda_k B_\eta \Lambda_k^T + H_k M_k B_\eta \Lambda_k^T + \Lambda_k B_\eta M_k^T H_k^T) K_k^T.    (19)

Taking the derivative of the trace of P_k^+ with respect to K_k, setting the result to zero, and solving for K_k yields the optimal gain,

K_k = (P_k^- H_k^T + M_k B_\eta \Lambda_k^T) W_k^{-1},    (20)

where the matrix W_k is the covariance of the residuals, given by

W_k = E[(y_k - \hat{y}_k)(y_k - \hat{y}_k)^T] = H_k P_k^- H_k^T + R_k + \Lambda_k B_\eta \Lambda_k^T + H_k M_k B_\eta \Lambda_k^T + \Lambda_k B_\eta M_k^T H_k^T.

Substituting Eq. (20) into Eq. (19) yields

P_k^+ = P_k^- - K_k W_k K_k^T.

The discrete uncompensated bias algorithm is summarized in Table 1. The uncompensated bias algorithm for continuous-time models of the measurements and dynamics is presented in [7].
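The optimal gain of Eq. (20) and the equivalence between the long form of Eq. (19) and the short form P_k^+ = P_k^- - K_k W_k K_k^T can be checked numerically. The following is a minimal NumPy sketch, not code from the paper; the dimensions and the randomly generated matrices are illustrative choices, with the cross-correlation factor M supplied directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 3, 2, 1

A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)              # a priori covariance P_k^-
H = rng.standard_normal((p, n))          # measurement mapping matrix
Lam = rng.standard_normal((p, q))        # measurement-bias shape matrix
B_eta = np.array([[0.5]])                # measurement-bias covariance
R = np.eye(p)                            # white measurement-noise covariance
M = 0.3 * rng.standard_normal((n, q))    # factor of E[e- b_eta^T] = M B_eta

# Residual covariance W_k (symmetric by construction)
W = (H @ P @ H.T + R + Lam @ B_eta @ Lam.T
     + H @ M @ B_eta @ Lam.T + Lam @ B_eta @ M.T @ H.T)

# Optimal gain, Eq. (20)
K = (P @ H.T + M @ B_eta @ Lam.T) @ np.linalg.inv(W)

# Long form of the covariance update, Eq. (19)
P_long = (P - K @ (H @ P + Lam @ B_eta @ M.T)
            - (P @ H.T + M @ B_eta @ Lam.T) @ K.T
            + K @ W @ K.T)

# Short form, valid only for the optimal gain
P_short = P - K @ W @ K.T
```

For the optimal gain, H_k P_k^- + \Lambda_k B_\eta M_k^T equals W_k K_k^T, so the first two correction terms of Eq. (19) each collapse to K_k W_k K_k^T and the two forms agree.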

System model:
  x_{k+1} = \Phi_k x_k + \Upsilon_k b_\nu + J_k \nu_k
  E[\nu_k] = 0, E[\nu_k \nu_j^T] = V_k \delta_{kj} \;\forall k, j
  E[b_\nu] = 0, E[b_\nu b_\nu^T] = B_\nu > 0
Measurement model:
  y_k = H_k x_k + \Lambda_k b_\eta + \eta_k
  E[\eta_k] = 0, E[\eta_k \eta_j^T] = R_k \delta_{kj} \;\forall k, j
  E[b_\eta] = 0, E[b_\eta b_\eta^T] = B_\eta > 0
Assumptions:
  E[\nu_k b_\nu^T] = 0, E[\eta_k b_\eta^T] = 0, E[\eta_k b_\nu^T] = 0 \;\forall k
  E[b_\nu b_\eta^T] = 0, E[\nu_k b_\eta^T] = 0, E[\eta_k \nu_j^T] = 0 \;\forall k, j
Initial conditions:
  \hat{x}_0^+ = E[x(t_0)], P_0^+ = E[e_0 e_0^T], e_0 = x(t_0) - \hat{x}_0^+
Recursion initialization:
  M_1 = 0, L_0 = 0
State propagation:
  \hat{x}_k^- = \Phi_{k-1} \hat{x}_{k-1}^+, k = 1, 2, \ldots
Covariance propagation:
  P_k^- = \Phi_{k-1} P_{k-1}^+ \Phi_{k-1}^T + \Upsilon_{k-1} B_\nu \Upsilon_{k-1}^T + J_{k-1} V_{k-1} J_{k-1}^T + \Phi_{k-1} L_{k-1} B_\nu \Upsilon_{k-1}^T + \Upsilon_{k-1} B_\nu L_{k-1}^T \Phi_{k-1}^T
Gain calculation:
  K_k = (P_k^- H_k^T + M_k B_\eta \Lambda_k^T) W_k^{-1}
  W_k = H_k P_k^- H_k^T + R_k + \Lambda_k B_\eta \Lambda_k^T + H_k M_k B_\eta \Lambda_k^T + \Lambda_k B_\eta M_k^T H_k^T
State update:
  \hat{x}_k^+ = \hat{x}_k^- + K_k (y_k - H_k \hat{x}_k^-)
Covariance update:
  P_k^+ = P_k^- - K_k W_k K_k^T
L calculation:
  L_k = (I - K_k H_k)(\Phi_{k-1} L_{k-1} + \Upsilon_{k-1})
M calculation:
  M_{k+1} = \Phi_k[(I - K_k H_k) M_k - K_k \Lambda_k]

Table 1. Discrete-time Kalman filter with uncompensated bias.

III. Equivalency with Bias Characterization Filter

In this section we show that the bias characterization filter algorithm presented by Hough [8] is a subset of the discrete uncompensated bias algorithm previously presented by the authors [6, 7]. The bias characterization filter incorporates only biases in the discrete measurements; in our notation, this is equivalent to stating that B_\nu = 0. In the derivation by Hough, the estimation error \delta\hat{x} is defined with the opposite sign of the estimation errors in Eqs. (2) and (3), or

\delta\hat{x}_k = \hat{x}_k - x_k = -e_k.    (21)

Other notation equivalencies include the correlation matrix S_k \equiv -M_k B_\eta, the measurement mapping matrix C_k \equiv H_k, the bias shaping matrix M_k \equiv \Lambda_k, and the prior correlation

matrix is already propagated to the current measurement time, so that the state transition matrix in Eq. (18) reduces to the identity matrix, i.e., \Phi_k \equiv I. With the above changes, our discrete uncompensated bias filter can be rewritten as

K_k = L_k^T N_k^{-1},    (22)
L_k = C_k P_k^- - M_k (S_k^-)^T,
N_k = C_k P_k^- C_k^T + M_k B_\eta M_k^T + R_k - C_k S_k^- M_k^T - M_k (S_k^-)^T C_k^T,
P_k^+ = P_k^- - K_k L_k - L_k^T K_k^T + K_k N_k K_k^T,
S_k^+ = (I - K_k C_k) S_k^- + K_k M_k B_\eta,

which are the equations presented by Hough [8]. The next section shows the mathematical equivalence between our uncompensated bias filter and the consider filter. Since the bias characterization filter [8] is a subset of the uncompensated bias filter, this implies that the bias characterization filter is also equivalent to the consider filter.

IV. Equivalency with Consider Filter

Slightly different versions of the consider filter exist. In this work we refer to the consider filter developed by Schmidt and presented by Jazwinski [5]. Under the same modeling assumptions of Section II, assume that an augmented state z_k is created at t_k by

z_k = \begin{bmatrix} x_k \\ b_\nu \\ b_\eta \end{bmatrix}.    (23)

Denoting the augmented state estimation error at t_k after the measurement update as e_{z,k}^+ = z_k - \hat{z}_k^+, we have the augmented state error covariance matrix at t_k after the measurement update given by

Z_k^+ = E[e_{z,k}^+ (e_{z,k}^+)^T],

where the nonzero submatrices are

E[e_k^+ (e_k^+)^T] = P_k^+,  \qquad  E[e_k^+ b_\nu^T] = L_k B_\nu,  \qquad  E[e_k^+ b_\eta^T] = [(I - K_k H_k) M_k - K_k \Lambda_k] B_\eta,    (24)
E[b_\nu b_\nu^T] = B_\nu,  \qquad  E[b_\eta b_\eta^T] = B_\eta.

Denote the augmented state estimation error at t_{k+1} before the measurement update as

e_{z,k+1}^- = z_{k+1} - \hat{z}_{k+1}^- and the associated augmented state error covariance matrix as Z_{k+1}^- = E[e_{z,k+1}^- (e_{z,k+1}^-)^T]. The augmented state estimation error covariance is propagated from t_k to t_{k+1} via

Z_{k+1}^- = \Psi_k Z_k^+ \Psi_k^T + \mathcal{W}_k,    (25)

where the state transition matrix and process noise covariance are given by

\Psi_k = \begin{bmatrix} \Phi_k & \Upsilon_k & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix},  \qquad  \mathcal{W}_k = \begin{bmatrix} J_k V_k J_k^T & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.

Computing Z_{k+1}^- in Eq. (25) yields the following relationships:

E[e_{k+1}^- (e_{k+1}^-)^T] = \Phi_k P_k^+ \Phi_k^T + \Phi_k L_k B_\nu \Upsilon_k^T + \Upsilon_k B_\nu L_k^T \Phi_k^T + \Upsilon_k B_\nu \Upsilon_k^T + J_k V_k J_k^T,    (26)
E[e_{k+1}^- b_\nu^T] = [\Phi_k L_k + \Upsilon_k] B_\nu,  \qquad  E[e_{k+1}^- b_\eta^T] = \Phi_k[(I - K_k H_k) M_k - K_k \Lambda_k] B_\eta.

We know that P_{k+1}^- = E[e_{k+1}^- (e_{k+1}^-)^T] and, from Eq. (17), that E[e_{k+1}^- b_\eta^T] = M_{k+1} B_\eta; hence from Eq. (26) it follows that

P_{k+1}^- = \Phi_k P_k^+ \Phi_k^T + \Phi_k L_k B_\nu \Upsilon_k^T + \Upsilon_k B_\nu L_k^T \Phi_k^T + \Upsilon_k B_\nu \Upsilon_k^T + J_k V_k J_k^T,
M_{k+1} = \Phi_k[(I - K_k H_k) M_k - K_k \Lambda_k],

which are the same as Eq. (12) and Eq. (18). When a measurement becomes available at t_{k+1}, the augmented measurement mapping matrix is

Y_{k+1} = [H_{k+1} \;\; 0 \;\; \Lambda_{k+1}],

and the residual covariance is given by

W_{k+1} = Y_{k+1} Z_{k+1}^- Y_{k+1}^T + R_{k+1} = H_{k+1} P_{k+1}^- H_{k+1}^T + \Lambda_{k+1} B_\eta M_{k+1}^T H_{k+1}^T + H_{k+1} M_{k+1} B_\eta \Lambda_{k+1}^T + \Lambda_{k+1} B_\eta \Lambda_{k+1}^T + R_{k+1}.

The consider gain \mathcal{K}_{k+1} is one in which the rows corresponding to the consider states are

zero [9]:

\mathcal{K}_{k+1} = \begin{bmatrix} K_{k+1} \\ 0 \\ 0 \end{bmatrix}.    (27)

The Schmidt-Kalman gain is obtained by minimizing the trace of P_{k+1}^+ (and equivalently of Z_{k+1}^+) over all possible choices of consider gains of the form of Eq. (27). The Schmidt-Kalman gain is given by

\mathcal{K}_{k+1} = \begin{bmatrix} P_{k+1}^- H_{k+1}^T + M_{k+1} B_\eta \Lambda_{k+1}^T \\ 0 \\ 0 \end{bmatrix} W_{k+1}^{-1},    (28)

where the nonzero rows end up being equal to the corresponding portion of the globally optimal Kalman gain

\mathcal{K}_{k+1}^{opt} = \begin{bmatrix} P_{k+1}^- H_{k+1}^T + M_{k+1} B_\eta \Lambda_{k+1}^T \\ B_\nu (L_k^T \Phi_k^T + \Upsilon_k^T) H_{k+1}^T \\ B_\eta M_{k+1}^T H_{k+1}^T + B_\eta \Lambda_{k+1}^T \end{bmatrix} W_{k+1}^{-1}.    (29)

Then, computing the a posteriori augmented state error covariance matrix Z_{k+1}^+ = E[e_{z,k+1}^+ (e_{z,k+1}^+)^T] via

Z_{k+1}^+ = (I - \mathcal{K}_{k+1} Y_{k+1}) Z_{k+1}^- (I - \mathcal{K}_{k+1} Y_{k+1})^T + \mathcal{K}_{k+1} R_{k+1} \mathcal{K}_{k+1}^T,

with K_{k+1} = (P_{k+1}^- H_{k+1}^T + M_{k+1} B_\eta \Lambda_{k+1}^T) W_{k+1}^{-1}, P_{k+1}^+ = E[e_{k+1}^+ (e_{k+1}^+)^T], and E[e_{k+1}^+ b_\nu^T] = L_{k+1} B_\nu, yields

P_{k+1}^+ = P_{k+1}^- - K_{k+1}(H_{k+1} P_{k+1}^- + \Lambda_{k+1} B_\eta M_{k+1}^T) - (P_{k+1}^- H_{k+1}^T + M_{k+1} B_\eta \Lambda_{k+1}^T) K_{k+1}^T + K_{k+1}(H_{k+1} P_{k+1}^- H_{k+1}^T + R_{k+1} + \Lambda_{k+1} B_\eta \Lambda_{k+1}^T + H_{k+1} M_{k+1} B_\eta \Lambda_{k+1}^T + \Lambda_{k+1} B_\eta M_{k+1}^T H_{k+1}^T) K_{k+1}^T,
L_{k+1} = (I - K_{k+1} H_{k+1})(\Phi_k L_k + \Upsilon_k),

which are the same as Eq. (19) and Eq. (13). The fundamental relationships presented in Table 1 are replicated by the consider filter when we set the initial conditions M_0 = 0, L_0 = 0, and K_0 = 0.
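The equivalence established above can also be verified numerically by running the Table 1 recursion alongside an augmented-state Schmidt consider filter and comparing the resulting covariances. Below is a minimal NumPy sketch, not code from the paper; the constant system matrices, dimensions, and step count are illustrative choices:

```python
import numpy as np

n, m, q, r, p = 2, 1, 1, 1, 1
Phi = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition matrix
Ups = np.array([[0.005], [0.1]])           # process-bias shape matrix
J = np.array([[0.0], [1.0]])               # process-noise shape matrix
V = np.array([[0.01]])                     # white process-noise covariance
H = np.array([[1.0, 0.0]])                 # measurement mapping matrix
Lam = np.array([[1.0]])                    # measurement-bias shape matrix
R = np.array([[0.04]])                     # white measurement-noise covariance
B_nu = np.array([[0.09]])                  # process-bias covariance
B_eta = np.array([[0.25]])                 # measurement-bias covariance
P0 = np.eye(n)

# --- Uncompensated-bias filter (Table 1): propagate, then update ---
P = P0.copy()
L = np.zeros((n, m))    # L_0 = 0
M = np.zeros((n, q))    # M_1 = 0 (first event is a propagation)
for _ in range(20):
    P = (Phi @ P @ Phi.T + Ups @ B_nu @ Ups.T + J @ V @ J.T
         + Phi @ L @ B_nu @ Ups.T + Ups @ B_nu @ L.T @ Phi.T)   # Eq. (12)
    W = (H @ P @ H.T + R + Lam @ B_eta @ Lam.T
         + H @ M @ B_eta @ Lam.T + Lam @ B_eta @ M.T @ H.T)     # residual covariance
    K = (P @ H.T + M @ B_eta @ Lam.T) @ np.linalg.inv(W)        # Eq. (20)
    P = P - K @ W @ K.T                                         # covariance update
    L = (np.eye(n) - K @ H) @ (Phi @ L + Ups)                   # Eq. (13)
    M = Phi @ ((np.eye(n) - K @ H) @ M - K @ Lam)               # Eq. (18)

# --- Schmidt consider filter on the augmented state z = [x; b_nu; b_eta] ---
N = n + m + q
Psi = np.zeros((N, N))
Psi[:n, :n] = Phi
Psi[:n, n:n+m] = Ups
Psi[n:, n:] = np.eye(m + q)                 # biases are random constants
Wn = np.zeros((N, N))
Wn[:n, :n] = J @ V @ J.T                    # process noise enters the state only
Y = np.zeros((p, N))
Y[:, :n] = H
Y[:, n+m:] = Lam                            # augmented mapping [H, 0, Lam]
Z = np.zeros((N, N))
Z[:n, :n] = P0
Z[n:n+m, n:n+m] = B_nu
Z[n+m:, n+m:] = B_eta
for _ in range(20):
    Z = Psi @ Z @ Psi.T + Wn                          # Eq. (25)
    Wres = Y @ Z @ Y.T + R
    Kc = np.zeros((N, p))
    Kc[:n] = (Z @ Y.T)[:n] @ np.linalg.inv(Wres)      # consider gain, Eq. (28)
    IKY = np.eye(N) - Kc @ Y
    Z = IKY @ Z @ IKY.T + Kc @ R @ Kc.T               # Joseph-form update
```

After any number of steps, the upper-left block of Z matches the P of the uncompensated-bias recursion, and the x-b_nu cross block matches L B_nu, as Eq. (24) predicts.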

V. Conclusions

In this work, algorithms for precise navigation are derived that include uncompensated bias terms in both the process and measurement noise. The proposed algorithm treats the biases as error terms rather than as states and produces the minimum variance estimator under this assumption. The proposed algorithm is compared to the well-known Schmidt-Kalman filter, or consider filter. The consider filter treats the biases as states but does not update them when a measurement becomes available. This note shows that the two algorithms, while approaching the problem from different perspectives, are mathematically equivalent.

References

[1] Friedland, B., "Treatment of Bias in Recursive Filtering," IEEE Transactions on Automatic Control, Vol. 14, No. 4, August 1969, pp. 359-367.
[2] Ignagni, M. B., "Separate-Bias Kalman Estimator with Bias State Noise," IEEE Transactions on Automatic Control, Vol. 35, No. 3, March 1990, pp. 338-341.
[3] Alouani, A. T., Xia, P., Rice, T. R., and Blair, W. D., "On the Optimality of Two-Stage State Estimation in the Presence of Random Bias," IEEE Transactions on Automatic Control, Vol. 38, No. 8, August 1993, pp. 1279-1282.
[4] Ignagni, M. B., "Optimal and Suboptimal Separate-Bias Kalman Estimator for a Stochastic Bias," IEEE Transactions on Automatic Control, Vol. 45, No. 3, March 2000, pp. 547-551.
[5] Jazwinski, A. H., Stochastic Processes and Filtering Theory, Vol. 64 of Mathematics in Science and Engineering, Academic Press, 1970, pp. 281-285.
[6] Zanetti, R. and Bishop, R. H., "Entry Navigation Dead-Reckoning Error Analysis: Theoretical Foundations of the Discrete-Time Case," Proceedings of the AAS/AIAA Astrodynamics Specialist Conference, August 19-23, 2007, Mackinac Island, Michigan, Vol. 129 of Advances in the Astronautical Sciences, 2007, pp. 979-994, AAS 07-311.
[7] Zanetti, R., Advanced Navigation Algorithms for Precision Landing, Ph.D. thesis, The University of Texas at Austin, Austin, Texas, December 2007.
[8] Hough, M. E., "Orbit Determination with Improved Covariance Fidelity, Including Sensor Measurement Biases," Journal of Guidance, Control, and Dynamics, Vol. 34, No. 3, May-June 2011, pp. 903-911.
[9] Bierman, G. J., Factorization Methods for Discrete Sequential Estimation, Vol. 128 of Mathematics in Science and Engineering, Academic Press, 1978, pp. 165-171.