Second-Order Divided-Difference Filter Using a Generalized Complex-Step Approximation


Second-Order Divided-Difference Filter Using a Generalized Complex-Step Approximation

Kok-Lam Lai and John L. Crassidis
University at Buffalo, State University of New York, Amherst, NY

This paper presents a framework for the second-order divided-difference filter using a generalized complex-step derivative approximation. For first derivatives the complex-step approach does not suffer the subtraction cancellation errors of standard numerical finite-difference approaches. Therefore, since an arbitrarily small step size can be chosen, the complex-step method can achieve near-analytical accuracy. However, for second derivatives a straightforward implementation of the complex-step approach does suffer from roundoff errors, so an arbitrarily small step size cannot be chosen. Previous work expanded upon the standard complex-step approach to provide a wider range of accuracy for both the first- and second-derivative approximations. The new extensions allow the use of a single step size that provides optimal accuracy for both derivative approximations, which are used in the divided-difference filter in order to improve its accuracy. Simulation results are provided to show the performance of the new complex-step approximations on the divided-difference filter.

I. Introduction

Filtering algorithms, such as the extended Kalman filter (EKF), the unscented Kalman filter (UKF) and particle filters3,4 (PFs), are commonly used both to estimate unmeasurable states and to filter noisy measurements. The EKF and UKF assume that the process noise and measurement noise are represented by zero-mean Gaussian white-noise processes. Even if this is true, both filters only provide approximate solutions when the state and/or measurement models are nonlinear, since the posterior density function is most often non-Gaussian. The EKF typically works well only in the region where the first-order Taylor-series linearization adequately approximates the non-Gaussian probability density function (pdf).
The UKF falls into the category of filters that do not require linearizations of the state dynamics propagation and measurement model equations.5 Reference 6 further classifies these filters collectively as linear regression Kalman filters, since they all linearize the statistics of the nonlinear models instead of the nonlinear models themselves. While the Jacobian and Hessian matrices are not needed, these filters attempt to capture the statistics of transformations via finite-difference equations. However, these filters still fall into the general category of the Kalman filter, which updates the propagated states linearly as a function of the difference between estimated measurements and actual measurements. The UKF works on the premise that with a fixed number of parameters it should be easier to approximate a Gaussian distribution than to approximate an arbitrary nonlinear function. This in essence can provide higher-order moments for the computation of the posterior function without the need to calculate Jacobian matrices as required in the EKF. Still, the standard form of the EKF has remained the most popular method for nonlinear estimation to this day, and other designs are investigated only when the performance of this standard form is not sufficient. The divided-difference filter (DDF) uses divided-difference approximations of derivatives based on Stirling's interpolation formula.7 In comparison with the EKF, the DDF results in a similar mean, but a different posterior covariance. The DDF uses techniques based on similar principles to those of the UKF. As stated in Ref. 8, the DDF and the UKF simply retain a different subset of cross-terms in the higher-order expansions. The DDF has a smaller absolute error in the fourth-order term and also

Ph.D., Department of Mechanical & Aerospace Engineering. klai@buffalo.edu. Student Member AIAA.
Associate Professor, Department of Mechanical & Aerospace Engineering.
johnc@eng.buffalo.edu. Associate Fellow AIAA.

guarantees positive semi-definiteness of the posterior covariance, while the UKF may result in a non-positive semi-definite covariance. The second-order divided-difference (DD) filter from Ref. 5 is a more generalized version of the UKF that offers the same mean-estimation accuracy. However, the covariance estimation is more accurate in the DD filter owing to a more accurate treatment of the Gaussian statistics. Also, the DD filter operates at the square-root level.

This paper deals with complex-step approximations of derivatives for use in the DDF. Using complex numbers for computational purposes is often intentionally avoided because of the nonintuitive nature of this domain. However, this perception should not handicap our ability to seek better solutions to the problems associated with traditional (real-valued) finite-difference approaches. Many physical-world phenomena actually have their roots in the complex domain.9 As an aside we note that some interesting historical notes on the discovery and acceptance of the complex variable can also be found in this reference. A brief introduction to complex variables can also be found in Ref. 10. The complex-step derivative approximation (CSDA) can be used to determine first derivatives in a relatively easy way, while providing near-analytic accuracy. Early work on obtaining derivatives via a complex-step approximation in order to improve overall accuracy is shown by Lyness and Moler, as well as Lyness.9 Various recent papers reintroduce the complex-step approach to the engineering community.6 The advantages of the complex-step approximation over a standard finite difference include: 1) the Jacobian approximation is not subject to the subtractive cancellations inherent in roundoff errors, 2) it can be used on discontinuous functions, and 3) it is easy to implement in a black-box manner, thereby making it applicable to general nonlinear functions.
In this paper, the first- and second-order finite differences used in the derivation of the DD filter are replaced with complex-step derivative approximations, thus generalizing the filter to the complex domain. The organization of this paper is as follows. First, the complex-step approximation to the derivative is reviewed for both the first and second derivatives. Then, the generalized complex-step approximation is summarized for both the scalar and vector cases. Next, the approximations are used to approximate the mean and covariance of a stochastic function, followed by the second-order approximation. Then, the second-order complex divided-difference filter is shown. Finally, simulation comparisons are made between the standard DD and new filters.

II. Complex-Step Approximation to the Derivative

In this section the complex-step approximation is shown. First, the derivative approximation of a scalar variable is summarized, followed by an extension to the second derivative. Then, approximations for multivariable functions are presented for the Jacobian and Hessian matrices. Numerical finite-difference approximations for any order derivative can be obtained from Cauchy's integral formula:

f^{(n)}(z) = \frac{n!}{2\pi i} \oint_\Gamma \frac{f(\xi)}{(\xi - z)^{n+1}} \, d\xi    (1)

This function can be approximated by

f^{(n)}(z) \approx \frac{n!}{m h^n} \sum_{j=0}^{m-1} f\!\left( z + h\, e^{i 2\pi j/m} \right) e^{-i 2\pi j n/m}    (2)

where h is the step size and i is the imaginary unit, \sqrt{-1}. For example, when n = 1, m = 2,

f'(z) = \frac{1}{2h} \left[ f(z+h) - f(z-h) \right]    (3)

We can see that this formula involves a subtraction that would introduce near-cancellation errors when the step size becomes too small.

II.A. First Derivative

The derivation of the complex-step derivative approximation is accomplished by approximating a nonlinear function with a complex variable using a Taylor series expansion:5

f(x + ih) = f(x) + ihf'(x) - \frac{h^2}{2!} f''(x) - i\frac{h^3}{3!} f^{(3)}(x) + \frac{h^4}{4!} f^{(4)}(x) + \cdots    (4)

Taking only the imaginary parts of both sides gives

\Im\{ f(x + ih) \} = hf'(x) - \frac{h^3}{3!} f^{(3)}(x) + \cdots    (5)

Dividing by h and rearranging yields

f'(x) = \frac{\Im\{ f(x + ih) \}}{h} + \frac{h^2}{3!} f^{(3)}(x) + \cdots = \frac{\Im\{ f(x + ih) \}}{h} + O(h^2)    (6)

Terms of order h^2 or higher can be ignored since the interval h can be chosen up to machine precision. Thus, to within first order, the complex-step derivative approximation is given by

f'(x) = \frac{\Im\{ f(x + ih) \}}{h}, \qquad E_{trunc}(h) = \frac{h^2}{6} f^{(3)}(x)    (7)

where E_{trunc}(h) denotes the truncation error. Note that this solution is not a function of differences, which ultimately provides better accuracy than a standard finite difference.

Figure 1. Finite-difference error versus step size: the total error is the sum of a truncation error that grows with the step size and a roundoff error that grows as the step size shrinks, with a point of diminishing returns in between.

II.B. Second Derivative

In order to derive a second-derivative approximation, the real components of Eq. (4) are taken, which gives

\frac{h^2}{2!} f''(x) = f(x) - \Re\{ f(x + ih) \} + \frac{h^4}{4!} f^{(4)}(x) + \cdots    (8)

Solving for f''(x) yields

f''(x) = \frac{2!}{h^2} \left[ f(x) - \Re\{ f(x + ih) \} \right] + \frac{2!\, h^2}{4!} f^{(4)}(x) + \cdots    (9)

Analogous to the approach shown before, we truncate to obtain the second-order approximation

f''(x) = \frac{2}{h^2} \left[ f(x) - \Re\{ f(x + ih) \} \right], \qquad E_{trunc}(h) = \frac{h^2}{12} f^{(4)}(x)    (10)

As with Cauchy's formula, we can see that this formula involves a subtraction that may introduce machine cancellation errors when the step size is too small. Hence, this approach is subject to roundoff errors for small step sizes since difference errors arise, as shown by the classic plot in Figure 1. As the step size increases the accuracy decreases due to truncation errors associated with not adequately approximating the true slope at the point of interest. Decreasing the step size increases the accuracy, but only to an optimum point. Any further decrease results in a degradation of the accuracy due to roundoff errors. Hence, a tradeoff between truncation and roundoff errors exists.
In fact, through numerous simulations, the complex-step second-derivative approximation is markedly worse than a standard finite-difference approach.8
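The contrast between the first- and second-derivative approximations above can be seen numerically. The sketch below is an illustration (not code from the paper) using f(x) = e^x, whose derivatives are all e^x: the first-derivative formula tolerates an extremely small step, while the second-derivative formula must balance truncation against roundoff.

```python
import cmath
import math

def cs_first(f, x, h=1e-20):
    # Complex-step first derivative: no subtraction, so h can be
    # chosen near machine precision.
    return f(x + 1j * h).imag / h

def cs_second(f, x, h=1e-4):
    # Complex-step second derivative: the subtraction
    # f(x) - Re{f(x+ih)} reintroduces cancellation, so h must
    # trade truncation error against roundoff error.
    return 2.0 * (f(x).real - f(x + 1j * h).real) / h**2

x = 1.3
exact = math.exp(x)  # f'(x) = f''(x) = e^x for f = exp
err1 = abs(cs_first(cmath.exp, x) - exact)
err2 = abs(cs_second(cmath.exp, x) - exact)
print(err1, err2)  # err1 is near machine precision; err2 is many orders larger
```

Shrinking h further in `cs_second` makes the roundoff term dominate, exactly the behavior sketched in Figure 1.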

Figure 2. Various complex numbers.

III. Generalized Complex-Step Approximations

III.A. Scalar Case

Reference 8 presents a generalization of the complex-step approach, which shows better performance characteristics in the first- and second-order approximations than the standard complex-step derivatives. The basic idea involves a linearization about a general complex number, rather than i alone. The step sizes for the difference and average operators are augmented with a unity-magnitude complex number:

\delta f(x) \equiv f\!\left( x + \tfrac{1}{2} e^{\theta i} h \right) - f\!\left( x - \tfrac{1}{2} e^{\theta i} h \right)    (11a)

\mu f(x) \equiv \tfrac{1}{2} \left[ f\!\left( x + \tfrac{1}{2} e^{\theta i} h \right) + f\!\left( x - \tfrac{1}{2} e^{\theta i} h \right) \right]    (11b)

where θ is the associated angle of departure from the positive real axis. Figure 2 shows the unity-magnitude complex number raised to various rational-number powers with a common denominator of 6, i.e., multiples of 5. Let us take a moment and revisit the Taylor series, nominally at x̄:

f(x) = f(x̄) + f'(x̄)(x - x̄) + \frac{1}{2!} f''(x̄)(x - x̄)^2 + \frac{1}{3!} f^{(3)}(x̄)(x - x̄)^3 + \cdots    (12)

and Taylor series with step sizes of +e^{θi} h and -e^{θi} h:

f(x̄ + e^{θi} h) = f(x̄) + f'(x̄)(e^{θi} h) + \frac{1}{2!} f''(x̄)(e^{θi} h)^2 + \frac{1}{3!} f^{(3)}(x̄)(e^{θi} h)^3 + \frac{1}{4!} f^{(4)}(x̄)(e^{θi} h)^4 + \cdots    (13a)

f(x̄ - e^{θi} h) = f(x̄) - f'(x̄)(e^{θi} h) + \frac{1}{2!} f''(x̄)(e^{θi} h)^2 - \frac{1}{3!} f^{(3)}(x̄)(e^{θi} h)^3 + \frac{1}{4!} f^{(4)}(x̄)(e^{θi} h)^4 + \cdots    (13b)

A Taylor series expansion evaluates the derivative information of an analytical function at a precise point and assumes these derivatives remain valid in the vicinity of this point. For highly nonlinear functions, the derivative calculations deviate quickly from the nominal point. Therefore, derivative information with uniform performance across a region of interest should be used. This is achieved with derivatives derived by using interpolation. Another advantage is that interpolations generally require only function evaluations

and not analytical derivations. The Stirling interpolation9 is chosen to obtain the derivative information. With the Stirling interpolation, an approximation of a nonlinear function can be expressed as

f(x) = f(x̄ + e^{θi} p h) = f(x̄) + p\, \mu\delta f(x̄) + \frac{p^2}{2!} \delta^2 f(x̄) + \frac{p(p^2-1)}{3!} \mu\delta^3 f(x̄) + \frac{p^2(p^2-1)}{4!} \delta^4 f(x̄) + \frac{p(p^2-1)(p^2-4)}{5!} \mu\delta^5 f(x̄) + \cdots    (14)

Generally, -1 < p < 1 for interpolation within the region of interest, between -h and +h. Concentrating only on the first two derivative expansions gives

f(x) \approx f(x̄) + f'_{cs}(x̄)(x - x̄) + \frac{1}{2!} f''_{cs}(x̄)(x - x̄)^2    (15)

where f'_{cs}(x̄) and f''_{cs}(x̄) are the first and second complex-step derivative approximations without the truncation error:

f'_{cs}(x) = \frac{f(x + e^{i\theta} h) - f(x - e^{i\theta} h)}{2 e^{i\theta} h}    (16a)

f''_{cs}(x) = \frac{f(x + e^{i\theta} h) - 2f(x) + f(x - e^{i\theta} h)}{(e^{i\theta} h)^2}    (16b)

Equation (15) is basically a Taylor series with the derivatives replaced by complex-step approximations. Assuming f is analytic, substituting Eqs. (13) into Eqs. (16) for Eq. (15) yields

f(x̄) + f'_{cs}(x̄)(x - x̄) + \frac{1}{2!} f''_{cs}(x̄)(x - x̄)^2 = f(x̄) + f'(x̄)(x - x̄) + \frac{1}{2!} f''(x̄)(x - x̄)^2 + \left[ \frac{1}{3!} f^{(3)}(x̄)(e^{θi} h)^2 + \frac{1}{5!} f^{(5)}(x̄)(e^{θi} h)^4 + \cdots \right](x - x̄) + \left[ \frac{1}{4!} f^{(4)}(x̄)(e^{θi} h)^2 + \frac{1}{6!} f^{(6)}(x̄)(e^{θi} h)^4 + \cdots \right](x - x̄)^2    (17)

The first three terms on the right-hand side of the equation are the first three terms of the Taylor series. The choice of h has an influence only on the remainder terms; the optimal choice is shown in Ref. 5 to be h^2 = 3 for a Gaussian distribution. Another variable to be manipulated to our advantage is the θ value, which is related to the power of an imaginary number.

III.B. Vector Case

The scalar analysis is now extended to the multivariable case, with x ∈ R^n. The two Stirling operators in vector form are simply

\delta_p f(x) \equiv f\!\left( x + \tfrac{1}{2} e^{θi} h \varepsilon_p \right) - f\!\left( x - \tfrac{1}{2} e^{θi} h \varepsilon_p \right)    (18a)

\mu_p f(x) \equiv \tfrac{1}{2} \left[ f\!\left( x + \tfrac{1}{2} e^{θi} h \varepsilon_p \right) + f\!\left( x - \tfrac{1}{2} e^{θi} h \varepsilon_p \right) \right]    (18b)

where the subscript p emphasizes that it is the p-th partial operator, with ε_p being the p-th column of an n × n identity matrix (the p-th basis vector). Let y = f(x) denote a nonlinear vector transformation. Its Taylor series can be expressed as

y = f(x̄ + \Delta x) = \sum_{p=0}^{\infty} \frac{1}{p!} D^p_{\Delta x} f = f(x̄) + D_{\Delta x} f + \frac{1}{2!} D^2_{\Delta x} f + \frac{1}{3!} D^3_{\Delta x} f + \cdots    (19)
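The generalized first- and second-derivative approximations above reduce to ordinary central differences at θ = 0 and to the standard complex-step forms at θ = 90°. A minimal sketch (an illustration, not code from the paper):

```python
import cmath
import math

def gcs_first(f, x, h, theta):
    # Central divided difference along the complex direction e^{i*theta}.
    e = cmath.exp(1j * theta)
    return (f(x + e * h) - f(x - e * h)) / (2 * e * h)

def gcs_second(f, x, h, theta):
    # Second divided difference along the same complex direction.
    e = cmath.exp(1j * theta)
    return (f(x + e * h) - 2 * f(x) + f(x - e * h)) / (e * h) ** 2

# theta = 0 recovers the real central differences; theta = pi/2 uses a
# purely imaginary step, as in the standard complex-step approach.
d1 = gcs_first(cmath.exp, 0.5, 1e-3, math.pi / 2)
d2 = gcs_second(cmath.exp, 0.5, 1e-3, math.pi / 2)
```

For an analytic f evaluated at a real x, the imaginary parts of `d1` and `d2` are negligible and the real parts approximate f'(x) and f''(x).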

where

D^p_{\Delta x} f = \left[ \Delta x_1 \frac{\partial}{\partial x_1} + \Delta x_2 \frac{\partial}{\partial x_2} + \cdots + \Delta x_n \frac{\partial}{\partial x_n} \right]^p f(x) \Big|_{x = x̄}    (20)

The first two operators are simply

D_{\Delta x} f = \sum_{p=1}^{n} \Delta x_p \frac{\partial}{\partial x_p} f(x) \Big|_{x = x̄}    (21a)

D^2_{\Delta x} f = \sum_{p=1}^{n} \sum_{q=1}^{n} \Delta x_p \Delta x_q \frac{\partial^2}{\partial x_p \partial x_q} f(x) \Big|_{x = x̄}    (21b)

or, expressed with a Stirling interpolation,

\tilde{D}_{\Delta x} f = \frac{1}{e^{iθ} h} \left[ \sum_{p=1}^{n} \Delta x_p\, \mu_p \delta_p \right] f(x̄)    (22a)

\tilde{D}^2_{\Delta x} f = \frac{1}{(e^{iθ} h)^2} \left[ \sum_{p=1}^{n} (\Delta x_p)^2 \delta_p^2 + \sum_{p=1}^{n} \sum_{q=1,\, q \neq p}^{n} \Delta x_p \Delta x_q (\mu_p \delta_p)(\mu_q \delta_q) \right] f(x̄)    (22b)

Restricting the series to second order gives

y \approx f(x̄) + \tilde{D}_{\Delta x} f(x̄) + \frac{1}{2!} \tilde{D}^2_{\Delta x} f(x̄)    (23)

Equation (23) is just one of the many multivariable extensions of the Taylor series.8

IV. Approximation of Mean and Covariance

IV.A. Truth Quantities

The estimated mean and error covariance will be compared to the true mean and error covariance for performance comparison. The true mean and error covariance are simply

x̄ = E\{x\}    (24a)

P_{xx} = E\{ [x - x̄][x - x̄]^T \}    (24b)

where E denotes expectation. Additionally, the true mean after the nonlinear transformation, its error covariance and the cross covariance need to be determined:

ȳ_T = E\{ f(x) \}    (25a)

P_{yy,T} = E\{ [f(x) - ȳ_T][f(x) - ȳ_T]^T \}    (25b)

P_{xy,T} = E\{ [x - x̄][f(x) - ȳ_T]^T \}    (25c)

IV.B. Statistical Decoupling

Equation (23) is one of the many multivariable extensions using an interpolation approximation. Other extensions can be derived by using a linear transformation of the original vector,

z = S^{-1} x    (26)

and the new nonlinear transformation

\tilde{f}(z) \equiv f(Sz) = f(x)    (27)

The Taylor series expansion for Eq. (27) is simply

y \approx \tilde{f}(z̄) + \tilde{D}_{\Delta z} \tilde{f}(z̄) + \frac{1}{2!} \tilde{D}^2_{\Delta z} \tilde{f}(z̄)    (28)

Since Eq. (26) is just a linear constant transformation, the Taylor series expansions in Eqs. (23) and (28) are the same. However, this is not true with interpolation in place of the derivatives. For example, consider the first-order partial part of the Taylor series:

\tilde{D}_{\Delta x} f(x̄) = \frac{1}{e^{iθ} h} \sum_{p=1}^{n} \Delta x_p\, \mu_p \delta_p f(x̄) = \frac{1}{2 e^{iθ} h} \sum_{p=1}^{n} \Delta x_p \left[ f(x̄ + e^{iθ} h \varepsilon_p) - f(x̄ - e^{iθ} h \varepsilon_p) \right]    (29a)

\tilde{D}_{\Delta z} \tilde{f}(z̄) = \frac{1}{e^{iθ} h} \sum_{p=1}^{n} \Delta z_p\, \mu_p \delta_p \tilde{f}(z̄) = \frac{1}{2 e^{iθ} h} \sum_{p=1}^{n} \Delta z_p \left[ \tilde{f}(z̄ + e^{iθ} h \varepsilon_p) - \tilde{f}(z̄ - e^{iθ} h \varepsilon_p) \right] = \frac{1}{2 e^{iθ} h} \sum_{p=1}^{n} \Delta z_p \left[ f(S[z̄ + e^{iθ} h \varepsilon_p]) - f(S[z̄ - e^{iθ} h \varepsilon_p]) \right] = \frac{1}{2 e^{iθ} h} \sum_{p=1}^{n} \Delta z_p \left[ f(x̄ + e^{iθ} h s_p) - f(x̄ - e^{iθ} h s_p) \right]    (29b)

where s_p is the p-th column of S. It is apparent from Eqs. (29) that Eq. (23) will be different than Eq. (28). Any square symmetric positive-definite matrix can be decomposed into two triangular matrices, each equal to the transpose of the other, P = SS^T. This is called the Cholesky decomposition, and the decomposed matrix is referred to as the Cholesky factor, S. Many other decomposition schemes exist, but the Cholesky decomposition proves to be computationally more efficient. One particularly useful transformation of x uses the Cholesky factor of the state error-covariance matrix (P_{xx} = S_x S_x^T) to stochastically decouple x into z,

z = S_x^{-1} x    (30)

so that the elements of z are mutually independent, each with unity variance:

E\{ [z - z̄][z - z̄]^T \} = I, \qquad z̄ = E\{z\}    (31)

\Delta z \sim N(0, I)    (32)

with Δz = z − z̄; notice that the symmetrically distributed elements of z retain their zero-mean distribution. The advantage of this decoupling will be made clear in the subsequent analysis, which works with z instead of x directly. Conversion back to x upon completion of the analysis is trivial. Also, during the analysis, f̃(z) is defined for z ∈ R^n. V.
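The decoupling in this subsection is easy to check numerically. The sketch below (with illustrative values, not from the paper) whitens samples of x through the Cholesky factor of P_xx and verifies that the transformed samples have approximately identity covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

x_mean = np.array([1.0, 2.0])
Pxx = np.array([[4.0, 1.0],
                [1.0, 2.0]])

Sx = np.linalg.cholesky(Pxx)  # Pxx = Sx @ Sx.T, lower triangular
xs = rng.multivariate_normal(x_mean, Pxx, size=200_000)

# z = Sx^{-1} x: solve the triangular system rather than forming an inverse
zs = np.linalg.solve(Sx, (xs - x_mean).T).T

print(np.cov(zs.T))  # approximately the 2x2 identity matrix
```

The elements of z are then mutually uncorrelated with unit variance, which is what makes the moment bookkeeping in the following sections tractable.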
Second-Order Approximation

The second-order polynomial approximation of the true nonlinear transformation is simply

y \approx \tilde{f}(z̄) + \tilde{D}_{\Delta z} \tilde{f} + \frac{1}{2} \tilde{D}^2_{\Delta z} \tilde{f} = \tilde{f}(z̄) + \frac{1}{e^{iθ} h} \left( \sum_{p=1}^{n} \Delta z_p\, \mu_p \delta_p \right) \tilde{f}(z̄) + \frac{1}{2(e^{iθ} h)^2} \left[ \sum_{p=1}^{n} (\Delta z_p)^2 \delta_p^2 + \sum_{p=1}^{n} \sum_{q=1,\, q \neq p}^{n} \Delta z_p \Delta z_q (\mu_p \delta_p)(\mu_q \delta_q) \right] \tilde{f}(z̄)    (33)

and its estimated quantity, ȳ = E{y}:

ȳ = E\left\{ \tilde{f}(z̄) + \frac{1}{2(e^{iθ} h)^2} \sum_{p=1}^{n} (\Delta z_p)^2 \delta_p^2 \tilde{f}(z̄) \right\} = \tilde{f}(z̄) + \frac{\sigma^2}{2(e^{iθ} h)^2} \sum_{p=1}^{n} \left[ \tilde{f}(z̄ + e^{iθ} h \varepsilon_p) - 2\tilde{f}(z̄) + \tilde{f}(z̄ - e^{iθ} h \varepsilon_p) \right] = \frac{(e^{iθ} h)^2 - n}{(e^{iθ} h)^2} f(x̄) + \frac{1}{2(e^{iθ} h)^2} \sum_{p=1}^{n} \left[ f(x̄ + e^{iθ} h s_{x,p}) + f(x̄ - e^{iθ} h s_{x,p}) \right]    (34)

From Eq. (33), notice that f̃(z̄) is a deterministic term; thus, instead of y, the quantity y − f̃(z̄) can be used in the derivation of the covariance of y, as this simplifies the intermediate analysis:

P_{yy,T} = E\{ [y - E\{y\}][y - E\{y\}]^T \} = E\{ [y - \tilde{f}(z̄)][y - \tilde{f}(z̄)]^T \} - E\{ y - \tilde{f}(z̄) \}\, E\{ y - \tilde{f}(z̄) \}^T    (35)

This leaves the estimated covariance as

P_{yy} = E\left\{ \left[ \tilde{D}_{\Delta z} \tilde{f} + \frac{1}{2} \tilde{D}^2_{\Delta z} \tilde{f} \right] \left[ \tilde{D}_{\Delta z} \tilde{f} + \frac{1}{2} \tilde{D}^2_{\Delta z} \tilde{f} \right]^T \right\} - E\left\{ \tilde{D}_{\Delta z} \tilde{f} + \frac{1}{2} \tilde{D}^2_{\Delta z} \tilde{f} \right\} E\left\{ \tilde{D}_{\Delta z} \tilde{f} + \frac{1}{2} \tilde{D}^2_{\Delta z} \tilde{f} \right\}^T = \underbrace{E\{ [\tilde{D}_{\Delta z} \tilde{f}][\tilde{D}_{\Delta z} \tilde{f}]^T \}}_{1} + \underbrace{\frac{1}{4} E\{ [\tilde{D}^2_{\Delta z} \tilde{f}][\tilde{D}^2_{\Delta z} \tilde{f}]^T \}}_{2} - \underbrace{\frac{1}{4} E\{ \tilde{D}^2_{\Delta z} \tilde{f} \} E\{ \tilde{D}^2_{\Delta z} \tilde{f} \}^T}_{3}    (36)

Note that all odd moments in Eq. (36) evaluate to zero due to the symmetric distribution of the elements of z and the fact that they are uncorrelated with each other. Given the length of the individual analysis, each of the terms 1, 2 and 3 will be evaluated separately. The p-th moment of an element of z is denoted σ_p. Also, from Eq. (31), the second moment of each element of z is unity:

E\{ [\tilde{D}_{\Delta z} \tilde{f}][\tilde{D}_{\Delta z} \tilde{f}]^T \} = \frac{\sigma^2}{(e^{iθ} h)^2} \sum_{p=1}^{n} [\mu_p \delta_p \tilde{f}(z̄)][\mu_p \delta_p \tilde{f}(z̄)]^T = \frac{1}{4(e^{iθ} h)^2} \sum_{p=1}^{n} \left[ \tilde{f}(z̄ + e^{iθ} h \varepsilon_p) - \tilde{f}(z̄ - e^{iθ} h \varepsilon_p) \right] \left[ \tilde{f}(z̄ + e^{iθ} h \varepsilon_p) - \tilde{f}(z̄ - e^{iθ} h \varepsilon_p) \right]^T = \frac{1}{4(e^{iθ} h)^2} \sum_{p=1}^{n} \left[ f(x̄ + e^{iθ} h s_{x,p}) - f(x̄ - e^{iθ} h s_{x,p}) \right] \left[ f(x̄ + e^{iθ} h s_{x,p}) - f(x̄ - e^{iθ} h s_{x,p}) \right]^T    (37)

where s_{x,p} is the p-th column of the square Cholesky factor from Eq. (30).

Term 2, \frac{1}{4} E\{ [\tilde{D}^2_{\Delta z} \tilde{f}][\tilde{D}^2_{\Delta z} \tilde{f}]^T \}, consists of three kinds of terms (p, q, p ≠ q):

E\{ [(\Delta z_p)^2 \delta_p^2 \tilde{f}][(\Delta z_p)^2 \delta_p^2 \tilde{f}]^T \} = [\delta_p^2 \tilde{f}][\delta_p^2 \tilde{f}]^T \sigma_4    (38a)

E\{ [(\Delta z_p)^2 \delta_p^2 \tilde{f}][(\Delta z_q)^2 \delta_q^2 \tilde{f}]^T \} = [\delta_p^2 \tilde{f}][\delta_q^2 \tilde{f}]^T \sigma_2^2    (38b)

E\{ [\Delta z_p \Delta z_q (\mu_p \delta_p)(\mu_q \delta_q) \tilde{f}][\Delta z_p \Delta z_q (\mu_p \delta_p)(\mu_q \delta_q) \tilde{f}]^T \} = [(\mu_p \delta_p)(\mu_q \delta_q) \tilde{f}][(\mu_p \delta_p)(\mu_q \delta_q) \tilde{f}]^T \sigma_2^2    (38c)

Term 3, \frac{1}{4} E\{ \tilde{D}^2_{\Delta z} \tilde{f} \} E\{ \tilde{D}^2_{\Delta z} \tilde{f} \}^T, consists of two kinds of terms (p, q, p ≠ q):

E\{ (\Delta z_p)^2 \delta_p^2 \tilde{f} \} E\{ (\Delta z_p)^2 \delta_p^2 \tilde{f} \}^T = [\delta_p^2 \tilde{f}][\delta_p^2 \tilde{f}]^T \sigma_2^2    (39a)

E\{ (\Delta z_p)^2 \delta_p^2 \tilde{f} \} E\{ (\Delta z_q)^2 \delta_q^2 \tilde{f} \}^T = [\delta_p^2 \tilde{f}][\delta_q^2 \tilde{f}]^T \sigma_2^2    (39b)

The terms in Eqs. (38b) and (39b) cancel each other. The terms in Eq. (38c) are discarded from the analysis for the reason explained in Ref. 7: basically, they do not contribute to better filter accuracy, while they are computationally expensive to calculate. The Stirling-operator portions of Eqs. (38a) and (39a) are expanded as

[\delta_p^2 \tilde{f}(z̄)][\delta_p^2 \tilde{f}(z̄)]^T = \left[ \tilde{f}(z̄ + e^{iθ} h \varepsilon_p) - 2\tilde{f}(z̄) + \tilde{f}(z̄ - e^{iθ} h \varepsilon_p) \right] \left[ \tilde{f}(z̄ + e^{iθ} h \varepsilon_p) - 2\tilde{f}(z̄) + \tilde{f}(z̄ - e^{iθ} h \varepsilon_p) \right]^T = \left[ f(x̄ + e^{iθ} h s_{x,p}) - 2f(x̄) + f(x̄ - e^{iθ} h s_{x,p}) \right] \left[ f(x̄ + e^{iθ} h s_{x,p}) - 2f(x̄) + f(x̄ - e^{iθ} h s_{x,p}) \right]^T    (40)

Again, σ_2 = 1 from the way z is generated. If the distribution is Gaussian, then σ_4 = 3 and h^2 = 3; for the analysis refer to Section 3.3 of Ref. 7. Finally, P_{yy} becomes

P_{yy} = \frac{1}{4(e^{iθ} h)^2} \sum_{p=1}^{n} \left[ f(x̄ + e^{iθ} h s_{x,p}) - f(x̄ - e^{iθ} h s_{x,p}) \right] \left[ f(x̄ + e^{iθ} h s_{x,p}) - f(x̄ - e^{iθ} h s_{x,p}) \right]^T + \frac{(e^{iθ} h)^2 - 1}{4(e^{iθ} h)^4} \sum_{p=1}^{n} \left[ f(x̄ + e^{iθ} h s_{x,p}) + f(x̄ - e^{iθ} h s_{x,p}) - 2f(x̄) \right] \left[ f(x̄ + e^{iθ} h s_{x,p}) + f(x̄ - e^{iθ} h s_{x,p}) - 2f(x̄) \right]^T    (41)

Similarly, the cross-covariance can be derived as

P_{xy} = E\{ [x - x̄][y - ȳ]^T \} = E\left\{ [S_x \Delta z] \left[ \tilde{D}_{\Delta z} \tilde{f} + \frac{1}{2} \tilde{D}^2_{\Delta z} \tilde{f} - E\left\{ \frac{1}{2} \tilde{D}^2_{\Delta z} \tilde{f} \right\} \right]^T \right\} = E\{ [S_x \Delta z][\tilde{D}_{\Delta z} \tilde{f}]^T \} = \frac{1}{2 e^{iθ} h} \sum_{p=1}^{n} s_{x,p} \left[ f(x̄ + e^{iθ} h s_{x,p}) - f(x̄ - e^{iθ} h s_{x,p}) \right]^T    (42)

where, again, the odd moments evaluate to zero.

VI. Second-Order Complex Divided-Difference Filter

This section summarizes the algorithm for a second-order complex-step filter based on the DD filter. For its resemblance to the DD filter, the interested reader is again referred to Ref. 7 for the complete derivations.
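The mean and covariance approximations derived above can be coded directly. The sketch below is an illustration with hypothetical test values (not from the paper); with θ = 0 it reduces to the real-valued divided-difference transform, and a linear function provides a sanity check, since the second-order correction terms then vanish and the exact mean and covariance are recovered.

```python
import numpy as np

def dd2_transform(f, xbar, Pxx, hsq=3.0, theta=0.0):
    # Approximate mean and covariance of y = f(x) via second-order
    # divided differences. hsq = h^2 (h^2 = 3 is optimal for Gaussian x);
    # theta generalizes the step to the complex direction e^{i*theta}.
    n = len(xbar)
    eh = np.exp(1j * theta) * np.sqrt(hsq)
    S = np.linalg.cholesky(Pxx)
    f0 = np.asarray(f(xbar), dtype=complex)
    fp = [np.asarray(f(xbar + eh * S[:, p]), dtype=complex) for p in range(n)]
    fm = [np.asarray(f(xbar - eh * S[:, p]), dtype=complex) for p in range(n)]

    ybar = (eh**2 - n) / eh**2 * f0
    ybar += sum(fp[p] + fm[p] for p in range(n)) / (2 * eh**2)

    Pyy = np.zeros((len(f0), len(f0)), dtype=complex)
    for p in range(n):
        d1 = (fp[p] - fm[p]) / (2 * eh)   # first divided difference
        d2 = fp[p] + fm[p] - 2 * f0       # second divided difference
        Pyy += np.outer(d1, d1) + (eh**2 - 1) / (4 * eh**4) * np.outer(d2, d2)
    return ybar.real, Pyy.real

# Linear sanity check: y = A x gives ybar = A xbar and Pyy = A Pxx A^T.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
xbar = np.array([1.0, -1.0])
Pxx = np.array([[2.0, 0.5], [0.5, 1.0]])
ybar, Pyy = dd2_transform(lambda x: A @ x, xbar, Pxx)
```

For a nonlinear f, the second term of the covariance is what distinguishes this second-order transform from a first-order divided-difference (or EKF-style) approximation.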
The filter starts off with initialization of the states, the process-noise covariance, the measurement covariance and the state

error covariance. The filter then enters the measurement-update and propagation loop until the last available measurement. The measurement update is sometimes referred to as the a posteriori update, while propagation is also called the a priori update or time update. Common notations are the superscript +, which denotes updated values; the superscript −, which denotes propagated values; and k, which denotes the time index. A computationally efficient Cholesky square factor is used to maintain (and update if necessary) the covariances at the square-root level. A Householder triangularization is used as an efficient way to maintain the square Cholesky factor for a rectangular matrix.

VI.1.1. Dynamical and Measurement Models

The system dynamics and measurement model are

x(k+1) = f[x(k), u(k), v(k)], \qquad v(k) \sim N[\bar{v}(k), Q(k)]    (43)

ỹ(k) = g[x(k), w(k)], \qquad w(k) \sim N[\bar{w}(k), R(k)]    (44)

where v(k) and w(k) are i.i.d. (independent and identically distributed) random noise with given means and covariances, and u(k) is a known input vector.

VI.1.2. Initialization

1. Initialize the states and error covariance:

\hat{x}^+(0) = \hat{x}_0, \qquad P^+(0) = P_0    (45)

2. Take the Cholesky decomposition of the process-noise covariance, measurement-noise covariance and error covariance:

Q(k) = S_v(k) S_v^T(k), \qquad R(k) = S_w(k) S_w^T(k)    (46)

P^-(k) = S_x^-(k) S_x^{-T}(k), \qquad P^+(k) = S_x^+(k) S_x^{+T}(k)    (47)

VI.1.3. Measurement Update
1. Given x̂⁻(k), S_x^-(k), w̄(k), θ and h, compute the following quantities:

s^{(1)}_{yx,p}(k) = \frac{1}{2 e^{iθ} h} \left\{ g[\hat{x}^-(k) + e^{iθ} h\, s_{x,p}^-(k), \bar{w}(k)] - g[\hat{x}^-(k) - e^{iθ} h\, s_{x,p}^-(k), \bar{w}(k)] \right\}    (48a)

s^{(2)}_{yx,p}(k) = \frac{\sqrt{(e^{iθ} h)^2 - 1}}{2 (e^{iθ} h)^2} \left\{ g[\hat{x}^-(k) + e^{iθ} h\, s_{x,p}^-(k), \bar{w}(k)] + g[\hat{x}^-(k) - e^{iθ} h\, s_{x,p}^-(k), \bar{w}(k)] - 2 g[\hat{x}^-(k), \bar{w}(k)] \right\}    (48b)

S^{(1)}_{yx}(k) = \left[ s^{(1)}_{yx,1}(k) \;\; s^{(1)}_{yx,2}(k) \;\; \cdots \;\; s^{(1)}_{yx,n_x}(k) \right], \qquad S^{(2)}_{yx}(k) = \left[ s^{(2)}_{yx,1}(k) \;\; s^{(2)}_{yx,2}(k) \;\; \cdots \;\; s^{(2)}_{yx,n_x}(k) \right]    (48c)

s^{(1)}_{yw,p}(k) = \frac{1}{2 e^{iθ} h} \left\{ g[\hat{x}^-(k), \bar{w}(k) + e^{iθ} h\, s_{w,p}(k)] - g[\hat{x}^-(k), \bar{w}(k) - e^{iθ} h\, s_{w,p}(k)] \right\}    (48d)

s^{(2)}_{yw,p}(k) = \frac{\sqrt{(e^{iθ} h)^2 - 1}}{2 (e^{iθ} h)^2} \left\{ g[\hat{x}^-(k), \bar{w}(k) + e^{iθ} h\, s_{w,p}(k)] + g[\hat{x}^-(k), \bar{w}(k) - e^{iθ} h\, s_{w,p}(k)] - 2 g[\hat{x}^-(k), \bar{w}(k)] \right\}    (48e)

S^{(1)}_{yw}(k) = \left[ s^{(1)}_{yw,1}(k) \;\; \cdots \;\; s^{(1)}_{yw,n_w}(k) \right], \qquad S^{(2)}_{yw}(k) = \left[ s^{(2)}_{yw,1}(k) \;\; \cdots \;\; s^{(2)}_{yw,n_w}(k) \right]    (48f)

where n_x and n_w are the dimensions of the state and measurement noise, respectively; s_{x,p}^-(k) denotes the p-th column of S_x^-(k); s^{(1)}_{yx,p}(k) and s^{(2)}_{yx,p}(k) denote the p-th columns of S^{(1)}_{yx}(k) and S^{(2)}_{yx}(k), respectively; s_{w,p}(k) denotes the p-th column of S_w(k); and s^{(1)}_{yw,p}(k) and s^{(2)}_{yw,p}(k) denote the p-th columns of S^{(1)}_{yw}(k) and S^{(2)}_{yw}(k), respectively.
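The update and propagation steps of this filter repeatedly compress rectangular compound factors with a Householder triangularization H{·}, which produces a square triangular S satisfying S S^T = M M^T. A minimal sketch of that operation (realized here through a QR factorization, one common implementation; the function name is illustrative):

```python
import numpy as np

def householder_tri(M):
    # Return a square lower-triangular S such that S @ S.T == M @ M.T.
    # QR of M.T gives M.T = Q R, hence M @ M.T = R.T @ R.
    R = np.linalg.qr(M.T, mode='r')
    return R.T  # lower triangular, shape (M.shape[0], M.shape[0])

# Example: compress a 2x5 compound factor into a 2x2 square factor.
M = np.array([[1.0, 0.5, 0.0, 2.0, -1.0],
              [0.0, 1.5, 1.0, 0.3,  0.7]])
S = householder_tri(M)
```

This keeps the covariance in square-root form without ever forming the full covariance matrix, which is the numerical-stability advantage the filter exploits.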

2. Compute the output estimate:

ŷ^-(k) = \frac{(e^{iθ} h)^2 - n_x - n_w}{(e^{iθ} h)^2} g[\hat{x}^-(k), \bar{w}(k)] + \frac{1}{2(e^{iθ} h)^2} \sum_{p=1}^{n_x} \left\{ g[\hat{x}^-(k) + e^{iθ} h\, s_{x,p}^-(k), \bar{w}(k)] + g[\hat{x}^-(k) - e^{iθ} h\, s_{x,p}^-(k), \bar{w}(k)] \right\} + \frac{1}{2(e^{iθ} h)^2} \sum_{p=1}^{n_w} \left\{ g[\hat{x}^-(k), \bar{w}(k) + e^{iθ} h\, s_{w,p}(k)] + g[\hat{x}^-(k), \bar{w}(k) - e^{iθ} h\, s_{w,p}(k)] \right\}    (49)

3. Perform a Householder triangularization:

S_y(k) = \mathcal{H}\left\{ \left[ S^{(1)}_{yx}(k) \;\; S^{(1)}_{yw}(k) \;\; S^{(2)}_{yx}(k) \;\; S^{(2)}_{yw}(k) \right] \right\}    (50)

where H{ } denotes the Householder triangularization operation.

4. Calculate the Kalman gain:

P_{xy}(k) = S_x^-(k) \left[ S^{(1)}_{yx}(k) \right]^T    (51)

K(k) = P_{xy}(k) \left[ S_y(k) S_y^T(k) \right]^{-1}    (52)

5. Calculate the updates:

\hat{x}^+(k) = \hat{x}^-(k) + K(k) \left[ ỹ(k) - ŷ^-(k) \right]    (53)

S_x^+(k) = \mathcal{H}\left\{ \left[ S_x^-(k) - K(k) S^{(1)}_{yx}(k) \;\; K(k) S^{(1)}_{yw}(k) \;\; K(k) S^{(2)}_{yx}(k) \;\; K(k) S^{(2)}_{yw}(k) \right] \right\}    (54)

6. If the error-covariance matrix is desired, it can be computed via P^+(k) = S_x^+(k) S_x^{+T}(k), or

P^+(k) = \left[ S_x^-(k) - K(k) S^{(1)}_{yx}(k) \right] \left[ S_x^-(k) - K(k) S^{(1)}_{yx}(k) \right]^T + \left[ K(k) S^{(1)}_{yw}(k) \right] \left[ K(k) S^{(1)}_{yw}(k) \right]^T + \left[ K(k) S^{(2)}_{yx}(k) \right] \left[ K(k) S^{(2)}_{yx}(k) \right]^T + \left[ K(k) S^{(2)}_{yw}(k) \right] \left[ K(k) S^{(2)}_{yw}(k) \right]^T    (55)

VI.1.4. Propagation

1. Calculate the following quantities:

s^{(1)+}_{xx,p}(k) = \frac{1}{2 e^{iθ} h} \left\{ f[\hat{x}^+(k) + e^{iθ} h\, s_{x,p}^+(k), u(k), \bar{v}(k)] - f[\hat{x}^+(k) - e^{iθ} h\, s_{x,p}^+(k), u(k), \bar{v}(k)] \right\}    (56a)

s^{(2)+}_{xx,p}(k) = \frac{\sqrt{(e^{iθ} h)^2 - 1}}{2 (e^{iθ} h)^2} \left\{ f[\hat{x}^+(k) + e^{iθ} h\, s_{x,p}^+(k), u(k), \bar{v}(k)] + f[\hat{x}^+(k) - e^{iθ} h\, s_{x,p}^+(k), u(k), \bar{v}(k)] - 2 f[\hat{x}^+(k), u(k), \bar{v}(k)] \right\}    (56b)

S^{(1)+}_{xx}(k) = \left[ s^{(1)+}_{xx,1}(k) \;\; \cdots \;\; s^{(1)+}_{xx,n_x}(k) \right], \qquad S^{(2)+}_{xx}(k) = \left[ s^{(2)+}_{xx,1}(k) \;\; \cdots \;\; s^{(2)+}_{xx,n_x}(k) \right]    (56c)

s^{(1)+}_{xv,p}(k) = \frac{1}{2 e^{iθ} h} \left\{ f[\hat{x}^+(k), u(k), \bar{v}(k) + e^{iθ} h\, s_{v,p}(k)] - f[\hat{x}^+(k), u(k), \bar{v}(k) - e^{iθ} h\, s_{v,p}(k)] \right\}    (56d)

s^{(2)+}_{xv,p}(k) = \frac{\sqrt{(e^{iθ} h)^2 - 1}}{2 (e^{iθ} h)^2} \left\{ f[\hat{x}^+(k), u(k), \bar{v}(k) + e^{iθ} h\, s_{v,p}(k)] + f[\hat{x}^+(k), u(k), \bar{v}(k) - e^{iθ} h\, s_{v,p}(k)] - 2 f[\hat{x}^+(k), u(k), \bar{v}(k)] \right\}    (56e)

S^{(1)+}_{xv}(k) = \left[ s^{(1)+}_{xv,1}(k) \;\; \cdots \;\; s^{(1)+}_{xv,n_v}(k) \right], \qquad S^{(2)+}_{xv}(k) = \left[ s^{(2)+}_{xv,1}(k) \;\; \cdots \;\; s^{(2)+}_{xv,n_v}(k) \right]    (56f)

where n_v is the dimension of the process noise; s_{x,p}^+(k) denotes the p-th column of S_x^+(k); s^{(1)+}_{xx,p}(k) and s^{(2)+}_{xx,p}(k) denote the p-th columns of S^{(1)+}_{xx}(k) and S^{(2)+}_{xx}(k), respectively; s_{v,p}(k) denotes the p-th column of S_v(k); and s^{(1)+}_{xv,p}(k) and s^{(2)+}_{xv,p}(k) denote the p-th columns of S^{(1)+}_{xv}(k) and S^{(2)+}_{xv}(k), respectively.

2. Propagate the states:

\hat{x}^-(k+1) = \frac{(e^{iθ} h)^2 - n_x - n_v}{(e^{iθ} h)^2} f[\hat{x}^+(k), u(k), \bar{v}(k)] + \frac{1}{2(e^{iθ} h)^2} \sum_{p=1}^{n_x} \left\{ f[\hat{x}^+(k) + e^{iθ} h\, s_{x,p}^+(k), u(k), \bar{v}(k)] + f[\hat{x}^+(k) - e^{iθ} h\, s_{x,p}^+(k), u(k), \bar{v}(k)] \right\} + \frac{1}{2(e^{iθ} h)^2} \sum_{p=1}^{n_v} \left\{ f[\hat{x}^+(k), u(k), \bar{v}(k) + e^{iθ} h\, s_{v,p}(k)] + f[\hat{x}^+(k), u(k), \bar{v}(k) - e^{iθ} h\, s_{v,p}(k)] \right\}    (57)

3. The propagation of S_x follows:

S_x^-(k+1) = \mathcal{H}\left\{ \left[ S^{(1)+}_{xx}(k) \;\; S^{(1)+}_{xv}(k) \;\; S^{(2)+}_{xx}(k) \;\; S^{(2)+}_{xv}(k) \right] \right\}    (58)

Figure 3. Vertically falling body example: a radar at altitude Z and horizontal distance M measures the range r(t) to the body at altitude x_1(t).

VII. Performance Evaluation

This section presents performance figures comparing the standard DD and complex-step (CSDA) filters. The sample problem chosen for the comparison is Athans' problem, which involves estimation of the altitude, velocity and ballistic coefficient of a vertically falling body, as shown in Figure 3. The equations of motion of the system are given by

\dot{x}_1(t) = -x_2(t)    (59a)

\dot{x}_2(t) = -e^{-\alpha x_1(t)} x_2^2(t)\, x_3(t)    (59b)

\dot{x}_3(t) = 0    (59c)

where x_1(t) is the altitude, x_2(t) is the downward velocity, x_3(t) is the constant ballistic coefficient and α = 5 × 10^{-5} is a constant that relates air density to altitude. The range-observation model is given by

ỹ_k = \sqrt{ M^2 + (x_{1,k} - Z)^2 } + \nu_k    (60)

where ν_k is the observation noise, and M and Z are constants, given by M = 10^5 and Z = 10^5. The variance of ν_k is given by 10^4. All simulations are run with a sampling interval of 1 sec. Fifty Monte Carlo runs are performed and their average is used for performance comparison. A 4th-order Runge-Kutta

Figure 4. Sample state estimates from the DD and complex-step filters: (a) position, (b) velocity, (c) ballistic coefficient.

integration routine is used to propagate the states between each sampling interval. A few variables that are manipulated to assess the performance comparison include h, the angle θ and the measurement covariance. Nominally, h^2 = 3 for a Gaussian distribution and the measurement covariance is 10^4 m^2, except when its influence on performance is being evaluated. Figure 4 shows a sample of state estimation from both the DD and complex-step filters. The angle is set to 90°. The simulation is run with a measurement-model covariance of 10^4 m^2 instead of the nominal value. Figures 5-9 show the performance comparison between the DD and complex-step filters. Figure 5 compares the total absolute error between the two filters, but subsequent figures compare them in terms of percentage. For Figs. 5 and 6, the angle of 45° is used since it has the same magnitude for both the real and imaginary components. In these figures, h is increased until one of the filters becomes unstable; this is also a way to evaluate the Gaussianity of the system. Then, in Fig. 7, the angle is varied from 0° (which is essentially the standard DD filter) to 180° to look for the most suitable angle. Next, in Fig. 8, different h values are tested again. Finally, in Fig. 9, the measurement covariance is slowly increased from 10^4 m^2 to the point of an unstable filter condition. The absolute error is first used as a performance measure to gauge the range of performance. This is the total error over the filter run, averaged over the 50 Monte Carlo simulations. Later, only the percentage difference is used to more accurately present the relative performance change.
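The equations of motion and the Runge-Kutta propagation described above are straightforward to sketch. The initial conditions below are illustrative, not the paper's:

```python
import numpy as np

ALPHA = 5e-5  # constant relating air density to altitude

def falling_body(x):
    # x = [altitude x1, downward velocity x2, ballistic coefficient x3]
    x1, x2, x3 = x
    return np.array([-x2, -np.exp(-ALPHA * x1) * x2**2 * x3, 0.0])

def rk4_step(f, x, dt):
    # Classic 4th-order Runge-Kutta step for an autonomous ODE.
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([3.0e5, 2.0e4, 1.0e-3])  # illustrative initial state
x_next = rk4_step(falling_body, x, 1.0)
```

Because the ballistic coefficient has zero dynamics, it passes through the integrator unchanged; the altitude falls and the velocity slowly decays with drag.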
The DD results are first used as a performance base for each of the comparisons. Figure 4 gives a first glance into the behavior of the DD and complex-step filters. It clearly shows that the

complex-step filter is better in both position and velocity estimates. Partway into the simulation the system experiences its lowest observability and is thus most challenging for the filters; physically, this occurs when the object is at the same horizontal level as the radar. The complex-step filter consistently outperforms the DD filter during this low-observability period, which indicates that it is better able to capture the system nonlinearity. However, Fig. 4(c) does not show a clear advantage for estimation of the ballistic coefficient. The total absolute error in the ballistic-coefficient estimate in almost every case favors the DD filter, though merely by a small percentage margin, as shown in subsequent runs. The DD filter shows more abrupt movement in estimating the ballistic coefficient, which is smoother for the complex-step filter. Figure 5 evaluates the effect of h in the form of the total estimation error of each state. Figure 6 shows the same information but as a percentage change based on the DD results. From now onwards, the percentage change will be used as the relative performance index. A value of h^2 = 3 is optimal for Gaussian systems, but our nonlinear system is obviously not very Gaussian. In reference to Fig. 6, low h values work better for both filters: both exhibit their best position and velocity estimation at the low end of the tested values, whereas the ballistic-coefficient error for the complex-step filter bottoms out at a low h and increases with increasing h.

Figure 5. Absolute error, varying h: (a) position, (b) velocity, (c) ballistic coefficient. Simulations performed with angle = 45° and measurement variance R = 10^4.
For these reasons, a moderate value of h represents a good compromise for all three states, and this value is used for the succeeding runs. The DD filter becomes unstable at h beyond 9, but the complex-step filter is still able to give reasonable estimates. Moreover, its performance deteriorates much more slowly than the DD filter's, once

again showing the robustness of the complex-step filter. The optimal angle for the complex-step filter is now examined. Figure 7 shows the effect of different angles from 0° (which reduces to the plain DD filter) to 180°. It clearly reveals that 90° is the optimal angle, which means using a purely imaginary number, i. At this angle, the position estimate is improved by over 5%, while the velocity estimate is improved even more impressively. The less-than-0.7% deterioration of the ballistic-coefficient estimate is negligible compared to the vast performance gained in both the position and velocity estimates. Thus e^{i 90°} h, or just ih, should be used as the step size. It is important to note that this angle is optimal only for this particular estimation problem; other problems may require different values. The effect of h with the optimal angle is now shown. Figure 8 shows that the sensitivity of the newly improved filter to various h is significantly reduced. The complex-step filter has so far shown itself to be more robust than the DD filter in almost every measure. The last test is the influence of measurement noise, which is slowly increased until the filters become unstable. From Fig. 9, the DD filter becomes unstable with measurement variance R greater than 3 × 10^4; however, it takes R greater than 6 × 10^4 to make the complex-step filter unstable. Figures 9(a) and 9(b) show that the terminal position and velocity estimates of the complex-step filter are still within the limit of operation for the DD filter; however, the ballistic-coefficient estimate for the complex-step filter at R = 6 × 10^4 is evidently higher than the limit of the DD filter.

Figure 6. Percentage of error, varying h: (a) position, (b) velocity, (c) ballistic coefficient. Simulations performed with angle = 45° and measurement variance R = 10^4.
Thus it is speculated that the failure of the new filter beyond R = 6×10^4 may be due to unreasonably high accuracy in the ballistic coefficient estimate.

[Figure 7. Percentage of error, varying angle: (a) Position, (b) Velocity, (c) Ballistic Coefficient. Simulations performed with h = and measurement variance R = 10^4.]

VIII. Conclusion

In this paper, the first- and second-order generalized complex-step approximations were used in place of the first- and second-order central divided (finite) difference formulae in the derivation of the DD filter. The analysis shows that this is as simple as replacing the step size h in the central divided-difference formulae with one involving a unit-magnitude complex number, e^(iθ)h. This method generalizes the DD filter to a complex step size. Assessments were carried out to determine suitable values for h and for the angle θ, and the robustness of the new filter was demonstrated under higher measurement noise. All measures favor the new filter, demonstrating its robustness in the face of high nonlinearity. Square-root-level implementations of the DD and generalized filters may also provide numerical stability advantages in handling extremely small variables, or states with large magnitude differences among them.

References

1. Kalman, R. E. and Bucy, R. S., "New Results in Linear Filtering and Prediction Theory," Journal of Basic Engineering, Vol. 83, March 1961, pp. 95–108.

[Figure 8. Percentage of error, varying h: (a) Position, (b) Velocity, (c) Ballistic Coefficient. Simulations performed with angle = 90° and measurement variance R = 10^4.]

2. Wan, E. and van der Merwe, R., "The Unscented Kalman Filter," Kalman Filtering and Neural Networks, edited by S. Haykin, chap. 7, John Wiley & Sons, New York, NY, 2001.
3. Gordon, N. J., Salmond, D. J., and Smith, A. F. M., "Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation," IEE Proceedings, Part F - Communications, Radar and Signal Processing, Vol. 140, No. 2, April 1993, pp. 107–113.
4. Arulampalam, M. S., Maskell, S., Gordon, N., and Clapp, T., "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Transactions on Signal Processing, Vol. 50, No. 2, Feb. 2002, pp. 174–188.
5. Nørgaard, M., Poulsen, N. K., and Ravn, O., "New Developments in State Estimation for Nonlinear Systems," Automatica, Vol. 36, No. 11, Nov. 2000, pp. 1627–1638.
6. Lefebvre, T., Bruyninckx, H., and De Schutter, J., "Kalman Filters for Nonlinear Systems: A Comparison of Performance," International Journal of Control, Vol. 77, No. 7, May 2004, pp. 639–653.
7. Nørgaard, M., Poulsen, N. K., and Ravn, O., "Advances in Derivative-Free State Estimation for Nonlinear Systems," Technical Report IMM-REP-1998-15, Department of Mathematical Modelling, Technical University of Denmark, 1998, revised April 2000.
8. van der Merwe, R. and Wan, E. A., "Efficient Derivative-Free Kalman Filters for Online Learning," Proceedings of the European Symposium on Artificial Neural Networks (ESANN), Bruges, Belgium, 2001.
9. Lyness, J. N., "Numerical Algorithms Based on the Theory of Complex Variable," Proceedings - ACM National Meeting, 1967.
10. Greenberg, M. D., Advanced Engineering Mathematics, 2nd ed., Prentice Hall, Upper Saddle River, NJ, 1998.
11. Lyness, J. N. and Moler, C. B., "Numerical Differentiation of Analytic Functions," SIAM Journal on Numerical Analysis, Vol. 4, No. 2, June 1967, pp. 202–210.
12. Martins, J. R. R. A., Sturdza, P., and Alonso, J. J., "The Connection Between the Complex-Step Derivative Approximation and Algorithmic Differentiation," AIAA Paper 2001-0921, Jan. 2001.

[Figure 9. Percentage of error, varying measurement variance: (a) Position, (b) Velocity, (c) Ballistic Coefficient. Simulations performed with h = and angle = 90°.]

13. Kim, J., Bates, D. G., and Postlethwaite, I., "Nonlinear Robust Performance Analysis Using Complex-Step Gradient Approximations," Automatica, Vol. 42, 2006, pp. 177–182.
14. Cerviño, L. I. and Bewley, T. R., "On the Extension of the Complex-Step Derivative Technique to Pseudospectral Algorithms," Journal of Computational Physics, Vol. 187, No. 2, 2003, pp. 544–549.
15. Squire, W. and Trapp, G., "Using Complex Variables to Estimate Derivatives of Real Functions," SIAM Review, Vol. 40, No. 1, March 1998, pp. 110–112.
16. Martins, J. R. R. A., Sturdza, P., and Alonso, J. J., "The Complex-Step Derivative Approximation," ACM Transactions on Mathematical Software, Vol. 29, No. 3, Sept. 2003, pp. 245–262.
17. Martins, J. R. R. A., Kroo, I. M., and Alonso, J. J., "An Automated Method for Sensitivity Analysis Using Complex Variables," AIAA Paper 2000-0689, Jan. 2000.
18. Lai, K.-L. and Crassidis, J. L., "Generalizations of the Complex-Step Derivative Approximation," AIAA Guidance, Navigation, and Control Conference, Keystone, CO, Aug. 2006.
19. Fröberg, C.-E., Introduction to Numerical Analysis, 2nd ed., Addison-Wesley, Reading, MA, 1969.
20. Athans, M., Wishner, R. P., and Bertolini, A., "Suboptimal State Estimation for Continuous-Time Nonlinear Systems from Discrete Noisy Measurements," IEEE Transactions on Automatic Control, Vol. 13, No. 5, Oct. 1968, pp. 504–514.


More information

FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS

FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS Gustaf Hendeby Fredrik Gustafsson Division of Automatic Control Department of Electrical Engineering, Linköpings universitet, SE-58 83 Linköping,

More information

Wind-field Reconstruction Using Flight Data

Wind-field Reconstruction Using Flight Data 28 American Control Conference Westin Seattle Hotel, Seattle, Washington, USA June 11-13, 28 WeC18.4 Wind-field Reconstruction Using Flight Data Harish J. Palanthandalam-Madapusi, Anouck Girard, and Dennis

More information

The Shifted Rayleigh Filter for 3D Bearings-only Measurements with Clutter

The Shifted Rayleigh Filter for 3D Bearings-only Measurements with Clutter The Shifted Rayleigh Filter for 3D Bearings-only Measurements with Clutter Attila Can Özelçi EEE Department Imperial College London, SW7 BT attila.ozelci@imperial.ac.uk Richard Vinter EEE Department Imperial

More information

Target tracking and classification for missile using interacting multiple model (IMM)

Target tracking and classification for missile using interacting multiple model (IMM) Target tracking and classification for missile using interacting multiple model (IMM Kyungwoo Yoo and Joohwan Chun KAIST School of Electrical Engineering Yuseong-gu, Daejeon, Republic of Korea Email: babooovv@kaist.ac.kr

More information

Density Approximation Based on Dirac Mixtures with Regard to Nonlinear Estimation and Filtering

Density Approximation Based on Dirac Mixtures with Regard to Nonlinear Estimation and Filtering Density Approximation Based on Dirac Mixtures with Regard to Nonlinear Estimation and Filtering Oliver C. Schrempf, Dietrich Brunn, Uwe D. Hanebeck Intelligent Sensor-Actuator-Systems Laboratory Institute

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation

More information

A KALMAN FILTERING TUTORIAL FOR UNDERGRADUATE STUDENTS

A KALMAN FILTERING TUTORIAL FOR UNDERGRADUATE STUDENTS A KALMAN FILTERING TUTORIAL FOR UNDERGRADUATE STUDENTS Matthew B. Rhudy 1, Roger A. Salguero 1 and Keaton Holappa 2 1 Division of Engineering, Pennsylvania State University, Reading, PA, 19610, USA 2 Bosch

More information

Learning Static Parameters in Stochastic Processes

Learning Static Parameters in Stochastic Processes Learning Static Parameters in Stochastic Processes Bharath Ramsundar December 14, 2012 1 Introduction Consider a Markovian stochastic process X T evolving (perhaps nonlinearly) over time variable T. We

More information

A Novel Maneuvering Target Tracking Algorithm for Radar/Infrared Sensors

A Novel Maneuvering Target Tracking Algorithm for Radar/Infrared Sensors Chinese Journal of Electronics Vol.19 No.4 Oct. 21 A Novel Maneuvering Target Tracking Algorithm for Radar/Infrared Sensors YIN Jihao 1 CUIBingzhe 2 and WANG Yifei 1 (1.School of Astronautics Beihang University

More information

Estimation for Nonlinear Dynamical Systems over Packet-Dropping Networks

Estimation for Nonlinear Dynamical Systems over Packet-Dropping Networks Estimation for Nonlinear Dynamical Systems over Packet-Dropping Networks Zhipu Jin Chih-Kai Ko and Richard M Murray Abstract Two approaches, the extended Kalman filter (EKF) and moving horizon estimation

More information

Benjamin L. Pence 1, Hosam K. Fathy 2, and Jeffrey L. Stein 3

Benjamin L. Pence 1, Hosam K. Fathy 2, and Jeffrey L. Stein 3 2010 American Control Conference Marriott Waterfront, Baltimore, MD, USA June 30-July 02, 2010 WeC17.1 Benjamin L. Pence 1, Hosam K. Fathy 2, and Jeffrey L. Stein 3 (1) Graduate Student, (2) Assistant

More information

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) =

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) = Until now we have always worked with likelihoods and prior distributions that were conjugate to each other, allowing the computation of the posterior distribution to be done in closed form. Unfortunately,

More information

Multiplicative vs. Additive Filtering for Spacecraft Attitude Determination

Multiplicative vs. Additive Filtering for Spacecraft Attitude Determination Multiplicative vs. Additive Filtering for Spacecraft Attitude Determination F. Landis Markley, NASA s Goddard Space Flight Center, Greenbelt, MD, USA Abstract The absence of a globally nonsingular three-parameter

More information

Kalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein

Kalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein Kalman filtering and friends: Inference in time series models Herke van Hoof slides mostly by Michael Rubinstein Problem overview Goal Estimate most probable state at time k using measurement up to time

More information

Recursive Generalized Eigendecomposition for Independent Component Analysis

Recursive Generalized Eigendecomposition for Independent Component Analysis Recursive Generalized Eigendecomposition for Independent Component Analysis Umut Ozertem 1, Deniz Erdogmus 1,, ian Lan 1 CSEE Department, OGI, Oregon Health & Science University, Portland, OR, USA. {ozertemu,deniz}@csee.ogi.edu

More information

Comparison of Kalman Filter Estimation Approaches for State Space Models with Nonlinear Measurements

Comparison of Kalman Filter Estimation Approaches for State Space Models with Nonlinear Measurements Comparison of Kalman Filter Estimation Approaches for State Space Models with Nonlinear Measurements Fredri Orderud Sem Sælands vei 7-9, NO-7491 Trondheim Abstract The Etended Kalman Filter (EKF) has long

More information

A New Nonlinear State Estimator Using the Fusion of Multiple Extended Kalman Filters

A New Nonlinear State Estimator Using the Fusion of Multiple Extended Kalman Filters 18th International Conference on Information Fusion Washington, DC - July 6-9, 2015 A New Nonlinear State Estimator Using the Fusion of Multiple Extended Kalman Filters Zhansheng Duan, Xiaoyun Li Center

More information

THE well known Kalman filter [1], [2] is an optimal

THE well known Kalman filter [1], [2] is an optimal 1 Recursive Update Filtering for Nonlinear Estimation Renato Zanetti Abstract Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation

More information

Perception: objects in the environment

Perception: objects in the environment Zsolt Vizi, Ph.D. 2018 Self-driving cars Sensor fusion: one categorization Type 1: low-level/raw data fusion combining several sources of raw data to produce new data that is expected to be more informative

More information

UNIFORMLY MOST POWERFUL CYCLIC PERMUTATION INVARIANT DETECTION FOR DISCRETE-TIME SIGNALS

UNIFORMLY MOST POWERFUL CYCLIC PERMUTATION INVARIANT DETECTION FOR DISCRETE-TIME SIGNALS UNIFORMLY MOST POWERFUL CYCLIC PERMUTATION INVARIANT DETECTION FOR DISCRETE-TIME SIGNALS F. C. Nicolls and G. de Jager Department of Electrical Engineering, University of Cape Town Rondebosch 77, South

More information

Maximum Likelihood Ensemble Filter Applied to Multisensor Systems

Maximum Likelihood Ensemble Filter Applied to Multisensor Systems Maximum Likelihood Ensemble Filter Applied to Multisensor Systems Arif R. Albayrak a, Milija Zupanski b and Dusanka Zupanski c abc Colorado State University (CIRA), 137 Campus Delivery Fort Collins, CO

More information

4 Derivations of the Discrete-Time Kalman Filter

4 Derivations of the Discrete-Time Kalman Filter Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof N Shimkin 4 Derivations of the Discrete-Time

More information