NONLINEAR ESTIMATION WITH STATE-DEPENDENT GAUSSIAN OBSERVATION NOISE. D. Spinello & D. J. Stilwell.


NONLINEAR ESTIMATION WITH STATE-DEPENDENT GAUSSIAN OBSERVATION NOISE

D. Spinello & D. J. Stilwell

Virginia Center for Autonomous Systems (VaCAS)
Virginia Polytechnic Institute & State University
Blacksburg, VA

March 8, 2010

Technical Report No. VaCAS. Copyright © 2008

Summary

We consider the problem of estimating the state of a system when the measurement noise is a function of the system's state. We propose generalizations of the iterated extended Kalman filter and of the extended Kalman filter that can be utilized when the state estimate distribution is approximately Gaussian. The state estimate is computed by an iterative root-searching method that maximizes a likelihood function. For sensor network applications, we also address distributed implementations involving multiple sensors.

Contents

Nomenclature
1 Introduction
2 Problem description
  2.1 State transition
  2.2 Observation model
  2.3 Example system
    2.3.1 State transition
    2.3.2 Bearing-only sensors
3 Iterated extended Kalman filter with state-dependent observation noise
  3.1 Bayesian paradigm
  3.2 State prediction
  3.3 State update
4 Summary of the iterated Kalman filter for state-dependent observation noise
  4.1 General case
  4.2 Sensors with scalar output
5 Estimation in a decentralized communication network
  5.1 Sensor network
  5.2 Bayesian implementation
  5.3 State prediction
  5.4 State update
    5.4.1 Communication of raw sensor data
    5.4.2 Communication of likelihoods
6 Conclusions
References

Nomenclature

Quantities labeled with a subscript i refer to the i-th sensor. In Section 3, whenever the subscript is dropped, the corresponding functions have the same meaning as below without being referred to a specific sensor.

k — integer labeling a time instant
IR — set of real numbers
x — process state vector
f(·) — process state transition function
Q — process noise covariance matrix
n — process state dimension
N — number of sensors
z_i — observation vector
p — observation space dimension
h_i(·) — measurement function
h_i(·) (scalar) — measurement function for p = 1
Σ_i(·) — measurement noise covariance matrix
σ_i(·) — measurement noise variance for p = 1
p(·, ·) — joint probability density function
p(· | ·) — conditional probability density function
N(·, ·) — normal multivariate probability density function
E — expectation operator
l_i(·) — negative log-likelihood function
x̂_i(k|l) — process state estimate at time k given observations up to time l ≤ k
x̃_i(k|l) := x(k) − x̂_i(k|l) — estimation error at time k given observations up to time l ≤ k
P_i(k|l) — estimation error covariance at time k given observations up to time l ≤ k
I_i(k) ⊆ {1, 2, ..., N} — set of integers labeling the sensors communicating with sensor i at time k
tr — trace of a square matrix
det — determinant of a square matrix
∇_x(·) — gradient with respect to x
(·)^T — transposition
sym+ — space of symmetric positive-definite matrices

1 Introduction

Advances in embedded computing and wireless communication are facilitating the use of wireless sensor networks for a variety of new applications, including target localization and tracking with bearing-only sensors 12, 25, 39 and with range-only sensors 44; optimal sensor placement and motion control strategies for target tracking 7, 8, 28, 41; formation and coverage control 3, 9, 13, 14; and environmental tracking and monitoring 33, 37, 38. In this report, we focus on the problem of distributed estimation of the state x(k) ∈ IR^n of a dynamical process through a set of noisy measurements z_i(k) ∈ IR^p, i = 1, ..., N, taken by N sensors at discrete time instants labeled by the integer k. We assume p ≤ n. The system state evolution and the measurements are modeled as stochastic processes with additive Gaussian noise terms. The measurement noise covariance is a given function of the process state x. In works dedicated to sensor motion control strategies for target localization and tracking, the estimation problem is commonly solved by assuming the observation noise to be independent of the process state, see for example 7, 12, 28, 41. However, as pointed out in 25, this assumption is not realistic for some applications, such as those in which bearings-only sensors are employed. We are interested in developing estimators that generalize the extended Kalman filter 1 to the case in which the observation noise is state dependent. By using a maximum likelihood approach coupled with the nonlinear root-searching Newton-Raphson algorithm, we derive a recursive algorithm to estimate the state of a stochastic process from measurements with known state-dependent observation noise. As in the iterated extended Kalman filter, only the first two moments of the posterior probability distribution are propagated.
Our work is motivated by the study of a class of cooperative motion-control problems in which a group of mobile sensors attempts to configure itself spatially so that the noise associated with the measurements is minimized, and therefore the feature estimate is the best achievable given the sensor model 7, 28. In particular, for a uniform linear array of bearing-only sensors, the measurement noise depends on the relative distance to the target and on the orientation of the array with respect to the incident source signal 16. For the case in which the observation noise is state-independent, different filters have been proposed to address the nonlinear estimation problem in the Bayesian framework: the extended Kalman filter 1, 27, 29; the extended information filter 30; the iterated extended Kalman filter 1, 29; the linear regression Kalman filter 22, which comprises the central difference filter 34, the first-order divided difference filter 31, and the unscented Kalman filter 20; the sigma point Kalman filter 40; the iterated sigma point Kalman filter 36; and the iterated unscented Kalman filter 43. For a comparison between these different nonlinear filtering techniques see 23. A class of nonlinear filters based on Gaussian distributions with multiple moments propagation is proposed in 19. In 2 it is shown that the iterated extended Kalman filter update is an application of the Gauss-Newton method 5 for approximating a maximum likelihood estimate. The Gauss-Newton method is an approximation of the Newton-Raphson method 5 that applies to

minimum least-squares problems by discarding second derivatives in the iterates. The update of the iterated extended Kalman filter reduces to that of the extended Kalman filter for a single iteration, and both reduce to the ordinary Kalman filter when the observation equation is affine in the state to be estimated. Therefore, the extended Kalman filter is inherently suboptimal in that it propagates only the first two moments of the estimator probability density function, and in that the root-searching method used to find the maximum likelihood estimate is not iterated to convergence. For bearings-only tracking, an estimation criterion based on gradient-search iterative methods has been proposed in 18. The convergence of iterative methods for bearing-only tracking has been addressed in 21.

The rest of the report is organized as follows. In Section 2 we briefly describe the models for the process evolution and for the observations. In Section 3 we derive the iterated extended Kalman filter and the extended Kalman filter updates for a single sensor with state-dependent observation noise, and in Section 4 we specialize the equations for the case in which the sensor's output is a scalar measurement. In Section 5 we introduce a cooperative scenario in which multiple sensors share information, and we show how the individual estimates are accordingly modified. Section 6 is left for conclusions.

2 Problem description

2.1 State transition

The evolution of the state is described by the nonlinear discrete-time stochastic state transition equation

x(k+1) = f(x(k)) + ν(k)    (1)

where x ∈ IR^n, f : IR^n → IR^n is the (possibly nonlinear) state transition function, and ν(k) ∈ IR^n is an additive process noise.
We assume that ν is a sequence of independent random variables with normal probability distribution N(0, Q), therefore satisfying the relations

E[ν(k)] = 0 for all k    (2a)
E[ν(k) ν^T(l)] = Q(k) δ_kl for all k, l    (2b)

where E is the expectation operator, Q is the target's noise covariance matrix, δ_kl is the Kronecker delta, and (·)^T denotes transposition.

2.2 Observation model

We assume that there are N sensors that each generate measurements of the system (1) at discrete time instants. The observation made by the i-th sensor at time k is

z_i(k) = h_i(x(k)) + v_i(x(k))    (3)

where z_i(k) ∈ IR^p, with p ≤ n, h_i : IR^n → IR^p is the observation function of the model, and the noise v_i : IR^n → IR^p is a sequence of independent random variables with normal probability

distribution N(0, Σ_i(x)), where Σ_i(x) ∈ sym+ ⊂ IR^{p×p} is the observation noise covariance, and sym+ denotes the space of symmetric positive-definite matrices. We emphasize that Σ_i depends on the state x. It follows that the measurements z_i(k) can be treated as realizations of multivariate normal distributions described by the conditional moments

E[z_i(k) | x(k)] = h_i(x(k))    (4a)
E[(z_i(k) − h_i(x(k)))(z_i(k) − h_i(x(k)))^T | x(k)] = Σ_i(x(k))    (4b)

We assume that noise terms associated with the same sensor are uncorrelated in time, and that noise terms associated with different sensors are mutually independent, that is

E[v_i(x(k)) v_j^T(x(l))] = Σ_i(x(k)) δ_kl δ_ij for all k, l, i, j    (5)

Additionally, we assume the following cross-correlation independence condition

E[ν(k) v_i^T(x(l))] = 0 for all i, k, l    (6)

2.3 Example system

For the purpose of illustration, we briefly describe an example system for which the measurement noise depends on the state of the system.

2.3.1 State transition

Consider a moving target in plane motion, and let

X : IR^n → IR^2, x ↦ X(x) = κ    (7)

be a linear function that maps the target state x to its position κ in the plane, see Figure 1. For nearly constant velocity targets, small fluctuations in the velocity can be modeled as noise, see for example 1. In this case, the state vector x includes the position and the velocity, and the trajectory has the following state-space representation

x(k) = f(k) x(k−1) + G(k) ν(k)    (8)

where the process noise represents the uncertainty of the acceleration. The matrices f and G are derived from the kinematics of a point mass with constant acceleration:

f = [1 0 γ 0; 0 1 0 γ; 0 0 1 0; 0 0 0 1],  G = [γ²/2 0; 0 γ²/2; γ 0; 0 γ]    (9)

where γ is the observation time interval. A refined model with constant acceleration included in the state vector has been proposed in 10.
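For illustration, the transition model (8)-(9) and a measurement with state-dependent noise as in (3) can be simulated directly. The following Python sketch uses the constant-velocity matrices above; the scalar measurement function and its variance are illustrative placeholders, not the bearing model introduced in Section 2.3.2.

```python
import numpy as np

# Simulation sketch: constant-velocity transition (8)-(9) with process
# noise, plus a scalar observation whose variance depends on the state as
# in (3). The measurement model here is a placeholder for illustration.

rng = np.random.default_rng(0)
gamma = 0.1   # observation time interval

F = np.array([[1, 0, gamma, 0],
              [0, 1, 0, gamma],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
G = np.array([[gamma**2 / 2, 0],
              [0, gamma**2 / 2],
              [gamma, 0],
              [0, gamma]])

def step(x, accel_std=0.5):
    """x(k) = f x(k-1) + G nu(k), with nu ~ N(0, accel_std^2 I_2)."""
    nu = rng.normal(0.0, accel_std, size=2)
    return F @ x + G @ nu

def measure(x):
    """z = h(x) + v(x): observe the first position coordinate, with a
    variance that grows with distance from the origin (placeholder)."""
    kappa = x[:2]                       # position, eq. (7) with linear X
    var = 0.1 + 0.05 * kappa @ kappa    # state-dependent noise variance
    return x[0] + rng.normal(0.0, np.sqrt(var))

x = np.array([0.0, 0.0, 1.0, 1.0])      # start at the origin, unit velocity
for _ in range(10):
    x = step(x)
z = measure(x)
```

After ten steps the simulated target has drifted roughly one unit along each axis, and each measurement of it carries a noise variance that changes with the target's position, which is the situation the filters below are designed for.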

Figure 1: Sketch of the kinematics of a vehicle equipped with a bearing-only sensor.

2.3.2 Bearing-only sensors

We suppose that the state of a moving target is observed by several vehicles (mobile sensors), each equipped with a bearing-only sensor. Let q_i ∈ IR^2 be the position of the center of mass of vehicle i, and ψ_i ∈ IR the angle formed by the axis of the vehicle with respect to some reference direction, see Figure 1. The vector representing the line of sight between sensor i and the target is κ − q_i. The output of sensor i at time k is the scalar z_i(k) representing the bearing angle with respect to the broadside of the sensor, see Figure 1. In this case, the measurement function h_i, which is the scalar equivalent of h_i in (3), is given by

h_i(q_i, ψ_i, x) = π/2 + ψ_i − θ_i(q_i, x)    (10a)
θ_i(q_i, x) = arctan( ((κ − q_i)^T e_2) / ((κ − q_i)^T e_1) )    (10b)

where e_1 = (1 0)^T and e_2 = (0 1)^T are the unit basis vectors of the rectangular Cartesian coordinate system {ξ_1, ξ_2}, see Figure 1. The model for the noise variance is given by, see 16, 25, 24,

σ_i(q_i, ψ_i, x) = α d(q_i, ψ_i, x) / cos²(h_i(q_i, ψ_i, x))    (11)

where σ_i > 0 is the scalar equivalent of Σ_i in (3), the scalar function d is the inverse of the signal-to-noise ratio, and α is a constant that depends on parameters of the source signal and the sensor. Expressions for α when the sensor is a uniform linear array are given in

16. Note that the variance of the measurement noise in (11) approaches infinity as the bearing angle approaches π/2, and it is minimal when the bearing angle approaches 0, see Figure 1. For vehicles modeled as point masses, a model for the noise covariance adopted in 7, 8, 41 is

σ_i(q_i, ψ_i, x) = d(q_i, ψ_i, x) = a_2 (‖q_i − κ‖ − a_1)² + a_0

in which a_0, a_1, and a_2 are constant parameters. This model corresponds to the assumption of a sweet spot in sensing, located at a distance a_1 from the target, where the uncertainty in the measurement is minimal.

3 Iterated extended Kalman filter with state-dependent observation noise

In this Section we focus on the derivation of the iterated extended Kalman filter and extended Kalman filter prediction and update equations for a single sensor.

3.1 Bayesian paradigm

The approach that is commonly adopted for state estimation of a discrete-time dynamical system can be described as a two-stage recursive process of prediction and update. This is true in particular for the Kalman filter and its extensions to nonlinear systems, the extended Kalman filter and the iterated extended Kalman filter, see for example 29, 30. Let z(k) ∈ IR^p be the observation at time k. Since we restrict our attention to a single sensor, we drop the label i in the observation equation (3). The probabilistic information contained in the sensor measurement z about the unknown state x is described by the conditional probability distribution p(z(k) | x(k)), known as the likelihood function. From Bayes' rule, see 1, Appendix B.11, we obtain the following representation for the posterior conditional distribution of x given z

p(x(k) | z(k)) = p(z(k) | x(k)) p(x(k)) / p(z(k))    (12)

where p(·) is the marginal distribution. In order to reduce the uncertainty, one can consider several measurements taken over time to construct the posterior.
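For concreteness, the bearing measurement (10) and the variance model (11) of Section 2.3.2 can be sketched numerically as follows; the constant α and the inverse signal-to-noise function d are illustrative placeholders here, not the uniform-linear-array expressions of the cited references.

```python
import numpy as np

# Bearing measurement (10) and noise variance model (11) for one sensor.
# alpha and d are placeholder values for illustration only.

def bearing(q, psi, kappa):
    """h_i = pi/2 + psi_i - theta_i, where theta_i is the angle of the
    line of sight kappa - q_i in the fixed frame, eq. (10)."""
    r = kappa - q
    theta = np.arctan2(r[1], r[0])
    return np.pi / 2 + psi - theta

def variance(q, psi, kappa, alpha=0.1, d=lambda q, psi, kappa: 1.0):
    """sigma_i = alpha * d / cos^2(h_i), eq. (11): the variance blows up
    as the bearing approaches pi/2 and is smallest at broadside."""
    h = bearing(q, psi, kappa)
    return alpha * d(q, psi, kappa) / np.cos(h) ** 2

q = np.zeros(2)
kappa = np.array([0.0, 1.0])    # target directly along e2
# With psi = 0 the bearing is pi/2 - pi/2 = 0 (broadside): minimal variance.
print(bearing(q, 0.0, kappa))   # -> 0.0
print(variance(q, 0.0, kappa))  # -> 0.1
```

Rotating the sensor axis away from broadside (increasing ψ toward π/2) makes the variance grow without bound, reproducing the qualitative behavior discussed above.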
We define the collection of observations up to time k

Z(k) := {z(k), z(k−1), ..., z(0)}    (13)

In this case, the likelihood function is p(Z(k) | x(k)), and the posterior is given by

p(x(k) | Z(k)) = p(Z(k) | x(k)) p(x(k)) / p(Z(k))    (14)

The posterior can also be computed recursively after each observation z(k). Let p(·, ·) be the joint probability distribution. By applying Bayes' rule we obtain

p(Z(k), x(k)) = p(z(k), Z(k−1), x(k)) = p(z(k) | Z(k−1), x(k)) p(Z(k−1), x(k)) = p(z(k) | x(k)) p(x(k) | Z(k−1)) p(Z(k−1))    (15)

where we used the assumption, intrinsic in the sensor model described in Section 2.2, that the probability distribution of z(k) is independent of Z(k−1) whenever x(k) is given. From Bayes' rule we also obtain

p(Z(k), x(k)) = p(x(k) | Z(k)) p(z(k) | Z(k−1)) p(Z(k−1))    (16)

By combining (15) and (16) we obtain the following recursive form of the posterior distribution

p(x(k) | Z(k)) = β(k) p(x(k) | Z(k−1)) p(z(k) | x(k))    (17)

where the proportionality factor is β⁻¹(k) = p(z(k) | Z(k−1)). The probability density function p(x(k) | Z(k−1)) is associated with a prior estimate of the state x(k) based on observations up to time k−1. The density p(z(k) | x(k)) can be interpreted as a correction to the prior based on the current observation. The recursive implementation in (17) is often used in estimation theory since it requires limited data storage 27. The updating scheme in (17) can be interpreted as using new information to correct a previous prediction.

3.2 State prediction

Following 1, we define the state estimate at time k given the observations up to time l ≤ k, and the related error covariance matrix, as

x̂(k|l) := E[x(k) | Z(l)]    (18a)
P(k|l) := E[(x(k) − x̂(k|l))(x(k) − x̂(k|l))^T | Z(l)]    (18b)

It is assumed that there exists a state estimate x̂(k−1|k−1) at time k−1 with associated error covariance P(k−1|k−1). The objective is to find a prediction x̂(k|k−1) of the state at time k based on the information available up to time k−1. First, (1) is expanded in a Taylor series about x̂(k−1|k−1). By truncating terms above the first order and applying definitions (18a) and (18b) with expectations conditioned on Z(k−1), we obtain the state and error covariance predictions

x̂(k|k−1) = E[x(k) | Z(k−1)] = f(x̂(k−1|k−1))    (19a)
P(k|k−1) = E[(x(k) − x̂(k|k−1))(x(k) − x̂(k|k−1))^T | Z(k−1)] = ∇_x f(x̂(k−1|k−1)) P(k−1|k−1) ∇_x^T f(x̂(k−1|k−1)) + Q(k)    (19b)

where ∇_x is the gradient with respect to x (see 1, for example). The predictions (19) are derived with the aid of (2a), (2b), and (6), and the property of the expectation operator (see 35)

E[A a b^T B] = A E[a b^T] B    (20)

where a and b are random vectors, and A and B are matrices. Higher-order predictions can be obtained by retaining more terms in the Taylor series expansion of the state model (1). We note that all the information about the target up to time k−1 is included in the estimate x̂(k−1|k−1), and therefore the computation of the predictions requires only the knowledge of the estimates at time k−1, without storage of past observation data.

3.3 State update

After the derivation of the state and error covariance predictions, we return to the likelihood equation (17) to solve the update problem. We generalize the approach proposed in 2 to find an approximation of a maximum likelihood estimate. This approach allows us to derive the filter update equation for the case in which the covariance associated with the measurement is a function of the state to be estimated, see (5). The state prediction x̂(k|k−1) and the measurement z(k) are treated as realizations of independent random vectors with multivariate normal distributions, see 2,

x̂(k|k−1) ~ N(x(k), P(k|k−1)),  z(k) ~ N(h(x(k)), Σ(x(k)))    (21)

Although the distribution of x̂(k|k−1) is not necessarily Gaussian in practice, it is commonly assumed that a Gaussian approximation is appropriate in many practical applications 12, 41, 25, 28, 32, 39, 44. Therefore the probability density functions in (17) are given by

p(x(k) | Z(k−1)) = (2π)^{−n/2} (det P(k|k−1))^{−1/2} exp( −½ (x(k) − x̂(k|k−1))^T P⁻¹(k|k−1) (x(k) − x̂(k|k−1)) )    (22a)
p(z(k) | x(k)) = (2π)^{−p/2} (det Σ(x(k)))^{−1/2} exp( −½ (z(k) − h(x(k)))^T Σ⁻¹(x(k)) (z(k) − h(x(k))) )    (22b)

where det is the determinant. The maximum likelihood estimator x̂(k|k) associated with the posterior p(x(k) | Z(k)) in (17) is the vector x(k) that maximizes the likelihood p(x(k) | Z(k)) or, equivalently, the vector x(k) that minimizes its negative logarithm, see for example 27. Substituting from (22) into (17)

and taking the negative logarithm, we obtain

l(x(k)) = ½ ( ln det Σ(x(k)) + (z(k) − h(x(k)))^T Σ⁻¹(x(k)) (z(k) − h(x(k))) + (x(k) − x̂(k|k−1))^T P⁻¹(k|k−1) (x(k) − x̂(k|k−1)) ) + c    (23)

where c is a constant that does not depend on x(k). The state estimate is given by the solution of the problem

x̂(k|k) = argmin_x l(x(k))    (24)

which is a nonlinear unconstrained optimization problem. Under the hypothesis that l is twice continuously differentiable, the solution of the optimization problem is found through the Newton-Raphson iterative sequence 5

x̂^(ι+1)(k|k) = x̂^(ι)(k|k) − ( ∇_x ∇_x l(x̂^(ι)(k|k)) )⁻¹ ∇_x l(x̂^(ι)(k|k))    (25)

with initial guess x̂^(0)(k|k) = x̂(k|k−1), where (ι) refers to the iteration step. For a single-step iteration and state-independent noise covariance, the algorithm (25) defines the extended Kalman filter, while for multiple iterations it defines the iterated extended Kalman filter, see 2. The extended Kalman filter is suboptimal since single-step convergence of the Newton-Raphson iterates is guaranteed only if the function l(x) is quadratic in its argument, see 5, which is true when the observation equation is affine. In 2 the Gauss-Newton algorithm has been used in place of the Newton-Raphson algorithm. The Gauss-Newton algorithm applies to minimum least-squares problems, and approximates the Hessian by discarding second derivatives of the residuals. This is consistent with the classical extended Kalman filter derivation, in which the observation function is approximated with a Taylor series truncated at the first order. However, for the case we are studying, the function l cannot be expressed as a quadratic form because of the first term on the right-hand side of (23). Therefore we derive the update through (25). Computation of the derivatives of the function l is facilitated by the following relationships, see 4 for example. Let A and B be matrices, with A nonsingular and dependent on a real parameter τ. Then

∂(det A)/∂τ = det A tr( A⁻¹ ∂A/∂τ ),  ∂A⁻¹/∂τ = −A⁻¹ (∂A/∂τ) A⁻¹    (26a)
tr(AB) = tr(BA)    (26b)

where tr is the trace operator.
We also introduce the notation

ζ(x(k)) := z(k) − h(x(k))    (27)

By regarding the quadratic terms in (23) as the trace of a 1×1 matrix, and by using (26b), we rewrite the likelihood function l(x) in the more convenient form

l(x(k)) = ½ ( ln det Σ(x(k)) + tr( Σ⁻¹(x(k)) ζ(x(k)) ζ^T(x(k)) ) + tr( P⁻¹(k|k−1) (x(k) − x̂(k|k−1))(x(k) − x̂(k|k−1))^T ) ) + c    (28)
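The minimization (24) through the Newton-Raphson iteration (25) can be checked numerically before any analytic derivatives are introduced. The following sketch minimizes the negative log-likelihood (23) for a scalar measurement using finite-difference gradients and Hessians; the measurement and variance models are illustrative placeholders.

```python
import numpy as np

# Newton-Raphson iteration (25) on the negative log-likelihood (23), with
# finite-difference derivatives for clarity. Scalar measurement; the models
# h and sigma below are illustrative, not the report's bearing model.

def neg_log_lik(x, z, h, sigma, x_pred, P_inv):
    zeta = z - h(x)
    dx = x - x_pred
    return 0.5 * (np.log(sigma(x)) + zeta**2 / sigma(x) + dx @ P_inv @ dx)

def newton_update(x0, l, tol=1e-6, max_iter=50, eps=1e-5):
    """Iterate x <- x - H^{-1} g (eq. 25) until the step is small."""
    x = x0.copy()
    n = len(x)
    for _ in range(max_iter):
        g = np.zeros(n)
        H = np.zeros((n, n))
        for i in range(n):
            ei = np.zeros(n); ei[i] = eps
            g[i] = (l(x + ei) - l(x - ei)) / (2 * eps)
            for j in range(n):
                ej = np.zeros(n); ej[j] = eps
                H[i, j] = (l(x + ei + ej) - l(x + ei - ej)
                           - l(x - ei + ej) + l(x - ei - ej)) / (4 * eps**2)
        step = np.linalg.solve(H, g)
        x -= step
        if np.linalg.norm(step) < tol:
            break
    return x

x_pred = np.array([1.0, 0.0])            # prediction x_hat(k|k-1)
P_inv = np.eye(2)                         # inverse of P(k|k-1)
h = lambda x: x[0]                        # scalar observation function
sigma = lambda x: 0.5 + 0.1 * x[0]**2     # state-dependent variance
z = 1.2                                   # measurement
l = lambda x: neg_log_lik(x, z, h, sigma, x_pred, P_inv)
x_hat = newton_update(x_pred, l)
```

Starting from the prediction, as prescribed for (25), the iteration settles at a point with lower negative log-likelihood than the prediction itself; the analytic gradients and Hessians derived next replace the finite differences in an actual implementation.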

By using (26a), the gradient of the negative log-likelihood function is the n-vector whose l-th component is given by

∂l(x(k))/∂x_l = tr( P⁻¹(k|k−1) (x(k) − x̂(k|k−1)) e_l^T ) + ½ tr( Σ⁻¹(x(k)) (∂Σ(x(k))/∂x_l) ( I_p − Σ⁻¹(x(k)) ζ(x(k)) ζ^T(x(k)) ) ) − (∂h(x(k))/∂x_l)^T Σ⁻¹(x(k)) ζ(x(k))    (29)

where e_l is the l-th vector of the natural basis in IR^n, that is

e_l = (0 ... 0 1 0 ... 0)^T, with the 1 in the l-th position    (30)

and I_p is the identity matrix in IR^{p×p}. We note that for the case in which the covariance matrix Σ is not state-dependent, (29) reduces to the familiar innovation term of the extended Kalman filter. The Hessian term in (25) is the n×n matrix with lm entry given by

∂²l/∂x_l ∂x_m = tr( P⁻¹(k|k−1) e_m e_l^T ) + (∂h/∂x_l)^T Σ⁻¹ (∂h/∂x_m)
  + tr( Σ⁻¹ (∂Σ/∂x_l) Σ⁻¹ (∂Σ/∂x_m) ( Σ⁻¹ ζ ζ^T − ½ I_p ) )
  + (∂h/∂x_l)^T Σ⁻¹ (∂Σ/∂x_m) Σ⁻¹ ζ + (∂h/∂x_m)^T Σ⁻¹ (∂Σ/∂x_l) Σ⁻¹ ζ
  − (∂²h/∂x_l ∂x_m)^T Σ⁻¹ ζ
  + ½ tr( Σ⁻¹ (∂²Σ/∂x_l ∂x_m) ( I_p − Σ⁻¹ ζ ζ^T ) )    (31)

where the dependence on x(k) is omitted for brevity. We note that also in this case, by evaluating the expression (31) at x̂(k|k−1) and discarding second derivatives and terms that depend on the derivatives of the matrix Σ, we obtain the familiar expression for the extended Kalman filter, see for example 30. Since the Gauss-Newton method approximates the Newton-Raphson method by neglecting second derivatives in the computation of the Hessian, its application to a log-likelihood with state-independent observation covariance gives the extended Kalman filter update 2. We rewrite the Hessian in (31) as

∇_x ∇_x l(x(k)) = P⁻¹(k|k−1) + R(k)    (32a)

[R(k)]_lm = [ (∂h/∂x_l)^T Σ⁻¹ (∂h/∂x_m) + tr( Σ⁻¹ (∂Σ/∂x_l) Σ⁻¹ (∂Σ/∂x_m) ( Σ⁻¹ ζ ζ^T − ½ I_p ) ) + (∂h/∂x_l)^T Σ⁻¹ (∂Σ/∂x_m) Σ⁻¹ ζ + (∂h/∂x_m)^T Σ⁻¹ (∂Σ/∂x_l) Σ⁻¹ ζ − (∂²h/∂x_l ∂x_m)^T Σ⁻¹ ζ + ½ tr( Σ⁻¹ (∂²Σ/∂x_l ∂x_m) ( I_p − Σ⁻¹ ζ ζ^T ) ) ]_{x(k)}    (32b)

The Fisher information matrix is defined as the expected value of the square of the score, that is

F = E[ ∇_x l ∇_x^T l ]    (33)

However, for normal multivariate probability density functions¹ the following identity holds

E[ ∇_x l ∇_x^T l ] = E[ ∇_x ∇_x l ]    (34)

Note that the sign on the right-hand side is reversed with respect to the usual definition. This is due to the fact that l is the negative of the log-likelihood function, whereas the Fisher information matrix is defined in terms of the log-likelihood function. We use the fact that ζ is a multivariate normally distributed random vector with covariance Σ, and compute the Fisher information matrix from (32):

F = E[ ∇_x ∇_x l ] = P⁻¹(k|k−1) + R̄    (35a)
[R̄]_lm = E[R]_lm = (∂h/∂x_l)^T Σ⁻¹ (∂h/∂x_m) + ½ tr( Σ⁻¹ (∂Σ/∂x_l) Σ⁻¹ (∂Σ/∂x_m) )    (35b)

When the state and the observation equations are affine, the Newton-Raphson algorithm converges in a single step, and the covariance in (35) reduces to the Kalman filter error covariance derived in 42 for affine observations with state-dependent noise. From the approximation that x̂^(ι) is normally distributed, the posterior error covariance equals the inverse of the Fisher information matrix, 30, Section 2.3. Therefore

P^(ι)(k|k) = ( F^(ι)(k) )⁻¹ = ( P⁻¹(k|k−1) + R̄^(ι)(k) )⁻¹    (36)

We note that since P(k|k−1) and R̄^(ι)(k) are symmetric positive-definite, the matrix P^(ι)(k|k) is also symmetric positive-definite, see 4.

4 Summary of the iterated Kalman filter for state-dependent observation noise

4.1 General case

From the nonlinear state equation (1) we have the first-order state and error covariance predictions:

x̂(k|k−1) = f(x̂(k−1|k−1))    (37a)
P(k|k−1) = ∇_x f(x̂(k−1|k−1)) P(k−1|k−1) ∇_x^T f(x̂(k−1|k−1)) + Q(k)    (37b)

¹ More generally, the relation (34) holds for all probability distributions whose density functions satisfy a specific regularity condition, see 35, Proposition 3.1.
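The prediction step (37) can be sketched directly; the transition model below is the linear example of Section 2.3.1, for which the Jacobian of f is constant, and the numbers are illustrative.

```python
import numpy as np

# Prediction step (37): propagate the estimate through f and the
# covariance through the Jacobian of f. Linear example model (9).

def predict(x_est, P, f, jac_f, Q):
    """Return x_hat(k|k-1) and P(k|k-1) from x_hat(k-1|k-1), P(k-1|k-1)."""
    F = jac_f(x_est)                  # grad_x f evaluated at the estimate
    return f(x_est), F @ P @ F.T + Q

gamma = 0.1
F = np.array([[1, 0, gamma, 0],
              [0, 1, 0, gamma],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
f = lambda x: F @ x                   # linear transition, Section 2.3.1
jac_f = lambda x: F                   # Jacobian is constant here
Q = 0.01 * np.eye(4)

x0 = np.array([0.0, 0.0, 1.0, 1.0])   # position (0, 0), velocity (1, 1)
P0 = np.eye(4)
x_pred, P_pred = predict(x0, P0, f, jac_f, Q)
print(x_pred[:2])   # -> [0.1 0.1]
```

For a nonlinear f, only `jac_f` changes: it returns the Jacobian evaluated at the current estimate, which is exactly the first-order truncation used to derive (19) and (37).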

From the nonlinear measurement equation (3) we have the following first-order iterated extended Kalman filter updates:

x̂^(ι+1)(k|k) = x̂^(ι)(k|k) − ( P⁻¹(k|k−1) + R^(ι)(k) )⁻¹ s^(ι)(k)    (38a)

[s^(ι)(k)]_l = [ e_l^T P⁻¹(k|k−1) ( x̂^(ι)(k|k) − x̂(k|k−1) ) − (∂h/∂x_l)^T Σ⁻¹ ζ + ½ tr( Σ⁻¹ (∂Σ/∂x_l) ( I_p − Σ⁻¹ ζ ζ^T ) ) ]_{x̂^(ι)(k|k)}    (38b)

[R^(ι)(k)]_lm = [ (∂h/∂x_l)^T Σ⁻¹ (∂h/∂x_m) + tr( Σ⁻¹ (∂Σ/∂x_l) Σ⁻¹ (∂Σ/∂x_m) ( Σ⁻¹ ζ ζ^T − ½ I_p ) ) + (∂h/∂x_l)^T Σ⁻¹ (∂Σ/∂x_m) Σ⁻¹ ζ + (∂h/∂x_m)^T Σ⁻¹ (∂Σ/∂x_l) Σ⁻¹ ζ − (∂²h/∂x_l ∂x_m)^T Σ⁻¹ ζ + ½ tr( Σ⁻¹ (∂²Σ/∂x_l ∂x_m) ( I_p − Σ⁻¹ ζ ζ^T ) ) ]_{x̂^(ι)(k|k)}    (38c)

P^(ι)(k|k) = ( P⁻¹(k|k−1) + R̄^(ι)(k) )⁻¹    (38d)

[R̄^(ι)(k)]_lm = [ (∂h/∂x_l)^T Σ⁻¹ (∂h/∂x_m) + ½ tr( Σ⁻¹ (∂Σ/∂x_l) Σ⁻¹ (∂Σ/∂x_m) ) ]_{x̂^(ι)(k|k)}    (38e)

For a single-step iteration, (38) evaluated at x̂^(ι)(k|k) = x̂(k|k−1) gives the extended Kalman filter updates:

x̂(k|k) = x̂(k|k−1) − ( P⁻¹(k|k−1) + R^(0)(k) )⁻¹ s^(0)(k)    (39a)

[s^(0)(k)]_l = [ −(∂h/∂x_l)^T Σ⁻¹ ζ + ½ tr( Σ⁻¹ (∂Σ/∂x_l) ( I_p − Σ⁻¹ ζ ζ^T ) ) ]_{x̂(k|k−1)}    (39b)

[R^(0)(k)]_lm = [ (∂h/∂x_l)^T Σ⁻¹ (∂h/∂x_m) + tr( Σ⁻¹ (∂Σ/∂x_l) Σ⁻¹ (∂Σ/∂x_m) ( Σ⁻¹ ζ ζ^T − ½ I_p ) ) + (∂h/∂x_l)^T Σ⁻¹ (∂Σ/∂x_m) Σ⁻¹ ζ + (∂h/∂x_m)^T Σ⁻¹ (∂Σ/∂x_l) Σ⁻¹ ζ − (∂²h/∂x_l ∂x_m)^T Σ⁻¹ ζ + ½ tr( Σ⁻¹ (∂²Σ/∂x_l ∂x_m) ( I_p − Σ⁻¹ ζ ζ^T ) ) ]_{x̂(k|k−1)}    (39c)

P(k|k) = ( P⁻¹(k|k−1) + R̄^(0)(k) )⁻¹    (39d)

[R̄^(0)(k)]_lm = [ (∂h/∂x_l)^T Σ⁻¹ (∂h/∂x_m) + ½ tr( Σ⁻¹ (∂Σ/∂x_l) Σ⁻¹ (∂Σ/∂x_m) ) ]_{x̂(k|k−1)}    (39e)

4.2 Sensors with scalar output

We now specialize (38) and (39) to the case in which the output of the sensor is the scalar quantity z, as for the bearing-only sensors described in Section 2.3.2. Therefore the dimension of

the observation space is p = 1. We introduce the scalar-valued functions h and σ > 0, which represent the observation expected value and noise variance, respectively. The functions h and σ are therefore the scalar equivalents of h and Σ in Section 3. Moreover, we introduce the scalar function ζ := z − h. The iterated extended Kalman filter updates for sensors with scalar output are given by

x̂^(ι+1)(k|k) = x̂^(ι)(k|k) − ( P⁻¹(k|k−1) + R^(ι)(k) )⁻¹ s^(ι)(k)    (40a)

s^(ι)(k) = [ P⁻¹(k|k−1) ( x̂^(ι)(k|k) − x̂(k|k−1) ) − (ζ/σ) ∇_x h + (1/(2σ)) (1 − ζ²/σ) ∇_x σ ]_{x̂^(ι)(k|k)}    (40b)

R^(ι)(k) = [ (1/σ) ∇_x h ∇_x^T h + (ζ/σ²) ( ∇_x h ∇_x^T σ + ∇_x σ ∇_x^T h ) + (1/σ²) ( ζ²/σ − ½ ) ∇_x σ ∇_x^T σ − (ζ/σ) ∇_x ∇_x h + (1/(2σ)) (1 − ζ²/σ) ∇_x ∇_x σ ]_{x̂^(ι)(k|k)}    (40c)

P^(ι)(k|k) = ( P⁻¹(k|k−1) + R̄^(ι)(k) )⁻¹    (40d)

R̄^(ι)(k) = [ (1/σ) ∇_x h ∇_x^T h + (1/(2σ²)) ∇_x σ ∇_x^T σ ]_{x̂^(ι)(k|k)}    (40e)

For a one-step iteration with x̂^(ι)(k|k) = x̂(k|k−1) we obtain the extended Kalman filter for sensors with scalar output:

x̂(k|k) = x̂(k|k−1) − ( P⁻¹(k|k−1) + R^(0)(k) )⁻¹ s^(0)(k)    (41a)

s^(0)(k) = [ −(ζ/σ) ∇_x h + (1/(2σ)) (1 − ζ²/σ) ∇_x σ ]_{x̂(k|k−1)}    (41b)

R^(0)(k) = [ (1/σ) ∇_x h ∇_x^T h + (ζ/σ²) ( ∇_x h ∇_x^T σ + ∇_x σ ∇_x^T h ) + (1/σ²) ( ζ²/σ − ½ ) ∇_x σ ∇_x^T σ − (ζ/σ) ∇_x ∇_x h + (1/(2σ)) (1 − ζ²/σ) ∇_x ∇_x σ ]_{x̂(k|k−1)}    (41c)

P(k|k) = ( P⁻¹(k|k−1) + R̄^(0)(k) )⁻¹    (41d)

R̄^(0)(k) = [ (1/σ) ∇_x h ∇_x^T h + (1/(2σ²)) ∇_x σ ∇_x^T σ ]_{x̂(k|k−1)}    (41e)
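The one-step scalar update can be sketched as follows. The user supplies h, σ, and their first and second derivatives; the concrete models below are illustrative placeholders, not the bearing model.

```python
import numpy as np

# One-step update for a scalar measurement with state-dependent variance,
# following the structure of (41). grad_h, grad_s, hess_h, hess_s are the
# first and second derivatives of h and sigma; models are illustrative.

def ekf_update_scalar(x_pred, P_pred, z, h, sigma,
                      grad_h, grad_s, hess_h, hess_s):
    x = x_pred
    s_val = sigma(x)
    gh, gs = grad_h(x), grad_s(x)
    zeta = z - h(x)
    P_inv = np.linalg.inv(P_pred)
    # score s^(0), cf. (41b)
    score = (-(zeta / s_val) * gh
             + (1 - zeta**2 / s_val) / (2 * s_val) * gs)
    # Hessian correction R^(0), cf. (41c)
    R = (np.outer(gh, gh) / s_val
         + (zeta / s_val**2) * (np.outer(gh, gs) + np.outer(gs, gh))
         + (zeta**2 / s_val - 0.5) / s_val**2 * np.outer(gs, gs)
         - (zeta / s_val) * hess_h(x)
         + (1 - zeta**2 / s_val) / (2 * s_val) * hess_s(x))
    x_new = x - np.linalg.solve(P_inv + R, score)
    # posterior covariance via the Fisher information, cf. (41d)-(41e)
    R_bar = np.outer(gh, gh) / s_val + np.outer(gs, gs) / (2 * s_val**2)
    P_new = np.linalg.inv(P_inv + R_bar)
    return x_new, P_new

h = lambda x: x[0]
sigma = lambda x: 0.5 + 0.1 * x[0]**2
grad_h = lambda x: np.array([1.0, 0.0])
grad_s = lambda x: np.array([0.2 * x[0], 0.0])
hess_h = lambda x: np.zeros((2, 2))
hess_s = lambda x: np.diag([0.2, 0.0])

x_hat, P_hat = ekf_update_scalar(np.array([1.0, 0.0]), np.eye(2), 1.2,
                                 h, sigma, grad_h, grad_s, hess_h, hess_s)
```

With these numbers the estimate moves from the prediction toward the measurement, and the posterior covariance stays symmetric positive-definite, as guaranteed by (36); iterating the same step with the updated linearization point gives the iterated filter (40).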

5 Estimation in a decentralized communication network

5.1 Sensor network

In Section 4 we derived the update equations of an estimator that generalizes the extended Kalman filter and the iterated extended Kalman filter for a single sensor. The same analysis, with appropriate adaptation of the observation space dimension, directly applies to a centralized system in which a set of sensors communicate only with a central unit that has complete knowledge of the group 17. In this Section we consider a sensor network comprised of N nodes, in which all sensor nodes acquire measurements, and each sensor independently computes a local estimate of the system state x based on information sensed locally and on information communicated to it by other sensor nodes of the network. For a fully connected network, in which each sensor receives and sends all the measurements from and to the other sensors in the network at each update instant, the individual estimates are the same for all the sensors 27. Network topologies for which local estimates are identical through information sharing and local data fusion have been studied in 6, 17, 26. In contrast, we address the case where the local estimates may be different at any point in time. We briefly introduce the Bayesian formalism to describe estimation across a sensor network with a time-varying network topology. Consider the case in which there is no external unit receiving and sending information, so that the agents exchange information through direct communication. This leads to the definition of a decentralized communication network structure associated with the group of sensors. Individual agents locally acquire measurements z_i(k) at discrete times k according to the model in (3). In order to model the communication network, we define the time-dependent sets of integers I_i(k) ⊆ {1, 2, ..., N}, for i = 1, 2, ..., N, that represent the set of sensor nodes that communicate with sensor i at time k.
Note that i ∈ I_i(k) for all k, meaning that each sensor node has access to its local information. Therefore, if vehicle i does not receive any data at time k we have I_i(k) = {i}. Each local state estimate is denoted by x̂_i. Moreover, we assume that

p(z_i(k) | z_j(k), x(k)) = p(z_i(k) | x(k)) for all j ≠ i    (42)

This formalizes the assumption that measurements from different sensors are independent, see 1. A more refined structure that accounts for correlation between observations is discussed in the literature.

5.2 Bayesian implementation

We define the collection of all the information available to sensor i up to time k

Z_i(k) := ∪_{l=0}^{k} { z_j(l) : j ∈ I_i(l) }    (43)

A local estimate at sensor node i is computed from the posterior probability distribution p(x(k) | Z_i(k)), which accounts for all the data available to sensor i up to time k. As in (12), application of Bayes' rule, see 1, Appendix B.11, yields the posterior conditional distribution of x(k) given Z_i(k)

p(x(k) | Z_i(k)) = p(Z_i(k) | x(k)) p(x(k)) / p(Z_i(k))    (44)

Application of Bayes' rule as in (17) gives the recursive form

p(x(k) | Z_i(k)) = β_i(k) p(x(k) | Z_i(k−1)) ∏_{j ∈ I_i(k)} p(z_j(k) | x(k))    (45)

where the proportionality factor is β_i⁻¹(k) = p( {z_j(k) : j ∈ I_i(k)} | Z_i(k−1) ). The term p(x(k) | Z_i(k−1)) is the prior distribution, and accounts for sensor i's own measurements and for the measurements received by sensor i up to time k−1.

5.3 State prediction

State prediction in the case of a sensor network is almost identical to that presented in Section 3.2. For each sensor node i, we define the state estimate and the error covariance at time k given the individual and the received observations up to time l ≤ k as

x̂_i(k|l) := E[x(k) | Z_i(l)], i = 1, 2, ..., N    (46a)
P_i(k|l) := E[(x(k) − x̂_i(k|l))(x(k) − x̂_i(k|l))^T | Z_i(l)]    (46b)

It is assumed that for each sensor there exists a local state estimate x̂_i(k−1|k−1) at time k−1, with associated error covariance P_i(k−1|k−1). By following exactly the same steps as in Section 3.2, we obtain the individual state and error covariance predictions:

x̂_i(k|k−1) = f(x̂_i(k−1|k−1))    (47a)
P_i(k|k−1) = ∇_x f(x̂_i(k−1|k−1)) P_i(k−1|k−1) ∇_x^T f(x̂_i(k−1|k−1)) + Q(k)    (47b)

5.4 State update

The Bayesian scheme in (45), which is a direct consequence of assumption (42), is known as the independent likelihood pool, see 27. Its implementation relies on communication of either raw sensor data z_j(k) or likelihoods p(z_j(k) | x(k)). According to the type of data communicated, we derive the two corresponding update equations. The negative log-likelihood function for sensor i, with the sensor data received at time

k included, is given by

l_i(x(k)) = ½ Σ_{j ∈ I_i(k)} ( ln det Σ_j(x(k)) + (z_j(k) − h_j(x(k)))^T Σ_j⁻¹(x(k)) (z_j(k) − h_j(x(k))) ) + ½ (x(k) − x̂_i(k|k−1))^T P_i⁻¹(k|k−1) (x(k) − x̂_i(k|k−1)) + C    (48)

where C does not depend on x. The local state update is obtained by applying the algorithm explained in Section 3.3, that is, by using Newton-Raphson iterations to solve the unconstrained minimization problem

x̂_i(k|k) = argmin_x l_i(x(k))    (49)

5.4.1 Communication of raw sensor data

Sensor i computes a local state estimate update using Newton-Raphson iterations with initial guess x̂_i(k|k−1) to solve the unconstrained minimization problem (49). This procedure requires only the communication of raw sensor data, since the Newton-Raphson iterations are computed using the local prediction as initial guess. At the ι-th iteration we have

x̂_i^(ι+1)(k|k) = x̂_i^(ι)(k|k) − ( P_i⁻¹(k|k−1) + R_i^(ι)(k) )⁻¹ s_i^(ι)(k)    (50a)

[s_i^(ι)(k)]_l = [ e_l^T P_i⁻¹(k|k−1) ( x̂_i^(ι)(k|k) − x̂_i(k|k−1) ) + Σ_{j ∈ I_i(k)} ( −(∂h_j/∂x_l)^T Σ_j⁻¹ ζ_j + ½ tr( Σ_j⁻¹ (∂Σ_j/∂x_l) ( I_p − Σ_j⁻¹ ζ_j ζ_j^T ) ) ) ]_{x̂_i^(ι)(k|k)}    (50b)

[R_i^(ι)(k)]_lm = [ Σ_{j ∈ I_i(k)} ( (∂h_j/∂x_l)^T Σ_j⁻¹ (∂h_j/∂x_m) + tr( Σ_j⁻¹ (∂Σ_j/∂x_l) Σ_j⁻¹ (∂Σ_j/∂x_m) ( Σ_j⁻¹ ζ_j ζ_j^T − ½ I_p ) ) + (∂h_j/∂x_l)^T Σ_j⁻¹ (∂Σ_j/∂x_m) Σ_j⁻¹ ζ_j + (∂h_j/∂x_m)^T Σ_j⁻¹ (∂Σ_j/∂x_l) Σ_j⁻¹ ζ_j − (∂²h_j/∂x_l ∂x_m)^T Σ_j⁻¹ ζ_j + ½ tr( Σ_j⁻¹ (∂²Σ_j/∂x_l ∂x_m) ( I_p − Σ_j⁻¹ ζ_j ζ_j^T ) ) ) ]_{x̂_i^(ι)(k|k)}    (50c)

P_i^(ι)(k|k) = ( P_i⁻¹(k|k−1) + R̄_i^(ι)(k) )⁻¹    (50d)

[R̄_i^(ι)(k)]_lm = [ Σ_{j ∈ I_i(k)} ( (∂h_j/∂x_l)^T Σ_j⁻¹ (∂h_j/∂x_m) + ½ tr( Σ_j⁻¹ (∂Σ_j/∂x_l) Σ_j⁻¹ (∂Σ_j/∂x_m) ) ) ]_{x̂_i^(ι)(k|k)}    (50e)

The implementation of the algorithm (50) requires either the sensors to be homogeneous, or the models of the sensors j ∈ I_i(k) to be known by sensor i. By evaluating (50) at the prediction x̂_i(k|k−1), we obtain the extended Kalman filter for observation models with state-dependent noise.
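In the raw-data scheme, the per-sensor contributions to the score and to the information simply add over j ∈ I_i(k). The following sketch illustrates this accumulation for scalar sensors and a single Newton step from the local prediction; for simplicity it uses the expected (Fisher) information in place of the full Hessian correction, a Gauss-Newton-style simplification, and the models are illustrative placeholders.

```python
import numpy as np

# Decentralized raw-data update: sensor i accumulates the score and the
# Fisher-information contributions of every neighbor j in I_i(k), then
# takes one Newton step from its own prediction. Scalar sensors; this
# sketch uses the expected information (R-bar terms) in the step, a
# Gauss-Newton-style simplification of the full update.

def local_update(x_pred, P_pred, neighbors):
    """neighbors: list of (z, h, sigma, grad_h, grad_s) for j in I_i(k)."""
    n = len(x_pred)
    P_inv = np.linalg.inv(P_pred)
    score = np.zeros(n)
    info = P_inv.copy()
    for z, h, sigma, grad_h, grad_s in neighbors:
        s_val = sigma(x_pred)
        zeta = z - h(x_pred)
        gh, gs = grad_h(x_pred), grad_s(x_pred)
        score += (-(zeta / s_val) * gh
                  + (1 - zeta**2 / s_val) / (2 * s_val) * gs)
        info += np.outer(gh, gh) / s_val + np.outer(gs, gs) / (2 * s_val**2)
    x_new = x_pred - np.linalg.solve(info, score)
    P_new = np.linalg.inv(info)
    return x_new, P_new

h = lambda x: x[0]
sig = lambda x: 0.5 + 0.1 * x[0]**2
gh = lambda x: np.array([1.0, 0.0])
gs = lambda x: np.array([0.2 * x[0], 0.0])
obs = [(1.2, h, sig, gh, gs), (1.1, h, sig, gh, gs)]   # two neighbors
x_hat, P_hat = local_update(np.array([1.0, 0.0]), np.eye(2), obs)
```

Because each neighbor contributes an additive term, a second measurement shrinks the posterior covariance further than one would, which is the mechanism exploited by the assimilation equations of Section 5.4.2.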

For complete communication networks we have I_i(k) = {1, ..., N} for all i, k. This structure is equivalent to the centralized one, in which there is an external unit receiving and sending data to the vehicles, and the prior estimates in (38) and (39) are common to all the vehicles 17. For sensors with scalar output, as the bearing-only sensors described in Section 2.3.2, the update equations are

x̂_i^(ι+1)(k|k) = x̂_i^(ι)(k|k) − ( P_i⁻¹(k|k−1) + R_i^(ι)(k) )⁻¹ s_i^(ι)(k)    (51a)

s_i^(ι)(k) = [ P_i⁻¹(k|k−1) ( x̂_i^(ι)(k|k) − x̂_i(k|k−1) ) + Σ_{j ∈ I_i(k)} ( −(ζ_j/σ_j) ∇_x h_j + (1/(2σ_j)) (1 − ζ_j²/σ_j) ∇_x σ_j ) ]_{x̂_i^(ι)(k|k)}    (51b)

R_i^(ι)(k) = [ Σ_{j ∈ I_i(k)} ( (1/σ_j) ∇_x h_j ∇_x^T h_j + (ζ_j/σ_j²) ( ∇_x h_j ∇_x^T σ_j + ∇_x σ_j ∇_x^T h_j ) + (1/σ_j²) ( ζ_j²/σ_j − ½ ) ∇_x σ_j ∇_x^T σ_j − (ζ_j/σ_j) ∇_x ∇_x h_j + (1/(2σ_j)) (1 − ζ_j²/σ_j) ∇_x ∇_x σ_j ) ]_{x̂_i^(ι)(k|k)}    (51c)

P_i^(ι)(k|k) = ( P_i⁻¹(k|k−1) + R̄_i^(ι)(k) )⁻¹    (51d)

R̄_i^(ι)(k) = [ Σ_{j ∈ I_i(k)} ( (1/σ_j) ∇_x h_j ∇_x^T h_j + (1/(2σ_j²)) ∇_x σ_j ∇_x^T σ_j ) ]_{x̂_i^(ι)(k|k)}    (51e)

5.4.2 Communication of likelihoods

In some cases it is desirable to share the likelihoods p(z_j(k) | x(k)) instead of the sensor measurements as in algorithm (50). Since in our setting the likelihoods are Gaussian, this is equivalent to sharing the estimate x̂_j and the covariance P_j. When communicating likelihoods, the estimates x̂_j(k|k−1), j ∈ I_i(k), are the initial guesses for the Newton-Raphson iterations that are used to solve (49). For a one-step iteration process, we obtain the following extended Kalman filter updates

x̂_i(k|k) = x̂_i(k|k−1) − ( P_i⁻¹(k|k−1) + R_i^(0)(k) )⁻¹ s_i^(0)(k)    (52a)

[s_i^(0)(k)]_l = [ Σ_{j ∈ I_i(k)} ( −(∂h_j/∂x_l)^T Σ_j⁻¹ ζ_j + ½ tr( Σ_j⁻¹ (∂Σ_j/∂x_l) ( I_p − Σ_j⁻¹ ζ_j ζ_j^T ) ) ) ]_{x̂_j(k|k−1)}    (52b)

    \big( R^{(0)}_{i,k} \big)_{lm} = \sum_{j \in I_i(k)} \Big[ \frac{\partial h_j^T}{\partial x_l} \Sigma_j^{-1} \frac{\partial h_j}{\partial x_m} - \zeta_j^T \Sigma_j^{-1} \frac{\partial^2 h_j}{\partial x_l \partial x_m} + \zeta_j^T \Sigma_j^{-1} \Big( \frac{\partial \Sigma_j}{\partial x_l} \Sigma_j^{-1} \frac{\partial h_j}{\partial x_m} + \frac{\partial \Sigma_j}{\partial x_m} \Sigma_j^{-1} \frac{\partial h_j}{\partial x_l} \Big) + \frac{1}{2} \mathrm{tr}\Big( \Sigma_j^{-1} \frac{\partial^2 \Sigma_j}{\partial x_l \partial x_m} \big( I_p - \Sigma_j^{-1} \zeta_j \zeta_j^T \big) \Big) + \mathrm{tr}\Big( \Sigma_j^{-1} \frac{\partial \Sigma_j}{\partial x_l} \Sigma_j^{-1} \frac{\partial \Sigma_j}{\partial x_m} \Big( \Sigma_j^{-1} \zeta_j \zeta_j^T - \frac{1}{2} I_p \Big) \Big) \Big] \Big|_{x = \hat{x}_{k|k-1}}    (52c)

    P_{i,k|k} = \big( P_{i,k|k-1}^{-1} + \bar{R}^{(0)}_{i,k} \big)^{-1}    (52d)

    \big( \bar{R}^{(0)}_{i,k} \big)_{lm} = \sum_{j \in I_i(k)} \Big[ \frac{\partial h_j^T}{\partial x_l} \Sigma_j^{-1} \frac{\partial h_j}{\partial x_m} + \frac{1}{2} \mathrm{tr}\Big( \Sigma_j^{-1} \frac{\partial \Sigma_j}{\partial x_l} \Sigma_j^{-1} \frac{\partial \Sigma_j}{\partial x_m} \Big) \Big] \Big|_{x = \hat{x}_{k|k-1}}    (52e)

For any time instant k at which there is no communication event relative to vehicle i, that is, for I_i(k) = {i}, we modify the notation in (38) and define the partial estimates based only on the i-th sensor's own measurement:

    \bar{x}_{i,k|k} = \hat{x}_{i,k|k-1} - \bar{S}^{(0)}_{i,k} \bar{s}^{(0)}_{i,k}    (53a)

    \big( \bar{S}^{(0)}_{i,k} \big)^{-1} = P_{i,k|k-1}^{-1} + \tilde{R}^{(0)}_{i,k}    (53b)

    \bar{P}_{i,k|k}^{-1} = P_{i,k|k-1}^{-1} + \tilde{\bar{R}}^{(0)}_{i,k}    (53c)

where \bar{s}^{(0)}_{i,k}, \tilde{R}^{(0)}_{i,k}, and \tilde{\bar{R}}^{(0)}_{i,k}, given by (54a)-(54c), are obtained from (52b), (52c), and (52e), respectively, by restricting the sums over I_i(k) to the single term j = i and evaluating at the local prediction \hat{x}_{i,k|k-1}.

By combining (52d), (52e), and (54c) we obtain the covariance assimilation equation [11]

    P_{i,k|k}^{-1} = P_{i,k|k-1}^{-1} + \sum_{j \in I_i(k)} \big( \bar{P}_{j,k|k}^{-1} - P_{j,k|k-1}^{-1} \big)    (55)

which trivially holds for I_i(k) = {i}. In order to obtain the state estimate assimilation equation, we define

    S^{(0)}_{i,k} = \big( P_{i,k|k-1}^{-1} + R^{(0)}_{i,k} \big)^{-1}    (56)

and, by using definitions (52c) and (54b), we write

    \big( S^{(0)}_{i,k} \big)^{-1} = P_{i,k|k-1}^{-1} + \sum_{j \in I_i(k)} \Big( \big( \bar{S}^{(0)}_{j,k} \big)^{-1} - P_{j,k|k-1}^{-1} \Big)    (57)
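The covariance assimilation (55) adds, on top of the shared prior information, the new information contributed by each neighbour's partial estimate. A small numerical check (linear measurement models with illustrative values, not from the report) shows that, with a common prior, the assimilated covariance reproduces the centralized batch update:

```python
import numpy as np

def assimilate_covariance(P_pred, partial_covs):
    """Covariance assimilation in information form, assuming a prior
    common to all sensors: each neighbour contributes its information
    gain P_j^-1 - P_pred^-1 on top of the prediction."""
    info_pred = np.linalg.inv(P_pred)
    info = info_pred + sum(np.linalg.inv(P_j) - info_pred
                           for P_j in partial_covs)
    return np.linalg.inv(info)
```

Because the measurement noises are mutually independent, subtracting the common prior from each partial posterior removes the double-counted prior information, which is why the sum matches the centralized information matrix.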

Moreover, from (53a) and (52b) we have

    s^{(0)}_{i,k} = \sum_{j \in I_i(k)} \bar{s}^{(0)}_{j,k} = - \sum_{j \in I_i(k)} \big( \bar{S}^{(0)}_{j,k} \big)^{-1} \big( \bar{x}_{j,k|k} - \hat{x}_{j,k|k-1} \big)    (58)

Substitution from (57) and (58) into (52a) gives the state estimate assimilation equation

    \hat{x}_{i,k|k} = \hat{x}_{i,k|k-1} + \Big[ P_{i,k|k-1}^{-1} + \sum_{j \in I_i(k)} \Big( \big( \bar{S}^{(0)}_{j,k} \big)^{-1} - P_{j,k|k-1}^{-1} \Big) \Big]^{-1} \sum_{j \in I_i(k)} \big( \bar{S}^{(0)}_{j,k} \big)^{-1} \big( \bar{x}_{j,k|k} - \hat{x}_{j,k|k-1} \big)    (59)

The covariance assimilation and state assimilation equations have been derived in [17, 11] for the extended Kalman filter and for the extended information filter. Whereas the assimilation equation for the covariance has the same structure, the state assimilation equation is different because the extended Kalman filter state update is linear in \zeta, while for the case studied here the dependence on \zeta is much more involved; see (52). The implementation of the state assimilation equations requires the communication of the covariance error information, the Hessian error information, and the state error information, respectively given by

    \bar{P}_{j,k|k}^{-1} - P_{j,k|k-1}^{-1},   \big( \bar{S}^{(0)}_{j,k} \big)^{-1} - P_{j,k|k-1}^{-1},   \big( \bar{S}^{(0)}_{j,k} \big)^{-1} \big( \bar{x}_{j,k|k} - \hat{x}_{j,k|k-1} \big)    (60)

for a total of n(2n+1) scalars. The size of the communication package can be reduced to n(n+1) scalars if sensor i has knowledge of the sensor models of the other sensors, from which \bar{P}_{j,k|k} can be computed as

    \bar{P}_{j,k|k}^{-1} = E\Big[ \big( \bar{S}^{(0)}_{j,k} \big)^{-1} \Big]    (61)

As in the case studied in Section 5.4.1, for complete communication networks, defined by I_i(k) = {1, ..., N} for all i, k, the decentralized structure is equivalent to the centralized one, and the prior estimates are common to all the vehicles [17]. To summarize, each sensor computes a partial estimate using (53) and (54), and assimilates any estimates received through communication by using (59) and (55). If no data is received at time k, equations (59) and (55) reduce to identities.

6 Conclusions

We have derived the update equations for a class of filters that generalize the extended Kalman filter and the iterated extended Kalman filter to the case in which the noise related
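In the linear-Gaussian special case the state assimilation is exact, which gives a convenient sanity check. The sketch below is illustrative only: it assumes hypothetical linear sensors z_j = H_j x + v_j with noise variance r_j and a common prior, computes each sensor's partial estimate, and then assimilates them by combining the information matrices and information-weighted corrections as in the assimilation equation.

```python
import numpy as np

def partial_update(x_pred, P_pred, Hj, rj, zj):
    """Partial estimate of one sensor for a (hypothetical) linear
    model z_j = H_j x + v_j, var(v_j) = r_j: returns the partial
    state and its information matrix."""
    S_inv = np.linalg.inv(P_pred) + np.outer(Hj, Hj) / rj
    xbar = x_pred + np.linalg.solve(S_inv, Hj * (zj - Hj @ x_pred) / rj)
    return xbar, S_inv

def assimilate_state(x_pred, P_pred, partials):
    """State assimilation with a common prior: the prediction plus the
    information-weighted sum of the neighbours' corrections."""
    info_pred = np.linalg.inv(P_pred)
    info = info_pred.copy()
    corr = np.zeros_like(x_pred)
    for xbar_j, S_inv_j in partials:
        info += S_inv_j - info_pred     # neighbour's information gain
        corr += S_inv_j @ (xbar_j - x_pred)
    return x_pred + np.linalg.solve(info, corr)
```

For linear sensors each weighted correction collapses to H_j^T (z_j - H_j x_pred) / r_j, so the assimilated estimate coincides with the centralized batch update.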

to a nonlinear observation model is a known function of the state to be estimated. The proposed filters are suboptimal in the same sense as the extended Kalman filter, since the probability density function associated with the state estimator is approximated as Gaussian, which leads to the propagation of only the first two statistical moments.

We have also considered a communication network structure consisting of a set of sensors that measure the state of a system. From a Bayesian formulation of the network, we have derived the filter update equations that account for information sharing among sensors. Each sensor maintains an individual estimator, and its individual predictions are updated by using individual and received data. This improves the local estimates with respect to the case of a single vehicle operating individually, and therefore relying only on its own measurements.

This work can be applied to formation control problems in which a group of mobile sensors attempts to configure itself spatially in order to minimize the noise associated with its measurements of a mobile target, and therefore obtain the best estimate of the target.

References

[1] Y. Bar-Shalom and T. E. Fortmann. Tracking and Data Association. Academic Press.
[2] B. M. Bell and F. W. Cathey. The iterated Kalman filter update as a Gauss-Newton method. IEEE Transactions on Automatic Control, 38(2).
[3] C. Belta and V. Kumar. Abstraction and control for groups of robots. IEEE Transactions on Robotics, 20(5), October.
[4] D. S. Bernstein. Matrix Mathematics. Princeton University Press.
[5] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice Hall, Englewood Cliffs, New Jersey.
[6] F. Bourgault and H. F. Durrant-Whyte. Communication in general decentralized filters and the coordinated search strategy. In Proceedings of the Seventh International Conference on Information Fusion, Stockholm, Sweden, 28 June - 1 July.
[7] T. H. Chung, J. W. Burdick, and R. M. Murray. Decentralized motion control of mobile sensing agents in a network. In Proceedings of the IEEE International Conference on Robotics and Automation, Orlando, Florida, May.
[8] T. H. Chung, V. Gupta, J. W. Burdick, and R. M. Murray. On a decentralized active sensing strategy using mobile sensor platforms in a network. In Proceedings of the IEEE Conference on Decision and Control, Paradise Island, Bahamas, December.
[9] J. Cortés, S. Martínez, T. Karatas, and F. Bullo. Coverage control for mobile sensing networks. IEEE Transactions on Robotics and Automation, 20(2).
[10] K. Doğançay. On the efficiency of a bearings-only instrumental variable estimator for a target motion analysis. Signal Processing, 85.
[11] H. F. Durrant-Whyte, B. Y. S. Rao, and H. Hu. Toward a fully decentralized architecture for multi-sensor data fusion. In Proceedings of the IEEE International Conference on Robotics and Automation, volume 2, May.
[12] A. Farina. Target tracking with bearings-only measurements. Signal Processing, 78:61-78.
[13] J. A. Fax and R. M. Murray. Information flow and cooperative control of vehicle formations. IEEE Transactions on Automatic Control, 49(9), September.
[14] R. A. Freeman, P. Yang, and K. M. Lynch. Distributed estimation and control of swarm formation statistics. In Proceedings of the American Control Conference, Minneapolis, Minnesota, June.

[15] N. Friedman, D. Geiger, and M. Goldszmidt. Bayesian network classifiers. Machine Learning, 29(2), October.
[16] A. Gadre, M. Roan, and D. J. Stilwell. Sensor model for a uniform linear array. Technical report, VaCAS.
[17] S. Grime and H. F. Durrant-Whyte. Data fusion in decentralized sensor networks. Control Engineering Practice, 2(5).
[18] R. A. Iltis and K. L. Anderson. A consistent estimation criterion for multisensor bearing-only tracking. IEEE Transactions on Aerospace and Electronic Systems, 32(1), January.
[19] K. Ito and K. Xiong. Gaussian filters for nonlinear filtering problems. IEEE Transactions on Automatic Control, 45(5), May.
[20] S. J. Julier, J. K. Uhlmann, and H. F. Durrant-Whyte. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Transactions on Automatic Control, 45(3).
[21] J. P. Le Cadre and C. Jauffret. On the convergence of iterative methods for bearing-only tracking. IEEE Transactions on Aerospace and Electronic Systems, 35(3), July.
[22] T. Lefebvre, H. Bruyninckx, and J. De Schutter. Comment on "A new method for the nonlinear transformation of means and covariances in filters and estimators". IEEE Transactions on Automatic Control, 47(8).
[23] T. Lefebvre, H. Bruyninckx, and J. De Schutter. Kalman filters for non-linear systems: a comparison of performance. International Journal of Control, 77(7).
[24] A. Logothetis, A. Isaksson, and R. J. Evans. Comparison of suboptimal strategies for optimal own-ship maneuvers in bearing-only tracking. In Proceedings of the American Control Conference, Philadelphia, Pennsylvania, June.
[25] A. Logothetis, A. Isaksson, and R. J. Evans. An information theoretic approach to observer path design for bearings-only tracking. In Proceedings of the 36th Conference on Decision and Control, San Diego, California, December.
[26] A. Makarenko and H. F. Durrant-Whyte. Decentralized data fusion and control in active sensor networks. In Proceedings of the Seventh International Conference on Information Fusion, Stockholm, Sweden, 28 June - 1 July.
[27] J. Manyika and H. Durrant-Whyte. Data Fusion and Sensor Management: A Decentralized Information-Theoretic Approach. Ellis Horwood, London.
[28] S. Martínez and F. Bullo. Optimal sensor placement and motion coordination for target tracking. Automatica, 42(4).

[29] P. S. Maybeck. Stochastic Models, Estimation, and Control, volume 2 of Mathematics in Science and Engineering. Academic Press, New York.
[30] A. G. O. Mutambara. Decentralized Estimation and Control for Multisensor Systems. CRC Press, Boca Raton, Florida.
[31] M. Nørgaard, N. Poulsen, and O. Ravn. New developments in state estimation for nonlinear systems. Automatica, 36(11).
[32] Y. Oshman and P. Davidson. Optimization of observer trajectories for bearings-only target localization. IEEE Transactions on Aerospace and Electronic Systems, 35(3).
[33] M. Porfiri, D. G. Roberson, and D. J. Stilwell. Tracking and formation control of multiple autonomous agents: A two-level consensus approach. Automatica, 43(8).
[34] T. S. Schei. A finite-difference method for linearisation in nonlinear estimation algorithms. Automatica, 33(11).
[35] J. Shao. Mathematical Statistics. Springer Texts in Statistics. Springer-Verlag, New York.
[36] G. Sibley, G. Sukhatme, and L. Matthies. The iterated sigma point Kalman filter with application to long range stereo. In Proceedings of Robotics: Science and Systems, Philadelphia, PA, August.
[37] S. Simic and S. Sastry. Distributed environmental monitoring using random sensor networks. In Proceedings of the 2nd International Workshop on Information Processing in Sensor Networks, Palo Alto, CA.
[38] S. Susca, S. Martínez, and F. Bullo. Monitoring environmental boundaries with a robotic sensor network. In Proceedings of the American Control Conference.
[39] S. Thrun, Y. Liu, D. Koller, and A. Y. Ng. Simultaneous localization and mapping with sparse extended information filters. International Journal of Robotics Research, 23(7-8), July-August.
[40] R. van der Merwe. Sigma-Point Kalman Filters for Probabilistic Inference in Dynamic State-Space Models. PhD thesis, Oregon Health & Science University, OGI School of Science & Engineering.
[41] P. Yang, R. A. Freeman, and K. M. Lynch. Distributed cooperative active sensing using consensus filters. In Proceedings of the IEEE International Conference on Robotics and Automation, Roma, Italy, February.
[42] B. Zehnwirth. A generalization of the Kalman filter for models with state-dependent observation variance. Journal of the American Statistical Association, 83(401).

[43] R. Zhan and J. Wan. Iterated unscented Kalman filter for passive target tracking. IEEE Transactions on Aerospace and Electronic Systems, 43(3), July.
[44] K. X. Zhou and S. I. Roumeliotis. Optimal motion strategies for range-only distributed target tracking. In Proceedings of the American Control Conference, Minneapolis, Minnesota, June.


Optimization-Based Control Optimization-Based Control Richard M. Murray Control and Dynamical Systems California Institute of Technology DRAFT v1.7a, 19 February 2008 c California Institute of Technology All rights reserved. This

More information

Supplementary Note on Bayesian analysis

Supplementary Note on Bayesian analysis Supplementary Note on Bayesian analysis Structured variability of muscle activations supports the minimal intervention principle of motor control Francisco J. Valero-Cuevas 1,2,3, Madhusudhan Venkadesan

More information

Moving Horizon Filter for Monotonic Trends

Moving Horizon Filter for Monotonic Trends 43rd IEEE Conference on Decision and Control December 4-7, 2004 Atlantis, Paradise Island, Bahamas ThA.3 Moving Horizon Filter for Monotonic Trends Sikandar Samar Stanford University Dimitry Gorinevsky

More information

Bayes Filter Reminder. Kalman Filter Localization. Properties of Gaussians. Gaussians. Prediction. Correction. σ 2. Univariate. 1 2πσ e.

Bayes Filter Reminder. Kalman Filter Localization. Properties of Gaussians. Gaussians. Prediction. Correction. σ 2. Univariate. 1 2πσ e. Kalman Filter Localization Bayes Filter Reminder Prediction Correction Gaussians p(x) ~ N(µ,σ 2 ) : Properties of Gaussians Univariate p(x) = 1 1 2πσ e 2 (x µ) 2 σ 2 µ Univariate -σ σ Multivariate µ Multivariate

More information

This is the author s version of a work that was submitted/accepted for publication in the following source:

This is the author s version of a work that was submitted/accepted for publication in the following source: This is the author s version of a work that was submitted/accepted for publication in the following source: Ridley, Matthew, Upcroft, Ben, Ong, Lee Ling., Kumar, Suresh, & Sukkarieh, Salah () Decentralised

More information

Battery Level Estimation of Mobile Agents Under Communication Constraints

Battery Level Estimation of Mobile Agents Under Communication Constraints Battery Level Estimation of Mobile Agents Under Communication Constraints Jonghoek Kim, Fumin Zhang, and Magnus Egerstedt Electrical and Computer Engineering, Georgia Institute of Technology, USA jkim37@gatech.edu,fumin,

More information

A Hyperparameter-Based Approach for Consensus Under Uncertainties

A Hyperparameter-Based Approach for Consensus Under Uncertainties A Hyperparameter-Based Approach for Consensus Under Uncertainties The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 11 Project

More information

Robotics 2 Target Tracking. Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard

Robotics 2 Target Tracking. Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard Robotics 2 Target Tracking Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard Slides by Kai Arras, Gian Diego Tipaldi, v.1.1, Jan 2012 Chapter Contents Target Tracking Overview Applications

More information

Performance Evaluation of Local State Estimation Methods in Bearings-only Tracking Problems

Performance Evaluation of Local State Estimation Methods in Bearings-only Tracking Problems 14th International Conference on Information Fusion Chicago, Illinois, USA, July -8, 211 Performance Evaluation of Local State Estimation ethods in Bearings-only Tracing Problems Ondřej Straa, Jindřich

More information

Algorithmisches Lernen/Machine Learning

Algorithmisches Lernen/Machine Learning Algorithmisches Lernen/Machine Learning Part 1: Stefan Wermter Introduction Connectionist Learning (e.g. Neural Networks) Decision-Trees, Genetic Algorithms Part 2: Norman Hendrich Support-Vector Machines

More information

Expectation Propagation in Dynamical Systems

Expectation Propagation in Dynamical Systems Expectation Propagation in Dynamical Systems Marc Peter Deisenroth Joint Work with Shakir Mohamed (UBC) August 10, 2012 Marc Deisenroth (TU Darmstadt) EP in Dynamical Systems 1 Motivation Figure : Complex

More information

Introduction to Unscented Kalman Filter

Introduction to Unscented Kalman Filter Introduction to Unscented Kalman Filter 1 Introdution In many scientific fields, we use certain models to describe the dynamics of system, such as mobile robot, vision tracking and so on. The word dynamics

More information

Dual Estimation and the Unscented Transformation

Dual Estimation and the Unscented Transformation Dual Estimation and the Unscented Transformation Eric A. Wan ericwan@ece.ogi.edu Rudolph van der Merwe rudmerwe@ece.ogi.edu Alex T. Nelson atnelson@ece.ogi.edu Oregon Graduate Institute of Science & Technology

More information

The Kalman filter is arguably one of the most notable algorithms

The Kalman filter is arguably one of the most notable algorithms LECTURE E NOTES «Kalman Filtering with Newton s Method JEFFREY HUMPHERYS and JEREMY WEST The Kalman filter is arguably one of the most notable algorithms of the 0th century [1]. In this article, we derive

More information

Parameterized Joint Densities with Gaussian Mixture Marginals and their Potential Use in Nonlinear Robust Estimation

Parameterized Joint Densities with Gaussian Mixture Marginals and their Potential Use in Nonlinear Robust Estimation Proceedings of the 2006 IEEE International Conference on Control Applications Munich, Germany, October 4-6, 2006 WeA0. Parameterized Joint Densities with Gaussian Mixture Marginals and their Potential

More information

ENGR352 Problem Set 02

ENGR352 Problem Set 02 engr352/engr352p02 September 13, 2018) ENGR352 Problem Set 02 Transfer function of an estimator 1. Using Eq. (1.1.4-27) from the text, find the correct value of r ss (the result given in the text is incorrect).

More information

Lecture Outline. Target Tracking: Lecture 3 Maneuvering Target Tracking Issues. Maneuver Illustration. Maneuver Illustration. Maneuver Detection

Lecture Outline. Target Tracking: Lecture 3 Maneuvering Target Tracking Issues. Maneuver Illustration. Maneuver Illustration. Maneuver Detection REGLERTEKNIK Lecture Outline AUTOMATIC CONTROL Target Tracking: Lecture 3 Maneuvering Target Tracking Issues Maneuver Detection Emre Özkan emre@isy.liu.se Division of Automatic Control Department of Electrical

More information

State Estimation by IMM Filter in the Presence of Structural Uncertainty 1

State Estimation by IMM Filter in the Presence of Structural Uncertainty 1 Recent Advances in Signal Processing and Communications Edited by Nios Mastorais World Scientific and Engineering Society (WSES) Press Greece 999 pp.8-88. State Estimation by IMM Filter in the Presence

More information

BAYESIAN ESTIMATION OF LINEAR STATISTICAL MODEL BIAS

BAYESIAN ESTIMATION OF LINEAR STATISTICAL MODEL BIAS BAYESIAN ESTIMATION OF LINEAR STATISTICAL MODEL BIAS Andrew A. Neath 1 and Joseph E. Cavanaugh 1 Department of Mathematics and Statistics, Southern Illinois University, Edwardsville, Illinois 606, USA

More information

Performance Analysis of an Adaptive Algorithm for DOA Estimation

Performance Analysis of an Adaptive Algorithm for DOA Estimation Performance Analysis of an Adaptive Algorithm for DOA Estimation Assimakis K. Leros and Vassilios C. Moussas Abstract This paper presents an adaptive approach to the problem of estimating the direction

More information

Stochastic Spectral Approaches to Bayesian Inference

Stochastic Spectral Approaches to Bayesian Inference Stochastic Spectral Approaches to Bayesian Inference Prof. Nathan L. Gibson Department of Mathematics Applied Mathematics and Computation Seminar March 4, 2011 Prof. Gibson (OSU) Spectral Approaches to

More information

Benjamin L. Pence 1, Hosam K. Fathy 2, and Jeffrey L. Stein 3

Benjamin L. Pence 1, Hosam K. Fathy 2, and Jeffrey L. Stein 3 2010 American Control Conference Marriott Waterfront, Baltimore, MD, USA June 30-July 02, 2010 WeC17.1 Benjamin L. Pence 1, Hosam K. Fathy 2, and Jeffrey L. Stein 3 (1) Graduate Student, (2) Assistant

More information

Expectation Propagation for Approximate Bayesian Inference

Expectation Propagation for Approximate Bayesian Inference Expectation Propagation for Approximate Bayesian Inference José Miguel Hernández Lobato Universidad Autónoma de Madrid, Computer Science Department February 5, 2007 1/ 24 Bayesian Inference Inference Given

More information

Introduction p. 1 Fundamental Problems p. 2 Core of Fundamental Theory and General Mathematical Ideas p. 3 Classical Statistical Decision p.

Introduction p. 1 Fundamental Problems p. 2 Core of Fundamental Theory and General Mathematical Ideas p. 3 Classical Statistical Decision p. Preface p. xiii Acknowledgment p. xix Introduction p. 1 Fundamental Problems p. 2 Core of Fundamental Theory and General Mathematical Ideas p. 3 Classical Statistical Decision p. 4 Bayes Decision p. 5

More information

A Serial Approach to Handling High-Dimensional Measurements in the Sigma-Point Kalman Filter

A Serial Approach to Handling High-Dimensional Measurements in the Sigma-Point Kalman Filter Robotics: Science and Systems 2011 Los Angeles, CA, USA, June 27-30, 2011 A Serial Approach to Handling High-Dimensional Measurements in the Sigma-Point Kalman Filter Colin McManus and Timothy D. Barfoot

More information

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) =

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) = Until now we have always worked with likelihoods and prior distributions that were conjugate to each other, allowing the computation of the posterior distribution to be done in closed form. Unfortunately,

More information

Efficient Monitoring for Planetary Rovers

Efficient Monitoring for Planetary Rovers International Symposium on Artificial Intelligence and Robotics in Space (isairas), May, 2003 Efficient Monitoring for Planetary Rovers Vandi Verma vandi@ri.cmu.edu Geoff Gordon ggordon@cs.cmu.edu Carnegie

More information

ON SEPARATION PRINCIPLE FOR THE DISTRIBUTED ESTIMATION AND CONTROL OF FORMATION FLYING SPACECRAFT

ON SEPARATION PRINCIPLE FOR THE DISTRIBUTED ESTIMATION AND CONTROL OF FORMATION FLYING SPACECRAFT ON SEPARATION PRINCIPLE FOR THE DISTRIBUTED ESTIMATION AND CONTROL OF FORMATION FLYING SPACECRAFT Amir Rahmani (), Olivia Ching (2), and Luis A Rodriguez (3) ()(2)(3) University of Miami, Coral Gables,

More information

New Advances in Uncertainty Analysis and Estimation

New Advances in Uncertainty Analysis and Estimation New Advances in Uncertainty Analysis and Estimation Overview: Both sensor observation data and mathematical models are used to assist in the understanding of physical dynamic systems. However, observational

More information

Covariance Matrix Simplification For Efficient Uncertainty Management

Covariance Matrix Simplification For Efficient Uncertainty Management PASEO MaxEnt 2007 Covariance Matrix Simplification For Efficient Uncertainty Management André Jalobeanu, Jorge A. Gutiérrez PASEO Research Group LSIIT (CNRS/ Univ. Strasbourg) - Illkirch, France *part

More information

A Centralized Control Algorithm for Target Tracking with UAVs

A Centralized Control Algorithm for Target Tracking with UAVs A Centralized Control Algorithm for Tracing with UAVs Pengcheng Zhan, David W. Casbeer, A. Lee Swindlehurst Dept. of Elec. & Comp. Engineering, Brigham Young University, Provo, UT, USA, 8462 Telephone:

More information

State Estimation of Linear and Nonlinear Dynamic Systems

State Estimation of Linear and Nonlinear Dynamic Systems State Estimation of Linear and Nonlinear Dynamic Systems Part I: Linear Systems with Gaussian Noise James B. Rawlings and Fernando V. Lima Department of Chemical and Biological Engineering University of

More information

A new unscented Kalman filter with higher order moment-matching

A new unscented Kalman filter with higher order moment-matching A new unscented Kalman filter with higher order moment-matching KSENIA PONOMAREVA, PARESH DATE AND ZIDONG WANG Department of Mathematical Sciences, Brunel University, Uxbridge, UB8 3PH, UK. Abstract This

More information

The likelihood for a state space model

The likelihood for a state space model Biometrika (1988), 75, 1, pp. 165-9 Printed in Great Britain The likelihood for a state space model BY PIET DE JONG Faculty of Commerce and Business Administration, University of British Columbia, Vancouver,

More information

Lecture 6: Bayesian Inference in SDE Models

Lecture 6: Bayesian Inference in SDE Models Lecture 6: Bayesian Inference in SDE Models Bayesian Filtering and Smoothing Point of View Simo Särkkä Aalto University Simo Särkkä (Aalto) Lecture 6: Bayesian Inference in SDEs 1 / 45 Contents 1 SDEs

More information

Gaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012

Gaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012 Gaussian Processes Le Song Machine Learning II: Advanced Topics CSE 8803ML, Spring 01 Pictorial view of embedding distribution Transform the entire distribution to expected features Feature space Feature

More information

State estimation of linear dynamic system with unknown input and uncertain observation using dynamic programming

State estimation of linear dynamic system with unknown input and uncertain observation using dynamic programming Control and Cybernetics vol. 35 (2006) No. 4 State estimation of linear dynamic system with unknown input and uncertain observation using dynamic programming by Dariusz Janczak and Yuri Grishin Department

More information

A SQUARE ROOT ALGORITHM FOR SET THEORETIC STATE ESTIMATION

A SQUARE ROOT ALGORITHM FOR SET THEORETIC STATE ESTIMATION A SQUARE ROO ALGORIHM FOR SE HEOREIC SAE ESIMAION U D Hanebec Institute of Automatic Control Engineering echnische Universität München 80290 München, Germany fax: +49-89-289-28340 e-mail: UweHanebec@ieeeorg

More information