A Study of Covariances within Basic and Extended Kalman Filters


A Study of Covariances within Basic and Extended Kalman Filters

David Wheeler, Kyle Ingersoll

December 2, 2013

Abstract

This paper explores the role of covariance in the context of Kalman filters. The underlying principles of both the basic and extended Kalman filter are discussed, and the equations used to implement these filters are given. Both a linear and a non-linear estimation scenario are presented, along with details for modelling these scenarios and implementing a basic and an extended Kalman filter in MATLAB. A basic Monte Carlo simulation is performed for each scenario to model how the robot would move. It is shown that in the absence of measurements both the analytical covariance (Kalman equations only) and the experimental covariance (Monte Carlo simulation results) increase linearly with time. It is further shown that these two covariances match each other closely for both the linear and non-linear cases. It is also shown that the analytical covariance of the Kalman filter can converge in the linear scenario under certain conditions. Results of our simulations are plotted and interpreted.

I. INTRODUCTION

The Kalman filter is a popular algorithm used to estimate the current state of a given system by optimally weighting the information from a model of the system against any available measurements. Theoretically there are infinitely many ways to estimate the state of the system using these two pieces of information; however, the Kalman filter is unique because it minimizes the mean square error of the estimate for linear problems. Methods have been developed to extend the Kalman filter to non-linear problems by locally linearizing the problem; this modification is known as the extended Kalman filter (EKF). The Kalman filter can be broken down into two basic steps. The first is the predict step, in which the algorithm makes a prediction of the system's current state using only the information provided by the model and the estimate from the previous step.
This is followed by the update step, in which the algorithm uses any available measurements to modify the a priori prediction. The algorithm calculates the Kalman gain K_k, which determines how much weight is given to the model versus how much weight is given to the measurement. All of the notation and equations used throughout this paper come from [1]. Central to determining K_k are P_k and R_k: P_k is an estimate of the covariance of the error of the state estimate at time step k and is calculated during each iteration of the algorithm; R_k is the covariance of the measurement noise and is often treated as a constant R. K_k then becomes a ratio of certainties: how certain we are of the estimate of the current state of our system versus how certain we are of our measurements. This paper aims to: 1) explore how P_k varies over time, and 2) explore how P_k compares with an empirically derived value of the system's covariance. Throughout this paper, we will refer to P_k as the analytical covariance. In order to understand how well this theoretical estimate models the underlying process, we conducted a Monte Carlo simulation of the process, performing many runs through the same Kalman filter scenario; each run uses random noise values drawn from the appropriate normal distributions. The mean of these simulated runs corresponds to the estimated state of the system, x̂_k, and the covariance of the simulated states corresponds to the analytical covariance P_k. Throughout this paper, we will refer to the covariance of the simulated data as the empirical or experimental covariance.

II. BACKGROUND

A. Notation

The Kalman filter is designed to work for problems that can be formulated as the following linear difference equation

x_k = A x_{k-1} + B u_{k-1} + w_{k-1}

where w_k represents zero-mean, normally distributed process noise with covariance Q.
Measurements providing information about the state are represented by

z_k = H x_k + ν_k

where ν_k represents zero-mean, normally distributed measurement noise with covariance R. The Kalman filter consists of two distinct steps: a predict step and an update step. Continuing with the previously defined notation, and as defined in [1], the predict step is given by

x̂_k⁻ = A x̂_{k-1} + B u_{k-1}    (1)

P_k⁻ = A P_{k-1} Aᵀ + Q    (2)

where the superscript ⁻ denotes the a priori prediction. The update step is given by

K_k = P_k⁻ Hᵀ (H P_k⁻ Hᵀ + R)⁻¹    (3)

x̂_k = x̂_k⁻ + K_k (z_k - H x̂_k⁻)    (4)

P_k = (I - K_k H) P_k⁻    (5)

B. Predict Step Derivation

A full derivation of the Kalman filter will not be given here; however, the derivation of the predict step, Eqns. 1 and 2, is given below. In the absence of additional information, the expected value is the optimal estimator (in the MSE sense). Hence,

x̂_k⁻ = E[x_k] = E[A x_{k-1} + B u_{k-1} + w_{k-1}] = A E[x_{k-1}] + B E[u_{k-1}] + E[w_{k-1}] = A x̂_{k-1} + B u_{k-1}

If we define e_k to be the error of our a posteriori estimate and e_k⁻ to be the error of our a priori estimate, then it follows that

e_k = x_k - x̂_k

and

e_k⁻ = x_k - x̂_k⁻ = A(x_{k-1} - x̂_{k-1}) + B(u_{k-1} - u_{k-1}) + w_{k-1} = A e_{k-1} + w_{k-1}

since the input u_{k-1} is known exactly.
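Eqns. 1 through 5 can be collected into a short sketch. This is our own illustration in NumPy (the paper's implementation was in MATLAB); the helper names predict and update are ours, while the matrix names follow the paper's notation.

```python
# Sketch of the Kalman filter equations (Eqns. 1-5), NumPy assumed.
import numpy as np

def predict(x_hat, P, A, B, u, Q):
    """A priori estimate, Eqns. 1 and 2."""
    x_prior = A @ x_hat + B @ u               # Eqn. 1
    P_prior = A @ P @ A.T + Q                 # Eqn. 2
    return x_prior, P_prior

def update(x_prior, P_prior, z, H, R):
    """A posteriori estimate, Eqns. 3-5."""
    S = H @ P_prior @ H.T + R                 # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)      # Eqn. 3, Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)  # Eqn. 4
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior  # Eqn. 5
    return x_post, P_post
```

Note how K acts as the "ratio of certainties" described above: a large P_prior relative to R pushes K toward I (trust the measurement), while a small P_prior pushes K toward zero (trust the model).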

With the assumption that the process noise w_k is independent of the state x_k, it follows that the a priori covariance estimate P_k⁻ should be

P_k⁻ = E[e_k⁻ e_k⁻ᵀ] = E[(A e_{k-1} + w_{k-1})(A e_{k-1} + w_{k-1})ᵀ] = A E[e_{k-1} e_{k-1}ᵀ] Aᵀ + E[w_{k-1} w_{k-1}ᵀ] = A P_{k-1} Aᵀ + Q

C. Experimental Setup

To explore these aspects of the Kalman filter, we designed two basic estimation scenarios, one using a basic (i.e. linear) Kalman filter and one using the EKF. The basic Kalman filter scenario represents a lost-robot situation: there is some uncertainty in the initial position of the robot, but as more measurements are taken we become increasingly certain of its global position. The EKF scenario is fundamentally different, in that we know the exact initial position of the robot but become more uncertain of its position over time in the absence of measurements.

D. Basic Kalman Filter Scenario

The following scenario was used in implementing a basic Kalman filter. A robot moves in the X-Y plane with the following input controls

u_k = [x_step, y_step]ᵀ

where x_step and y_step both have a value of 1 for each k. Thus, we are telling the robot to travel in a linear path, moving one positive unit in the x and y directions during each time step. The state of the robot can be expressed as

x_k = [x, y]ᵀ

or in other words by its global x and y coordinates. The robot motion can be modeled by Eqn. 1, where A and B are both the 2×2 identity matrix and the process noise is distributed as

p(w_k) = N(0, Q)

where Q is the covariance of the process noise of the system. In each simulation run, the true path of the robot was constructed by moving it forward one unit in both the x and y directions and then adding white noise w_k to each step. The following scheme was used to make simulated measurements: at each step along the true path, a measurement is created by adding white noise v_k, with

p(v_k) = N(0, R),

to the true position. This measurement scheme is similar to that of GPS measurements.
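The true-path and measurement scheme just described can be sketched as follows (NumPy assumed; the specific noise levels below are illustrative placeholders):

```python
# True path: one unit in x and y per step, plus process noise w_k.
# Measurements: true position plus measurement noise v_k (GPS-like).
import numpy as np

rng = np.random.default_rng(1)
Q = 0.01 * np.eye(2)        # illustrative process noise covariance
R = 0.01 * np.eye(2)        # illustrative measurement noise covariance
steps = 10

truth = np.zeros((steps + 1, 2))            # true path, starting at origin
for k in range(1, steps + 1):
    w = rng.multivariate_normal(np.zeros(2), Q)
    truth[k] = truth[k - 1] + np.array([1.0, 1.0]) + w   # u_k = [1, 1]

# One simulated measurement per step along the true path.
meas = truth[1:] + rng.multivariate_normal(np.zeros(2), R, size=steps)
```

Running many such draws (the Monte Carlo simulation described in the introduction) gives the cloud of states whose sample covariance is compared with P_k in Section III.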
Because the measurements are global, the H matrix is the 2×2 identity matrix. Finally, the P matrix is seeded as P_0 = Q, indicating that there is some uncertainty in the initial position of the robot. Two scenarios were explored through simulation. The first was run with the predict step only, using Eqns. 1 and 2. After each time step, the analytical covariance given by the P matrix and the simulated covariance of the data points were plotted. The second was run with both the predict and update steps, the update step being calculated with Eqns. 3, 4, and 5. Due to the Gaussian noise assumption made in the derivation of the Kalman filter, the distribution of the filter's output is given by N(x_k, P_k).

E. Extended Kalman Filter Scenario

A similar but more sophisticated scenario is used in implementing an extended Kalman filter. A robot can move in the X-Y plane with the following input controls

u = [D_f, Δθ]ᵀ

where D_f represents the distance that the robot moves forward and Δθ represents the change in heading angle (with positive clockwise rotations). The state of the robot can be expressed as

x = [x_loc, y_loc, θ]ᵀ

where x_loc and y_loc are the coordinates of the vehicle in the X-Y plane and θ is the robot heading relative to East (+x). The robot motion can be modelled with the following difference equation

x[k] = f(x[k-1], u[k-1], w[k-1])

where w[k] represents the process noise and is also zero-mean with covariance Q.
The non-linear function f is approximated by

f(x, u) = [ x + D_f cos(θ + Δθ/2),  y + D_f sin(θ + Δθ/2),  θ + Δθ ]ᵀ

The Jacobian of f with respect to the state x (used to determine the Kalman gain in an EKF) is

A = ∂f/∂x = [ 1  0  -D_f sin(θ + Δθ/2)
              0  1   D_f cos(θ + Δθ/2)
              0  0   1 ]

The Jacobian of f with respect to the input u (also used to determine the Kalman gain in an EKF) is

B = ∂f/∂u = [ cos(θ + Δθ/2)  -(D_f/2) sin(θ + Δθ/2)
              sin(θ + Δθ/2)   (D_f/2) cos(θ + Δθ/2)
              0                1 ]

A beacon location is defined in the X-Y plane as B_xy = [x_b, y_b]ᵀ, allowing for a distance measurement at time step k defined by

z_k = sqrt( (x_loc,k - x_b)² + (y_loc,k - y_b)² ) + ν_k

where ν_k represents the measurement noise. A series of N inputs is provided to the system, creating a theoretical path (in the absence of process noise Q). With the specified model noise added to the system, the true robot motion can be modelled. At the final time step N, a distance measurement is received and the EKF is updated. As with the basic Kalman filter scenario, we explore how P_k varies with time and how it compares with an experimentally determined covariance. This scenario uses ideas presented in [2].

III. NUMERICAL RESULTS

A. Basic Kalman Filter: Predict Step Only

The basic Kalman filter running the predict step only was run through 10 time steps. A large process noise Q = 0.01·I was used in this simulation in order to better show how the errors from the process uncertainty evolve over time. Figure 1 shows the results of this simulation. We see that the covariance of the filter output grows in a linear manner when only the predict step is run. The propagation of the error, step after step, increases our uncertainty as to the exact

position of the robot. Figure 2 plots the empirical covariance in red and the analytical covariance in blue for each time step. We see that the empirical and analytical covariances match closely and that they grow linearly over time. Thus, a Kalman filter without the update step does not force the covariance to converge; rather, the covariance continues to grow as time progresses.

Fig. 1. The outputs of the predict step of a basic Kalman filter. The range of the distribution grows with each time step, indicating increasing uncertainty in the robot's global position.

Fig. 2. The experimental (red) and analytical (blue) estimates of the covariance of the Kalman filter's outputs. This plot shows that the covariance grows linearly over time.

Figure 3 shows a closer view of a single time step in this simulation. This closer view better shows individual filter outputs (dots) along with the visual representations of the analytical and empirical covariances. The covariances are represented by ellipses whose major and minor radii are the square roots of the eigenvalues, i.e. the ellipses are 1-σ confidence intervals on the state of the system. This figure shows that the empirical and analytical covariances match very closely, though not exactly. The empirical covariance ellipses use the empirical means as their centers, whereas the analytical covariance ellipses use the perfect path points (i.e. (1, 1), (2, 2), etc.) as their centers. Thus we see that the empirical and analytical covariances have slightly different radii and are slightly shifted with respect to each other. Despite the small differences, we can see that the covariance calculated by the Kalman filter algorithm is very similar to the empirical covariance produced by our simulations.

Fig. 3. [Zoomed-in view of Figure 1] The filter's outputs are plotted here along with visual representations of the analytical (red, solid line) and empirical (cyan dots) covariances. Although the analytical and empirical covariances have slightly different values and their means are shifted, this figure clearly shows that the covariance calculated by the Kalman filter closely matches the covariance obtained through our simulation.

B. Basic Kalman Filter: Predict and Update Steps

After exploring the predict step, measurements were fed into the basic Kalman filter and the update step was performed. A more realistic process noise Q = 10⁻⁶·I was used; if the process noise Q is too large, the Kalman filter loses too much information each time step and will ultimately diverge. R was set to 0.01. Figure 4 shows the output of the basic Kalman filter over 500 runs and ten time steps. This figure is notable because it shows that the covariance of the filter output reaches a steady-state value after a small number of time steps. Indeed, the filter output appears to become more tightly distributed about the mean value after each time step. The large pink ellipse surrounding the first time step reflects the fact that we seeded the P_k matrix with a large value (P_0 = I) compared to the Q and R values of the simulation. In terms of our simulation, this means that we become increasingly certain of the robot's true global position with each additional time step, even though we were uncertain of its initial position.

Fig. 4. The outputs of a fully-implemented basic Kalman filter. Notice that the covariance of the filter's outputs appears to decrease over time, i.e. we become more and more certain of the robot's true position.
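The two behaviors reported in Sections III-A and III-B can be reproduced with a scalar sketch of the covariance recursion. This is our own illustration with A = H = 1, using the noise levels quoted above; it is not the paper's 2-D simulation.

```python
# Predict only (Section III-A level, Q = 0.01, P_0 = Q):
# Eqn. 2 with A = 1 reduces to P_k = P_{k-1} + Q, so P grows linearly.
Q = 0.01
P = Q                              # seeded as P_0 = Q
for k in range(10):
    P = P + Q                      # Eqn. 2 with A = 1
assert abs(P - 11 * Q) < 1e-12     # P_10 = P_0 + 10 Q: linear growth

# Predict and update (Section III-B levels, Q = 1e-6, R = 0.01, P_0 = 1):
# the covariance collapses from the large seed and reaches a steady state.
Q, R, P = 1e-6, 1e-2, 1.0
trace = []
for k in range(500):
    P_prior = P + Q                # Eqn. 2 with A = 1
    K = P_prior / (P_prior + R)    # Eqn. 3 with H = 1
    P = (1.0 - K) * P_prior        # Eqn. 5
    trace.append(P)
# P drops sharply within the first few steps, then settles near its
# steady-state value of roughly sqrt(Q * R) ~ 1e-4.
```

The steady state is the fixed point of the recursion; solving P = (P + Q)R / (P + Q + R) for small Q gives P ≈ sqrt(QR), which is why a tiny process noise paired with measurements yields a small but non-zero limiting covariance.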

Figure 5 plots the analytical and empirical covariances of this second simulation. This figure shows that the covariances quickly converge to a steady-state value; in this simulation, both the analytical and empirical covariances had almost reached their steady-state values after only ten time steps. While Figure 5 shows convergence of the covariance to a steady-state value, a second kind of convergence is present: Figure 6 shows that the analytical and empirical covariances quickly converge to the same value, even when the algorithm is seeded with a large P_0, as was the case in our simulation. In this lost-robot scenario, we see that the Kalman filter quickly maximizes our certainty of the robot's position, even if we were uncertain of the robot's initial position.

Fig. 5. The experimental (red) and analytical (blue) estimates of the covariance of the Kalman filter's outputs. This plot shows that the covariance of the error of the Kalman filter's outputs quickly reaches a steady-state value.

Fig. 6. The absolute value of the difference between the analytical and empirical covariances of the output of the basic Kalman filter, run over 20 time steps. This plot shows that the analytical and empirical covariances quickly converge to the same value independent of the seeding of P_0.

Figure 7 provides a closer view of one of the Kalman filter's time steps. The analytical and empirical covariances are also plotted. Although the covariances do not agree exactly and their centers are shifted, this figure still shows that the analytical covariance calculated by the Kalman filter algorithm matches very closely the covariance that would be seen in a Monte Carlo simulation.

Fig. 7. [Zoomed-in view of Figure 4] The filter's outputs are plotted here along with visual representations of the analytical (pink, solid line) and experimental (green dots) covariances. The covariances are represented by ellipses whose major and minor radii are the square roots of the eigenvalues, i.e. the ellipses are 1-σ confidence intervals on the state of the system. This figure shows that the analytical and empirical covariances are in agreement.

C. Extended Kalman Filter

For the EKF scenario we propagated the filter through 30 time steps with 500 iterations in our Monte Carlo simulation. We defined the input to be u_k = [0.02, ]ᵀ such that the robot travels 3/8 of the way around a circle. We initialized our filter such that x_0 = [0, 0, 0]ᵀ and P_0 = 0, implying 100% confidence that our initial position is at the origin. Rather than implicitly defining Q, we applied an input noise γ_k, with p(γ_k) = N(0, 0.05·diag(u_k)), implying that each input has a 5% uncertainty. Figure 8 shows the results of the EKF simulation. At each time step 500 points are plotted, each indicating a possible state given the input noise. As before, the covariance of this experimental data is plotted alongside the analytical covariance for comparison purposes. Similar to the basic Kalman filter scenario, in the absence of any measurements (and therefore update steps) the covariance grows with each time step.

Fig. 8. Results of the EKF scenario. The possible states at each of the 30 time steps are represented by the alternately colored clusters. The cyan ellipses represent the covariance at each state. The red circle at the top indicates the location of the beacon.

Figure 9 helps illustrate the type of information that a beacon distance measurement provides at the final time step. Seven potential measurements are visualized by the green lines (the distance between the beacon and the simulated robot location at time step N). Each measurement has additive Gaussian noise associated with it, represented by the red x's.
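A condensed sketch of this scenario, assuming NumPy: propagate the covariance with the Jacobians of Section II-E, mapping the input noise through B, then apply a single range update at the end. The beacon position, input, and noise values here are illustrative stand-ins, not the paper's exact numbers.

```python
# One EKF cycle for the wandering-robot scenario: 30 predict steps
# followed by a single beacon-range update. Values are illustrative.
import numpy as np

def f(x, u):                                  # motion model, Section II-E
    Df, dth = u
    a = x[2] + dth / 2.0                      # midpoint heading
    return np.array([x[0] + Df * np.cos(a),
                     x[1] + Df * np.sin(a),
                     x[2] + dth])

def jac_A(x, u):                              # df/dx
    Df, dth = u
    a = x[2] + dth / 2.0
    return np.array([[1.0, 0.0, -Df * np.sin(a)],
                     [0.0, 1.0,  Df * np.cos(a)],
                     [0.0, 0.0,  1.0]])

def jac_B(x, u):                              # df/du
    Df, dth = u
    a = x[2] + dth / 2.0
    return np.array([[np.cos(a), -Df / 2.0 * np.sin(a)],
                     [np.sin(a),  Df / 2.0 * np.cos(a)],
                     [0.0,        1.0]])

def range_H(x, beacon):                       # Jacobian of the range measurement
    d = np.hypot(x[0] - beacon[0], x[1] - beacon[1])
    return np.array([[(x[0] - beacon[0]) / d, (x[1] - beacon[1]) / d, 0.0]])

beacon = np.array([0.0, 10.0])                # illustrative beacon location
x, P = np.zeros(3), np.zeros((3, 3))          # x_0 = [0, 0, 0], P_0 = 0
u = np.array([0.5, 0.1])                      # illustrative input
U = 0.05 * np.diag(np.abs(u))                 # input noise (the 5% scheme)

for k in range(30):                           # predict only, no measurements
    A, B = jac_A(x, u), jac_B(x, u)
    P = A @ P @ A.T + B @ U @ B.T             # input noise mapped through B
    x = f(x, u)

d = np.hypot(x[0] - beacon[0], x[1] - beacon[1])
z = np.array([d + 0.1])                       # a slightly-off range reading
H = range_H(x, beacon)
R = np.array([[0.05]])
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Eqn. 3, linearized
x = x + (K @ (z - d)).ravel()                 # Eqn. 4
P = (np.eye(3) - K @ H) @ P                   # Eqn. 5
```

Because the Jacobian H points along the robot-to-beacon direction, the update shrinks the uncertainty along that direction while the perpendicular direction is left essentially untouched, which is the behavior visible in Figure 10.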
Figure 10 shows the comparison between the analytical and experimental covariances prior to the update (P_k⁻) as well as after the update step (P_k). It can clearly be seen that both the a priori and a posteriori analytical covariances determined by the EKF do an excellent job of estimating the underlying covariances. As expected, the update step reduces the uncertainty in the direction of the measurement while leaving the uncertainty in the direction perpendicular to the measurement unchanged.

Fig. 9. Measurement illustration. As before, the experimental state of the filter (blue dots), the analytical and experimental covariances (cyan, thin and thick), and the beacon location (red circle) are plotted. Seven potential measurements are visualized by the green lines. Each measurement has additive Gaussian noise associated with it, represented by the red x's.

Fig. 10. The a posteriori covariances (analytical and experimental) of the final time step are plotted in magenta (thin and thick). As before, the a priori covariances (analytical and experimental) of the final time step are plotted in cyan (thin and thick).

IV. CONCLUSIONS

We were able to make the following observations and conclusions as a result of our research. First, during the predict step, the propagation of process noise results in linearly growing covariances and increasingly large uncertainty about the current state of the system. Second, under certain conditions a Kalman filter will cause the covariance to converge to a steady-state value.¹ Third, the analytical covariance calculated by the Kalman filter algorithm accurately predicts what the covariance will be over many simulated runs. Fourth, the analytical and empirical covariances will quickly converge to the same value even if P_k is seeded with a large value.

In terms of our lost-robot and wandering-robot scenarios, we can draw the following conclusions. First, by taking measurement information into account, a Kalman filter can quickly home in on the system's true state (within a degree of uncertainty) even if the system's initial state is unknown. Second, as an unmeasured system evolves over time, the uncertainty surrounding its current state will continue to increase; however, if a measurement is taken at some arbitrary time in the future, the Kalman filter can use that single measurement to significantly reduce the covariance of the state estimate.

This project could be extended in several different ways. One such direction would be to study how changing the R value (the covariance of the measurement noise) affects the accuracy of the filter outputs. If modifying the R value increases the accuracy of the filter, we could research different methods of dynamically assigning R values depending on the environment of our system. For example, if an unmanned air vehicle (UAV) carried an array of sensors, it might have limited information on how well each sensor performs in different environments. If the UAV enters an environment for which it does not have information about how one of its sensors should perform, it could compare that sensor's output to the output of the Kalman filter (which would utilize all of the other sensors' measurements) and dynamically adjust that sensor's R value depending on how closely its measurement matched the output of the Kalman filter. A thorough understanding of the underlying stochastic properties of the Kalman filter, as outlined in this paper, is critical for exploration in this exciting field.

¹ The error distributions outlined in our scenarios provide an example of a convergent case. If the process noise is too great, or the measurements do not provide enough information, the solution will diverge. Defining the limits for convergence is outside the scope of this paper.

REFERENCES

[1] G. Welch and G. Bishop. (2006, July 24). An Introduction to the Kalman Filter [Online]. Available: welch/media/pdf/kalman_intro.pdf

[2] E. Kiriy and M. Buehler. (2002, April 12). Three-state Extended Kalman Filter for Mobile Robot Localization [Online]. Available: filtering/ekf-3state.pdf


More information

Kalman Filter Computer Vision (Kris Kitani) Carnegie Mellon University

Kalman Filter Computer Vision (Kris Kitani) Carnegie Mellon University Kalman Filter 16-385 Computer Vision (Kris Kitani) Carnegie Mellon University Examples up to now have been discrete (binary) random variables Kalman filtering can be seen as a special case of a temporal

More information

A Comparison of the EKF, SPKF, and the Bayes Filter for Landmark-Based Localization

A Comparison of the EKF, SPKF, and the Bayes Filter for Landmark-Based Localization A Comparison of the EKF, SPKF, and the Bayes Filter for Landmark-Based Localization and Timothy D. Barfoot CRV 2 Outline Background Objective Experimental Setup Results Discussion Conclusion 2 Outline

More information

Nonlinear State Estimation! Particle, Sigma-Points Filters!

Nonlinear State Estimation! Particle, Sigma-Points Filters! Nonlinear State Estimation! Particle, Sigma-Points Filters! Robert Stengel! Optimal Control and Estimation, MAE 546! Princeton University, 2017!! Particle filter!! Sigma-Points Unscented Kalman ) filter!!

More information

Computer Vision Group Prof. Daniel Cremers. 14. Sampling Methods

Computer Vision Group Prof. Daniel Cremers. 14. Sampling Methods Prof. Daniel Cremers 14. Sampling Methods Sampling Methods Sampling Methods are widely used in Computer Science as an approximation of a deterministic algorithm to represent uncertainty without a parametric

More information

Vlad Estivill-Castro (2016) Robots for People --- A project for intelligent integrated systems

Vlad Estivill-Castro (2016) Robots for People --- A project for intelligent integrated systems 1 Vlad Estivill-Castro (2016) Robots for People --- A project for intelligent integrated systems V. Estivill-Castro 2 Uncertainty representation Localization Chapter 5 (textbook) What is the course about?

More information

Computer Vision Group Prof. Daniel Cremers. 11. Sampling Methods

Computer Vision Group Prof. Daniel Cremers. 11. Sampling Methods Prof. Daniel Cremers 11. Sampling Methods Sampling Methods Sampling Methods are widely used in Computer Science as an approximation of a deterministic algorithm to represent uncertainty without a parametric

More information

Data Fusion Kalman Filtering Self Localization

Data Fusion Kalman Filtering Self Localization Data Fusion Kalman Filtering Self Localization Armando Jorge Sousa http://www.fe.up.pt/asousa asousa@fe.up.pt Faculty of Engineering, University of Porto, Portugal Department of Electrical and Computer

More information

The Kalman Filter ImPr Talk

The Kalman Filter ImPr Talk The Kalman Filter ImPr Talk Ged Ridgway Centre for Medical Image Computing November, 2006 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman

More information

Autonomous Navigation for Flying Robots

Autonomous Navigation for Flying Robots Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 6.2: Kalman Filter Jürgen Sturm Technische Universität München Motivation Bayes filter is a useful tool for state

More information

Localización Dinámica de Robots Móviles Basada en Filtrado de Kalman y Triangulación

Localización Dinámica de Robots Móviles Basada en Filtrado de Kalman y Triangulación Universidad Pública de Navarra 13 de Noviembre de 2008 Departamento de Ingeniería Mecánica, Energética y de Materiales Localización Dinámica de Robots Móviles Basada en Filtrado de Kalman y Triangulación

More information

Introduction to Unscented Kalman Filter

Introduction to Unscented Kalman Filter Introduction to Unscented Kalman Filter 1 Introdution In many scientific fields, we use certain models to describe the dynamics of system, such as mobile robot, vision tracking and so on. The word dynamics

More information

State Estimation using Gaussian Process Regression for Colored Noise Systems

State Estimation using Gaussian Process Regression for Colored Noise Systems State Estimation using Gaussian Process Regression for Colored Noise Systems Kyuman Lee School of Aerospace Engineering Georgia Institute of Technology Atlanta, GA 30332 404-422-3697 lee400@gatech.edu

More information

UAV Navigation: Airborne Inertial SLAM

UAV Navigation: Airborne Inertial SLAM Introduction UAV Navigation: Airborne Inertial SLAM Jonghyuk Kim Faculty of Engineering and Information Technology Australian National University, Australia Salah Sukkarieh ARC Centre of Excellence in

More information

Nonlinear Parameter Estimation for State-Space ARCH Models with Missing Observations

Nonlinear Parameter Estimation for State-Space ARCH Models with Missing Observations Nonlinear Parameter Estimation for State-Space ARCH Models with Missing Observations SEBASTIÁN OSSANDÓN Pontificia Universidad Católica de Valparaíso Instituto de Matemáticas Blanco Viel 596, Cerro Barón,

More information

Optimization-Based Control

Optimization-Based Control Optimization-Based Control Richard M. Murray Control and Dynamical Systems California Institute of Technology DRAFT v1.7a, 19 February 2008 c California Institute of Technology All rights reserved. This

More information

The Kalman Filter. Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience. Sarah Dance

The Kalman Filter. Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience. Sarah Dance The Kalman Filter Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience Sarah Dance School of Mathematical and Physical Sciences, University of Reading s.l.dance@reading.ac.uk July

More information

Consistent Triangulation for Mobile Robot Localization Using Discontinuous Angular Measurements

Consistent Triangulation for Mobile Robot Localization Using Discontinuous Angular Measurements Seminar on Mechanical Robotic Systems Centre for Intelligent Machines McGill University Consistent Triangulation for Mobile Robot Localization Using Discontinuous Angular Measurements Josep M. Font Llagunes

More information

CSE 483: Mobile Robotics. Extended Kalman filter for localization(worked out example)

CSE 483: Mobile Robotics. Extended Kalman filter for localization(worked out example) DRAFT a final version will be posted shortly CSE 483: Mobile Robotics Lecture by: Prof. K. Madhava Krishna Lecture # 4 Scribe: Dhaivat Bhatt, Isha Dua Date: 14th November, 216 (Monday) Extended Kalman

More information

Conditions for successful data assimilation

Conditions for successful data assimilation Conditions for successful data assimilation Matthias Morzfeld *,**, Alexandre J. Chorin *,**, Peter Bickel # * Department of Mathematics University of California, Berkeley ** Lawrence Berkeley National

More information

Every real system has uncertainties, which include system parametric uncertainties, unmodeled dynamics

Every real system has uncertainties, which include system parametric uncertainties, unmodeled dynamics Sensitivity Analysis of Disturbance Accommodating Control with Kalman Filter Estimation Jemin George and John L. Crassidis University at Buffalo, State University of New York, Amherst, NY, 14-44 The design

More information

SLAM Techniques and Algorithms. Jack Collier. Canada. Recherche et développement pour la défense Canada. Defence Research and Development Canada

SLAM Techniques and Algorithms. Jack Collier. Canada. Recherche et développement pour la défense Canada. Defence Research and Development Canada SLAM Techniques and Algorithms Jack Collier Defence Research and Development Canada Recherche et développement pour la défense Canada Canada Goals What will we learn Gain an appreciation for what SLAM

More information

Nonlinear Estimation Techniques for Impact Point Prediction of Ballistic Targets

Nonlinear Estimation Techniques for Impact Point Prediction of Ballistic Targets Nonlinear Estimation Techniques for Impact Point Prediction of Ballistic Targets J. Clayton Kerce a, George C. Brown a, and David F. Hardiman b a Georgia Tech Research Institute, Georgia Institute of Technology,

More information

Lecture 5: Control Over Lossy Networks

Lecture 5: Control Over Lossy Networks Lecture 5: Control Over Lossy Networks Yilin Mo July 2, 2015 1 Classical LQG Control The system: x k+1 = Ax k + Bu k + w k, y k = Cx k + v k x 0 N (0, Σ), w k N (0, Q), v k N (0, R). Information available

More information

RELATIVE NAVIGATION FOR SATELLITES IN CLOSE PROXIMITY USING ANGLES-ONLY OBSERVATIONS

RELATIVE NAVIGATION FOR SATELLITES IN CLOSE PROXIMITY USING ANGLES-ONLY OBSERVATIONS (Preprint) AAS 12-202 RELATIVE NAVIGATION FOR SATELLITES IN CLOSE PROXIMITY USING ANGLES-ONLY OBSERVATIONS Hemanshu Patel 1, T. Alan Lovell 2, Ryan Russell 3, Andrew Sinclair 4 "Relative navigation using

More information

Simultaneous Localization and Mapping (SLAM) Corso di Robotica Prof. Davide Brugali Università degli Studi di Bergamo

Simultaneous Localization and Mapping (SLAM) Corso di Robotica Prof. Davide Brugali Università degli Studi di Bergamo Simultaneous Localization and Mapping (SLAM) Corso di Robotica Prof. Davide Brugali Università degli Studi di Bergamo Introduction SLAM asks the following question: Is it possible for an autonomous vehicle

More information

Robotics. Mobile Robotics. Marc Toussaint U Stuttgart

Robotics. Mobile Robotics. Marc Toussaint U Stuttgart Robotics Mobile Robotics State estimation, Bayes filter, odometry, particle filter, Kalman filter, SLAM, joint Bayes filter, EKF SLAM, particle SLAM, graph-based SLAM Marc Toussaint U Stuttgart DARPA Grand

More information

UAVBook Supplement Full State Direct and Indirect EKF

UAVBook Supplement Full State Direct and Indirect EKF UAVBook Supplement Full State Direct and Indirect EKF Randal W. Beard March 14, 217 This supplement will explore alternatives to the state estimation scheme presented in the book. In particular, we will

More information

Lagrangian Data Assimilation and Manifold Detection for a Point-Vortex Model. David Darmon, AMSC Kayo Ide, AOSC, IPST, CSCAMM, ESSIC

Lagrangian Data Assimilation and Manifold Detection for a Point-Vortex Model. David Darmon, AMSC Kayo Ide, AOSC, IPST, CSCAMM, ESSIC Lagrangian Data Assimilation and Manifold Detection for a Point-Vortex Model David Darmon, AMSC Kayo Ide, AOSC, IPST, CSCAMM, ESSIC Background Data Assimilation Iterative process Forecast Analysis Background

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation

More information

Statistical Filtering and Control for AI and Robotics. Part II. Linear methods for regression & Kalman filtering

Statistical Filtering and Control for AI and Robotics. Part II. Linear methods for regression & Kalman filtering Statistical Filtering and Control for AI and Robotics Part II. Linear methods for regression & Kalman filtering Riccardo Muradore 1 / 66 Outline Linear Methods for Regression Gaussian filter Stochastic

More information

ROBOTICS 01PEEQW. Basilio Bona DAUIN Politecnico di Torino

ROBOTICS 01PEEQW. Basilio Bona DAUIN Politecnico di Torino ROBOTICS 01PEEQW Basilio Bona DAUIN Politecnico di Torino Probabilistic Fundamentals in Robotics Gaussian Filters Course Outline Basic mathematical framework Probabilistic models of mobile robots Mobile

More information

Image Alignment and Mosaicing Feature Tracking and the Kalman Filter

Image Alignment and Mosaicing Feature Tracking and the Kalman Filter Image Alignment and Mosaicing Feature Tracking and the Kalman Filter Image Alignment Applications Local alignment: Tracking Stereo Global alignment: Camera jitter elimination Image enhancement Panoramic

More information

Extended Kalman Filter Tutorial

Extended Kalman Filter Tutorial Extended Kalman Filter Tutorial Gabriel A. Terejanu Department of Computer Science and Engineering University at Buffalo, Buffalo, NY 14260 terejanu@buffalo.edu 1 Dynamic process Consider the following

More information

Marginal density. If the unknown is of the form x = (x 1, x 2 ) in which the target of investigation is x 1, a marginal posterior density

Marginal density. If the unknown is of the form x = (x 1, x 2 ) in which the target of investigation is x 1, a marginal posterior density Marginal density If the unknown is of the form x = x 1, x 2 ) in which the target of investigation is x 1, a marginal posterior density πx 1 y) = πx 1, x 2 y)dx 2 = πx 2 )πx 1 y, x 2 )dx 2 needs to be

More information

Sensors Fusion for Mobile Robotics localization. M. De Cecco - Robotics Perception and Action

Sensors Fusion for Mobile Robotics localization. M. De Cecco - Robotics Perception and Action Sensors Fusion for Mobile Robotics localization 1 Until now we ve presented the main principles and features of incremental and absolute (environment referred localization systems) could you summarize

More information

Variational Autoencoders

Variational Autoencoders Variational Autoencoders Recap: Story so far A classification MLP actually comprises two components A feature extraction network that converts the inputs into linearly separable features Or nearly linearly

More information

Least squares: introduction to the network adjustment

Least squares: introduction to the network adjustment Least squares: introduction to the network adjustment Experimental evidence and consequences Observations of the same quantity that have been performed at the highest possible accuracy provide different

More information

Data assimilation in high dimensions

Data assimilation in high dimensions Data assimilation in high dimensions David Kelly Courant Institute New York University New York NY www.dtbkelly.com February 12, 2015 Graduate seminar, CIMS David Kelly (CIMS) Data assimilation February

More information

Distributed estimation in sensor networks

Distributed estimation in sensor networks in sensor networks A. Benavoli Dpt. di Sistemi e Informatica Università di Firenze, Italy. e-mail: benavoli@dsi.unifi.it Outline 1 An introduction to 2 3 An introduction to An introduction to In recent

More information

EE482: Digital Signal Processing Applications

EE482: Digital Signal Processing Applications Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu EE482: Digital Signal Processing Applications Spring 2014 TTh 14:30-15:45 CBC C222 Lecture 11 Adaptive Filtering 14/03/04 http://www.ee.unlv.edu/~b1morris/ee482/

More information

The Scaled Unscented Transformation

The Scaled Unscented Transformation The Scaled Unscented Transformation Simon J. Julier, IDAK Industries, 91 Missouri Blvd., #179 Jefferson City, MO 6519 E-mail:sjulier@idak.com Abstract This paper describes a generalisation of the unscented

More information

Lego NXT: Navigation and localization using infrared distance sensors and Extended Kalman Filter. Miguel Pinto, A. Paulo Moreira, Aníbal Matos

Lego NXT: Navigation and localization using infrared distance sensors and Extended Kalman Filter. Miguel Pinto, A. Paulo Moreira, Aníbal Matos Lego NXT: Navigation and localization using infrared distance sensors and Extended Kalman Filter Miguel Pinto, A. Paulo Moreira, Aníbal Matos 1 Resume LegoFeup Localization Real and simulated scenarios

More information

Quantitative Trendspotting. Rex Yuxing Du and Wagner A. Kamakura. Web Appendix A Inferring and Projecting the Latent Dynamic Factors

Quantitative Trendspotting. Rex Yuxing Du and Wagner A. Kamakura. Web Appendix A Inferring and Projecting the Latent Dynamic Factors 1 Quantitative Trendspotting Rex Yuxing Du and Wagner A. Kamakura Web Appendix A Inferring and Projecting the Latent Dynamic Factors The procedure for inferring the latent state variables (i.e., [ ] ),

More information

Basic Concepts in Data Reconciliation. Chapter 6: Steady-State Data Reconciliation with Model Uncertainties

Basic Concepts in Data Reconciliation. Chapter 6: Steady-State Data Reconciliation with Model Uncertainties Chapter 6: Steady-State Data with Model Uncertainties CHAPTER 6 Steady-State Data with Model Uncertainties 6.1 Models with Uncertainties In the previous chapters, the models employed in the DR were considered

More information

Lecture 3: Statistical sampling uncertainty

Lecture 3: Statistical sampling uncertainty Lecture 3: Statistical sampling uncertainty c Christopher S. Bretherton Winter 2015 3.1 Central limit theorem (CLT) Let X 1,..., X N be a sequence of N independent identically-distributed (IID) random

More information

Partially Observable Markov Decision Processes (POMDPs)

Partially Observable Markov Decision Processes (POMDPs) Partially Observable Markov Decision Processes (POMDPs) Sachin Patil Guest Lecture: CS287 Advanced Robotics Slides adapted from Pieter Abbeel, Alex Lee Outline Introduction to POMDPs Locally Optimal Solutions

More information

Bayesian Methods for Machine Learning

Bayesian Methods for Machine Learning Bayesian Methods for Machine Learning CS 584: Big Data Analytics Material adapted from Radford Neal s tutorial (http://ftp.cs.utoronto.ca/pub/radford/bayes-tut.pdf), Zoubin Ghahramni (http://hunch.net/~coms-4771/zoubin_ghahramani_bayesian_learning.pdf),

More information

L03. PROBABILITY REVIEW II COVARIANCE PROJECTION. NA568 Mobile Robotics: Methods & Algorithms

L03. PROBABILITY REVIEW II COVARIANCE PROJECTION. NA568 Mobile Robotics: Methods & Algorithms L03. PROBABILITY REVIEW II COVARIANCE PROJECTION NA568 Mobile Robotics: Methods & Algorithms Today s Agenda State Representation and Uncertainty Multivariate Gaussian Covariance Projection Probabilistic

More information

State Estimation of Linear and Nonlinear Dynamic Systems

State Estimation of Linear and Nonlinear Dynamic Systems State Estimation of Linear and Nonlinear Dynamic Systems Part I: Linear Systems with Gaussian Noise James B. Rawlings and Fernando V. Lima Department of Chemical and Biological Engineering University of

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing

More information

B4 Estimation and Inference

B4 Estimation and Inference B4 Estimation and Inference 6 Lectures Hilary Term 27 2 Tutorial Sheets A. Zisserman Overview Lectures 1 & 2: Introduction sensors, and basics of probability density functions for representing sensor error

More information

Evaluation of different wind estimation methods in flight tests with a fixed-wing UAV

Evaluation of different wind estimation methods in flight tests with a fixed-wing UAV Evaluation of different wind estimation methods in flight tests with a fixed-wing UAV Julian Sören Lorenz February 5, 2018 Contents 1 Glossary 2 2 Introduction 3 3 Tested algorithms 3 3.1 Unfiltered Method

More information

Relative Merits of 4D-Var and Ensemble Kalman Filter

Relative Merits of 4D-Var and Ensemble Kalman Filter Relative Merits of 4D-Var and Ensemble Kalman Filter Andrew Lorenc Met Office, Exeter International summer school on Atmospheric and Oceanic Sciences (ISSAOS) "Atmospheric Data Assimilation". August 29

More information

Lecture : Probabilistic Machine Learning

Lecture : Probabilistic Machine Learning Lecture : Probabilistic Machine Learning Riashat Islam Reasoning and Learning Lab McGill University September 11, 2018 ML : Many Methods with Many Links Modelling Views of Machine Learning Machine Learning

More information

Unscented Transformation of Vehicle States in SLAM

Unscented Transformation of Vehicle States in SLAM Unscented Transformation of Vehicle States in SLAM Juan Andrade-Cetto, Teresa Vidal-Calleja, and Alberto Sanfeliu Institut de Robòtica i Informàtica Industrial, UPC-CSIC Llorens Artigas 4-6, Barcelona,

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

Error reduction in GPS datum conversion using Kalman filter in diverse scenarios Swapna Raghunath 1, Malleswari B.L 2, Karnam Sridhar 3

Error reduction in GPS datum conversion using Kalman filter in diverse scenarios Swapna Raghunath 1, Malleswari B.L 2, Karnam Sridhar 3 INTERNATIONAL JOURNAL OF GEOMATICS AND GEOSCIENCES Volume 3, No 3, 2013 Copyright by the authors - Licensee IPA- Under Creative Commons license 3.0 Research article ISSN 0976 4380 Error reduction in GPS

More information

Robot Localization and Kalman Filters

Robot Localization and Kalman Filters Robot Localization and Kalman Filters Rudy Negenborn rudy@negenborn.net August 26, 2003 Outline Robot Localization Probabilistic Localization Kalman Filters Kalman Localization Kalman Localization with

More information

Computer Vision Group Prof. Daniel Cremers. 2. Regression (cont.)

Computer Vision Group Prof. Daniel Cremers. 2. Regression (cont.) Prof. Daniel Cremers 2. Regression (cont.) Regression with MLE (Rep.) Assume that y is affected by Gaussian noise : t = f(x, w)+ where Thus, we have p(t x, w, )=N (t; f(x, w), 2 ) 2 Maximum A-Posteriori

More information

Dynamic System Identification using HDMR-Bayesian Technique

Dynamic System Identification using HDMR-Bayesian Technique Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in

More information

Modern Navigation. Thomas Herring

Modern Navigation. Thomas Herring 12.215 Modern Navigation Thomas Herring Estimation methods Review of last class Restrict to basically linear estimation problems (also non-linear problems that are nearly linear) Restrict to parametric,

More information

L11. EKF SLAM: PART I. NA568 Mobile Robotics: Methods & Algorithms

L11. EKF SLAM: PART I. NA568 Mobile Robotics: Methods & Algorithms L11. EKF SLAM: PART I NA568 Mobile Robotics: Methods & Algorithms Today s Topic EKF Feature-Based SLAM State Representation Process / Observation Models Landmark Initialization Robot-Landmark Correlation

More information

Battery Level Estimation of Mobile Agents Under Communication Constraints

Battery Level Estimation of Mobile Agents Under Communication Constraints Battery Level Estimation of Mobile Agents Under Communication Constraints Jonghoek Kim, Fumin Zhang, and Magnus Egerstedt Electrical and Computer Engineering, Georgia Institute of Technology, USA jkim37@gatech.edu,fumin,

More information

VARIANCE COMPUTATION OF MODAL PARAMETER ES- TIMATES FROM UPC SUBSPACE IDENTIFICATION

VARIANCE COMPUTATION OF MODAL PARAMETER ES- TIMATES FROM UPC SUBSPACE IDENTIFICATION VARIANCE COMPUTATION OF MODAL PARAMETER ES- TIMATES FROM UPC SUBSPACE IDENTIFICATION Michael Döhler 1, Palle Andersen 2, Laurent Mevel 1 1 Inria/IFSTTAR, I4S, Rennes, France, {michaeldoehler, laurentmevel}@inriafr

More information

Integration of a strapdown gravimeter system in an Autonomous Underwater Vehicle

Integration of a strapdown gravimeter system in an Autonomous Underwater Vehicle Integration of a strapdown gravimeter system in an Autonomous Underwater Vehicle Clément ROUSSEL PhD - Student (L2G - Le Mans - FRANCE) April 17, 2015 Clément ROUSSEL ISPRS / CIPA Workshop April 17, 2015

More information

F denotes cumulative density. denotes probability density function; (.)

F denotes cumulative density. denotes probability density function; (.) BAYESIAN ANALYSIS: FOREWORDS Notation. System means the real thing and a model is an assumed mathematical form for the system.. he probability model class M contains the set of the all admissible models

More information