Using the Kalman Filter for SLAM AIMS 2015

Contents: trivial kinematics; a rapid sweep over localisation and mapping (the components of SLAM); basic EKF feature-based SLAM; feature types and representations; implementation; convergence properties; pose-based SLAM; sparse inverse pose-based formulation.

Vehicle Models - Prediction. [Diagram: truth model of a wheeled vehicle with wheelbase L, speed V and steer angle φ, pivoting about the instantaneous centre of rotation; pose (x, y, θ) expressed in the global reference frame; the control enters through V and φ.]

Noise enters via the control.

Effect of control noise on uncertainty:
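The slides give the truth model only as a diagram, so here is a minimal sketch of a kinematic bicycle-style vehicle of the kind pictured; the wheelbase value and the Euler integration are my assumptions, not from the course:

```python
import numpy as np

def bicycle_step(x, u, dt, L=1.5):
    """One Euler step of a kinematic bicycle truth model.
    x = [x, y, theta]; u = (V, phi) = (speed, steer angle).
    L is the wheelbase (illustrative value only)."""
    V, phi = u
    return x + dt * np.array([
        V * np.cos(x[2]),        # global-frame velocity components
        V * np.sin(x[2]),
        V * np.tan(phi) / L,     # turn rate about the instantaneous centre of rotation
    ])
```

Injecting noise into (V, phi) before integrating, rather than into (x, y, θ) directly, is what the slide means by the noise being in the control.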

Using Dead-Reckoned Data

Navigation Architecture. [Block diagram: the hidden vehicle physics (to be estimated) produce encoder counts; an onboard DR model supplies a dead-reckoned output, which drifts badly; the navigation system we install integrates this with other measurements to produce the required corrected navigation estimates.]

Background: T-Composition. Compounding transformations between frames 1, 2 and 3: x_{1,3} = x_{1,2} ⊕ x_{2,3}.

Just functions!
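For SE(2) poses written as (x, y, θ), the compounding operation really is just a function; a minimal sketch:

```python
import numpy as np

def compound(x_ab, x_bc):
    """x_ac = x_ab (+) x_bc: given the pose of frame b in frame a and of
    frame c in frame b, return the pose of frame c in frame a.
    Poses are [x, y, theta]."""
    xa, ya, ta = x_ab
    xb, yb, tb = x_bc
    return np.array([xa + xb * np.cos(ta) - yb * np.sin(ta),
                     ya + xb * np.sin(ta) + yb * np.cos(ta),
                     ta + tb])
```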

Deduce an Incremental Move. The dead-reckoned poses x_o(k-1) and x_o(k), expressed in the global frame, can be in massive error, but the common error is subtracted out when forming the incremental move u(k) = ⊖x_o(k-1) ⊕ x_o(k).

Use This Move as a Control. Substituting into the prediction equation (with J1 and J2 the Jacobians of the composition): P(k|k-1) = J1 P(k-1|k-1) J1^T + J2 U_o J2^T, where U_o is the diagonal 3x3 covariance matrix of the error in u_o.
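A sketch of this prediction step with the standard composition Jacobians J1 and J2 (my own implementation, not the course code):

```python
import numpy as np

def compound(a, b):
    """a (+) b for SE(2) poses [x, y, theta]."""
    return np.array([a[0] + b[0] * np.cos(a[2]) - b[1] * np.sin(a[2]),
                     a[1] + b[0] * np.sin(a[2]) + b[1] * np.cos(a[2]),
                     a[2] + b[2]])

def predict(x, P, u, U):
    """EKF prediction using a dead-reckoned incremental move u as control.
    U is the 3x3 covariance of the error in u."""
    x_new = compound(x, u)
    # J1 = d(x (+) u)/dx, J2 = d(x (+) u)/du
    J1 = np.array([[1.0, 0.0, -(x_new[1] - x[1])],
                   [0.0, 1.0,   x_new[0] - x[0]],
                   [0.0, 0.0, 1.0]])
    c, s = np.cos(x[2]), np.sin(x[2])
    J2 = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    P_new = J1 @ P @ J1.T + J2 @ U @ J2.T
    return x_new, P_new
```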

Feature-Based Mapping and Navigation. Look at the code!

Problem Space

Landmarks / Features. Things that stand out to a sensor: corners, windows, walls, bright patches, texture. A map point feature is labelled i. We shall initially proceed by considering only point features; we'll discuss other feature types later.

Three Inference Problems. The data are measurements conditioned on the map and the vehicle; we want to use Bayes' rule to invert this and get maps and vehicles given measurements. Problem 1: inferring location given a map (easy). Problem 2: inferring a map given a location (easy). Problem 3: SLAM, learning a map and localising simultaneously. We can use a KF for all of the above.

Simple Motion Model (Plant Model). Remember: u is the control, the J's are a fancy way of writing the Jacobians of the composition operator, and U is the strength of the noise in the plant model.

Observation Model. The sensor measures the range r (and bearing) from the vehicle to a feature.

Example (code provided). No features are seen here.

Location Covariance

Location Innovation
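For a range-bearing sensor observing a known point feature, a sketch of the observation model h and the innovation, with the bearing wrapped to (-π, π] (the wrapping is my addition; it bites in practice):

```python
import numpy as np

def h(xv, feature):
    """Predicted range and bearing from vehicle pose xv = [x, y, theta]
    to a known point feature [xf, yf]."""
    dx, dy = feature[0] - xv[0], feature[1] - xv[1]
    return np.array([np.hypot(dx, dy),
                     np.arctan2(dy, dx) - xv[2]])

def innovation(z, xv, feature):
    """nu = z - h(xv, feature), with the bearing component wrapped."""
    nu = z - h(xv, feature)
    nu[1] = np.arctan2(np.sin(nu[1]), np.cos(nu[1]))  # wrap to (-pi, pi]
    return nu
```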

Problem II - Mapping. Key point: the state vector GROWS! On observing a new feature, state augmentation appends it to the old map to give a new, bigger map. The state vector is the map.

How is P augmented? Simple! Use the transformation-of-covariance rule, where G is the feature initialisation function.

Leading to: x_f = x_v + r cos(θ_v + β), y_f = y_v + r sin(θ_v + β), where θ_v is the vehicle orientation and β is the measured angle from vehicle to feature.
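A sketch of the full augmentation, growing both the state and P; Gx and Gz are the Jacobians of the initialisation function with respect to the vehicle pose and the measurement (my own implementation, written for a range-bearing sensor):

```python
import numpy as np

def augment(x, P, z, R):
    """Append a newly observed point feature to the SLAM state.
    x = [xv, yv, thetav, <map>]; z = [range, bearing]; R is the 2x2
    covariance of z."""
    xv, yv, th = x[:3]
    r, b = z
    xf = xv + r * np.cos(th + b)       # feature initialisation g(x_v, z)
    yf = yv + r * np.sin(th + b)
    # Jacobians of g w.r.t. vehicle pose and w.r.t. measurement
    Gx = np.array([[1.0, 0.0, -r * np.sin(th + b)],
                   [0.0, 1.0,  r * np.cos(th + b)]])
    Gz = np.array([[np.cos(th + b), -r * np.sin(th + b)],
                   [np.sin(th + b),  r * np.cos(th + b)]])
    n = len(x)
    x_new = np.concatenate([x, [xf, yf]])
    P_new = np.zeros((n + 2, n + 2))
    P_new[:n, :n] = P
    cross = Gx @ P[:3, :]              # correlation with vehicle and old map
    P_new[n:, :n] = cross
    P_new[:n, n:] = cross.T
    P_new[n:, n:] = Gx @ P[:3, :3] @ Gx.T + Gz @ R @ Gz.T
    return x_new, P_new
```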

So what are the models h and f? h is a function of the feature being observed; f (the map does not move) is simply the identity transformation.

Turn the handle on the EKF: All hail the Oracle! How do we know what feature we are observing?

Problem III - SLAM. Build a map and use it at the same time. This is a cornerstone of autonomy.

Basic S.S.C. SLAM: a union of localisation and mapping. Vehicle parameterisation, e.g. (x, y, θ); feature parameterisations, e.g. (x, y) for points. The state vector has the vehicle AND the map. Why naïve? Computation!

Prediction: the vehicle moves; the map remains unchanged. We have all the terms needed for the prediction step.

Feature Initialisation. These two lines are g(); this is y(.). This step only runs when a new feature is seen.

Observation Model. The sensor measures range r and bearing; we have all the terms needed for the update step.
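Putting the terms together, a sketch of the EKF-SLAM update for a range-bearing observation of one map feature (my own implementation; note the Jacobian H is non-zero only in the vehicle block and the observed-feature block):

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return np.arctan2(np.sin(a), np.cos(a))

def update(x, P, z, R, idx):
    """EKF update for a range-bearing observation z of the point feature
    stored at x[idx:idx+2]. x = [xv, yv, thetav, <map>]."""
    xv, yv, th = x[:3]
    xf, yf = x[idx:idx + 2]
    dx, dy = xf - xv, yf - yv
    r2 = dx * dx + dy * dy
    r = np.sqrt(r2)
    z_pred = np.array([r, wrap(np.arctan2(dy, dx) - th)])
    H = np.zeros((2, len(x)))                 # sparse observation Jacobian
    H[:, :3] = np.array([[-dx / r, -dy / r, 0.0],
                         [dy / r2, -dx / r2, -1.0]])
    H[:, idx:idx + 2] = np.array([[dx / r, dy / r],
                                  [-dy / r2, dx / r2]])
    nu = z - z_pred
    nu[1] = wrap(nu[1])                       # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    W = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    return x + W @ nu, P - W @ S @ W.T
```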

ASYNCHRONOUS DATA ACQUISITION / SYNCHRONOUS PROCESSING. [Flowchart:] Data collection runs asynchronously; when new data is available, synchronous processing starts with state projection. Perceptual grouping turns raw sensor data into grouped observations; any applicable technique can be applied here, for example the Hough transform, RANSAC, least medians or maximum likelihood. Individual observations go to data storage, and data association splits them into positive associations (which drive the map update) and unexplained observations. In feature manufacture, new and existing unexplained data is combined with a history of vehicle states to search for a stable initialisation of a new feature: a stable initialisation yields a new feature, ambiguity triggers a batch update, and all newly explained observations are used during initialisation. Delayed state management handles the addition and removal of past vehicle states. Feature management: features can be fused with each other (equivalence assertion), stagnant features can be removed (garbage collection), and compound features can be built.

Feature Types. In theory we can use any geometric parameterisation; in practice you need to be very careful about the choice of parameterisation. If ρ is large, small errors in θ do crazy things! (Feature frame vs. world frame.) Much smarter to use the SPMap (Tardos), which represents geometry and uncertainty in frames centred on a feature. Also see Kai Arras's PhD as an excellent SPMap resource.

The SPMap Closed Loop of Transformations This is the observation model for every feature type!

Symmetry Selection. [Diagram: the measurement-to-feature transformation, relating the measurement, robot and world frames.]

Calculating EKF Jacobians. The common term is x_fe: how does the observation appear in the feature frame?

SLAM in action SPMap Formulation

Human Driven Exploration

Navigating

Autonomous Homing

Using the Map Homing Homing Final Adjustment Get Insurance.

Atlas - Extending the EKF Using Multiple-Frame Mapping. MIT's Infinite Corridor, a 2.2 km driven path. In collaboration with M. Bosse (PhD thesis), Prof. J. Leonard and Prof. S. Teller (MIT).

The Convergence and Stability of SLAM. By analysing the behaviour of the LG-KF we can learn about the governing properties of the SLAM problem, which are actually completely intuitive.

We can show that: (1) the determinant of any submatrix of the map covariance matrix decreases monotonically as observations are successively made; (2) in the limit, as the number of observations increases, the landmark estimates become fully correlated; (3) in the limit, the covariance associated with any single landmark location estimate is determined only by the initial covariance in the vehicle location estimate.
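These properties can be checked on a toy model. Below is a minimal sketch of my own (not from the course): a 1-D world in which a stationary vehicle (prior variance 1) alternately measures the relative position of two landmarks (prior variance 10 each). The map-submatrix determinant falls monotonically, and the landmark estimates become ever more correlated:

```python
import numpy as np

# State = [vehicle, landmark 1, landmark 2] in a 1-D world; the vehicle
# is stationary and each measurement is z = m_i - x_v + noise, so the
# filter is exactly linear (a plain KF).
P = np.diag([1.0, 10.0, 10.0])       # initial covariance
R = 0.1                              # measurement noise variance
dets, corrs = [], []
for k in range(200):
    i = 1 + (k % 2)                  # observe the landmarks alternately
    H = np.zeros((1, 3))
    H[0, 0], H[0, i] = -1.0, 1.0     # z = m_i - x_v
    S = H @ P @ H.T + R              # innovation covariance (1x1)
    W = P @ H.T / S                  # Kalman gain
    P = P - W @ S @ W.T              # covariance update
    Pm = P[1:, 1:]                   # map submatrix
    dets.append(np.linalg.det(Pm))
    corrs.append(Pm[0, 1] / np.sqrt(Pm[0, 0] * Pm[1, 1]))
```

After a few hundred updates the inter-landmark correlation exceeds 0.99, and each landmark's variance has dropped below the initial vehicle variance, consistent with properties (1)-(3) above.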

Proofs Condensed (9)

To Be Tattooed on the Inside of Your Eyelids. The entire structure of the SLAM problem critically depends on maintaining complete knowledge of the cross-correlation between landmark estimates; minimising or ignoring cross-correlations is precisely contrary to the structure of the problem. As the vehicle progresses through the environment, the errors in the estimates of any pair of landmarks become more and more correlated, and indeed never become less correlated. In the limit, the errors in the estimates of any pair of landmarks become fully correlated: given the exact location of any one landmark, the location of any other landmark in the map can also be determined with absolute certainty. As the vehicle moves through the environment taking observations of individual landmarks, the error in the estimates of the relative location between different landmarks reduces monotonically, to the point where the map of relative locations is known with absolute precision. As the map converges in this manner, the error in the absolute location of every landmark (and thus the whole map) reaches a lower bound determined only by the error that existed when the first observation was made. (We didn't prove this here; however, it is an excellent test for consistency in new SLAM algorithms.) This is all under the assumption that we observe all features equally often; for other cases see Kim Sun-Joon's PhD (MIT, 2004).

Pose-Based SLAM. The state vector contains only past vehicle poses, so it grows with time. Each pose has a scan rigidly attached to it; scan matching between past poses produces the observations with which to update the state vector.

Pose-Based Prediction Step. Note that f() changes the size of the state vector! The plant model (f) and its Jacobian are all we need to plug into the EKF prediction equations.
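A sketch of a state-growing prediction of this kind (my own implementation): the new pose is the last pose compounded with the odometry increment, and P gains a new correlated 3x3 block.

```python
import numpy as np

def predict_append(x, P, u, U):
    """Pose-based SLAM prediction: append x_new = x_last (+) u, so the
    state GROWS by one 3-DOF pose. x stacks past poses [..., x_k];
    U is the 3x3 covariance of the odometry increment u."""
    xk = x[-3:]
    c, s = np.cos(xk[2]), np.sin(xk[2])
    xn = np.array([xk[0] + u[0] * c - u[1] * s,
                   xk[1] + u[0] * s + u[1] * c,
                   xk[2] + u[2]])
    # Jacobians of the composition w.r.t. the last pose and the control
    J1 = np.array([[1.0, 0.0, -(xn[1] - xk[1])],
                   [0.0, 1.0,   xn[0] - xk[0]],
                   [0.0, 0.0, 1.0]])
    J2 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    n = len(x)
    x_out = np.concatenate([x, xn])
    P_out = np.zeros((n + 3, n + 3))
    P_out[:n, :n] = P
    cross = J1 @ P[n - 3:, :]        # new pose correlates with ALL old poses
    P_out[n:, :n] = cross
    P_out[:n, n:] = cross.T
    P_out[n:, n:] = J1 @ P[n - 3:, n - 3:] @ J1.T + J2 @ U @ J2.T
    return x_out, P_out
```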

Pose-Based Update Step. A scan match between poses x_v[i] and x_v[j] yields the observation z_{i,j}. The observation model (h) and its Jacobian are all we need to plug into the EKF update equations. [Run live demo here.]

Demonstration 3D Pose Based SLAM

Issues with Single-Frame SLAM. It is uni-modal: it cannot cope with ambiguous situations. It is inconsistent: the linearisations lead to errors which underestimate the covariance of the underlying pdf. It is fragile: if the estimate is in error, the linearisation is very poor, and disaster follows. But the biggest problem is (was) scaling. Much progress has been made on scaling (see upcoming talks), in particular FastSLAM (Montemerlo et al.), Atlas (Bosse et al.), Postponement (Davison, Nebot), CTS (Leonard, Newman) and the Exactly Sparse Delayed State Filter (Eustice et al.; see the third practical).

And Finally: Turn It All on Its Head - the Information Form. The information matrix is the inverse of the covariance (makes sense!). We can propagate an information matrix and an information vector instead of a state and covariance.

Look Again at Augmentation (via the matrix inversion lemma). Note the zeros in the information matrix that don't appear in the EKF version.

So What? [Sparsity plot of the information matrix; nz = 920.] The information matrix remains band-diagonal when augmenting, so augmentation is constant time (compare with the EKF, for which it is linear in the state size). Why? Because of the Markov property of the plant model: we only need the previous state to predict a new pose. There is much that can be said about the structure of the information matrix and the underlying pdf, but time is short.
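A toy demonstration of that band structure (my own sketch, using a 1-D pose chain rather than the 2-D poses of the lecture; the mechanism is the same): each motion factor couples only consecutive poses, so the information matrix is tridiagonal even though the covariance it implies is fully dense.

```python
import numpy as np

# 1-D random-walk pose chain: x_{k+1} = x_k + u_k + w_k, w_k ~ N(0, q).
# Assemble the joint information (inverse covariance) matrix directly
# from the prior and the motion factors.
n, q = 6, 0.5
Lam = np.zeros((n, n))
Lam[0, 0] = 1.0                      # prior on the first pose (variance 1)
for k in range(n - 1):
    # each motion factor touches only x_k and x_{k+1}: band-diagonal
    Lam[k, k] += 1.0 / q
    Lam[k + 1, k + 1] += 1.0 / q
    Lam[k, k + 1] -= 1.0 / q
    Lam[k + 1, k] -= 1.0 / q
P = np.linalg.inv(Lam)               # the equivalent covariance is dense
```

Appending a new pose only adds one more 2x2 factor at the corner of Lam, which is the constant-time augmentation claimed above.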

And there's more: the measurement update simply adds information to the information matrix, so it appears, then, that the update is constant time...

But There's No Free Lunch. More details in Eustice et al.'s ICRA 2005 paper. This relates strongly to contemporary graphical-SLAM methods (see upcoming talks). [Run live demo here.]

What Remains Under the Carpet? Data association (what am I looking at?); loop closing (was I really that lost?); consistency and robustness; scaling; the relationship to large-scale optimisation. These are meaty topics and will be the focus of other summer school lectures. Many thanks for your time.