
EKF and SLAM. McGill COMP 765, Sept 18th, 2017

Outline: news and information; instructions for paper presentations; continue on the Kalman filter: the EKF and its extension to mapping; example of a real mapping system: visual structure from motion.

Next 3 Lectures. Sept 21st: normal room, led by Florian and Travis; required reading (at least quickly): the 2 papers on the reading list. Sept 26th and 28th: MC 103, joint with COMP 417, led by Jose Ruis, a professor in intelligent control from Mexico.

Paper Presentations. Presenters: read the paper carefully and prepare roughly 15 minutes of speaking. You can show the paper PDF and use its figures, but it is probably better to show a PowerPoint summary and say things in your own words. The goal is to identify the key points of the paper and teach the class something; the goal is not to be a complete expert. It's OK to raise confusing points. Remember, all research papers are imperfect! Audience: read the papers ahead of time, come up with questions on the spot, and fill out feedback forms afterwards.

Kalman Filter: an instance of the Bayes Filter. The assumptions (linear dynamics with Gaussian noise, linear observations with Gaussian noise, and a Gaussian initial belief) guarantee that if the belief before the prediction step is Gaussian, then the belief after the prediction step is Gaussian, and the posterior belief (after the update step) is Gaussian.

Kalman Filter: an instance of the Bayes Filter. Suppose you replace the linear models with nonlinear models. Does the posterior remain Gaussian? NO.

Linearity Assumption Revisited: if x ~ N(μ, Σ) and y = Ax + b, then y ~ N(Aμ + b, A Σ Aᵀ); a linear function of a Gaussian random variable is again Gaussian.

Nonlinear Function: if x ~ N(μ, Σ) and y = g(x) for a nonlinear g, then y is not necessarily distributed as a Gaussian.

Nonlinear Function: if x ~ N(μ, Σ) and y = g(x) for a nonlinear g, then y is not necessarily distributed as a Gaussian. How can we approximate p(y) using a single Gaussian, without having a formula for p(y)? IDEA #1: MONTE CARLO SAMPLING. Draw many (e.g. 10^6) samples x_i ~ p(x), pass them through the nonlinear function y_i = g(x_i), compute the empirical mean m and covariance S of the samples y_i, and return Normal(m, S). IDEA #2: LINEARIZE THE NONLINEAR FUNCTIONS f, h. Then we are back in the case y = Gx + c, so p(y) is Gaussian.

Monte Carlo sampling (IDEA #1) is how the Gaussian approximation of p(y) shown in the figure was computed. However, it is a bad idea to use inside the Kalman Filter, because we want to approximate p(y) efficiently.
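To make the two ideas concrete, here is a minimal Python sketch (not from the slides) that propagates a 1-D Gaussian through an illustrative nonlinearity g(x) = sin(x) both ways: Monte Carlo sampling versus first-order linearization. The function names and the choice of g are assumptions for illustration only.

```python
# Minimal sketch comparing the two approximation ideas for propagating
# p(x) = N(mu, sigma^2) through a nonlinear g(x).
import numpy as np

def monte_carlo_gaussian(g, mu, sigma, n_samples=10**6, rng=None):
    """IDEA #1: sample x ~ p(x), push samples through g, fit a Gaussian to y."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(mu, sigma, size=n_samples)
    y = g(x)
    return y.mean(), y.var()

def linearized_gaussian(g, dg, mu, sigma):
    """IDEA #2: first-order Taylor expansion of g around mu:
    y ~= g(mu) + dg(mu) * (x - mu), so p(y) is Gaussian with these moments."""
    a = dg(mu)                          # slope of the linearization at the mean
    return g(mu), (a ** 2) * sigma ** 2

# Illustrative nonlinearity (an assumption, not the lecture's example).
g, dg = np.sin, np.cos

print(monte_carlo_gaussian(g, mu=0.5, sigma=0.1))   # small variance: the two agree
print(linearized_gaussian(g, mu=0.5, sigma=0.1))
print(monte_carlo_gaussian(g, mu=0.5, sigma=1.0))   # large variance: they diverge
print(linearized_gaussian(g, mu=0.5, sigma=1.0))
```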

Linearization: notice how the Linearization approximation differs from the Monte Carlo approximation (which is better, provided sufficiently many samples). That said, the Linearization approximation can be computed efficiently and can be integrated into the Kalman Filter estimator: this gives the Extended Kalman Filter.

Linearization with low approximation error: the quality of the linearization depends on the uncertainty of p(x) and also on the shape of the nonlinearity g(x). In this example p(x) has small variance, so most points are concentrated around 0.5 and pass through a very small region of g(x), where g(x) is close to linear. In this case p(y) is nearly Gaussian, and the Linearization approximation matches the Monte Carlo approximation.

Linearization with high approximation error: the quality of the linearization depends on the uncertainty of p(x) and also on the shape of the nonlinearity g(x). In this example p(x) has high variance, so the points g(x) are spread out around g(0.5), where g(x) is not close to linear. In this case p(y) is multimodal, the Linearization approximation no longer matches the Monte Carlo approximation, and both single-Gaussian approximations are suboptimal. Again, Monte Carlo is better, provided sufficiently many samples.

How do we linearize? For the dynamics, use the first-order Taylor expansion around the mean of the previous update step's state estimate: g(x_{t-1}, u_t) ≈ g(μ_{t-1}, u_t) + G_t (x_{t-1} - μ_{t-1}), where G_t is the Jacobian of g with respect to the state evaluated at μ_{t-1}, and g(μ_{t-1}, u_t) - G_t μ_{t-1} is a constant term with respect to the state. Recall how to compute the Jacobian matrix: for f : R^n → R^m, the Jacobian with respect to x is the m × n matrix of partial derivatives [∂f_i/∂x_j], evaluated at the linearization point.

How do we linearize the measurement model? Use the first-order Taylor expansion around the mean produced by the prediction step: h(x_t) ≈ h(μ̄_t) + H_t (x_t - μ̄_t), where H_t is the Jacobian of h with respect to the state evaluated at μ̄_t, and h(μ̄_t) - H_t μ̄_t is a constant term with respect to the state.
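As a concrete illustration of the Jacobian computation, here is a hedged sketch assuming a simple unicycle motion model with state (px, py, theta) and control (v, w); this specific model and the helper names g and G_jacobian are assumptions for illustration, not necessarily the dynamics used on the slide.

```python
# Sketch of the linearization step for an assumed unicycle motion model.
import numpy as np

def g(x, u, dt=0.1):
    """Nonlinear dynamics: advance the pose by one time step."""
    px, py, th = x
    v, w = u
    return np.array([px + v * np.cos(th) * dt,
                     py + v * np.sin(th) * dt,
                     th + w * dt])

def G_jacobian(x, u, dt=0.1):
    """Jacobian of g with respect to the state, evaluated at (x, u)."""
    _, _, th = x
    v, _ = u
    return np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                     [0.0, 1.0,  v * np.cos(th) * dt],
                     [0.0, 0.0,  1.0]])

# First-order Taylor expansion around the previous mean mu:
# g(x, u) ~= g(mu, u) + G_jacobian(mu, u) @ (x - mu);
# the term g(mu, u) - G @ mu is constant with respect to the state.
```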

Extended Kalman Filter: an instance of the Bayes Filter. Assumptions: nonlinear dynamics with Gaussian noise, nonlinear observations with Gaussian noise, and a Gaussian initial belief. Because the models are linearized, the beliefs after the prediction step and after the update step are only approximately Gaussian; the exact Gaussianity guarantees of the linear Kalman Filter no longer hold.

EKF in N dimensions.
Dynamics: x_t = g(x_{t-1}, u_t) + ε_t, with ε_t ~ N(0, R_t).
Measurements: z_t = h(x_t) + δ_t, with δ_t ~ N(0, Q_t).
Init: μ_0, Σ_0.
Prediction Step: μ̄_t = g(μ_{t-1}, u_t), Σ̄_t = G_t Σ_{t-1} G_tᵀ + R_t, where G_t is the Jacobian of g at μ_{t-1}.
Update Step: we received measurement z_t but expected to receive h(μ̄_t). The prediction residual z_t - h(μ̄_t) is a Gaussian random variable whose covariance is S_t = H_t Σ̄_t H_tᵀ + Q_t, where H_t is the Jacobian of h at μ̄_t. Kalman Gain (optimal correction factor): K_t = Σ̄_t H_tᵀ S_t⁻¹, giving μ_t = μ̄_t + K_t (z_t - h(μ̄_t)) and Σ_t = (I - K_t H_t) Σ̄_t.
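Putting the prediction and update steps together, a minimal EKF step might look like the following sketch. It follows the notation above (dynamics g with Jacobian G and noise covariance R, measurement h with Jacobian H and noise covariance Q); the function signature is an assumption for illustration.

```python
# A minimal sketch of one EKF prediction + update step in N dimensions.
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q):
    # Prediction step
    mu_bar    = g(mu, u)
    G_t       = G(mu, u)
    Sigma_bar = G_t @ Sigma @ G_t.T + R

    # Update step
    H_t = H(mu_bar)
    r   = z - h(mu_bar)                          # residual: received minus expected
    S   = H_t @ Sigma_bar @ H_t.T + Q            # covariance of the residual
    K   = Sigma_bar @ H_t.T @ np.linalg.inv(S)   # Kalman gain
    mu_new    = mu_bar + K @ r
    Sigma_new = (np.eye(len(mu)) - K @ H_t) @ Sigma_bar
    return mu_new, Sigma_new
```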

EKF Summary. Efficient: polynomial in the measurement dimensionality k and the state dimensionality n, O(k^2.376 + n^2); as in the KF, inverting the covariance of the residual is O(k^2.376). Not optimal (unlike the Kalman Filter for linear systems). Can diverge if the nonlinearities are large. Works surprisingly well even when all assumptions are violated.

Example #1: EKF-Localization. "Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities." [Cox 91] Given: a map of the environment and a sequence of sensor measurements. Wanted: an estimate of the robot's position. Problem classes: position tracking, global localization, and the kidnapped robot problem (recovery).

Landmark-based Localization. Landmarks whose positions in the world are known. The robot measures its range and bearing from each landmark to localize itself. State of the robot: (x, y, θ), i.e. position and heading.

Landmark-based Localization. The measurement at time t is a variable-sized vector, depending on the landmarks that are visible at time t. Each per-landmark measurement is a 2D vector containing the range and bearing from the robot to that landmark.
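For example, under the common range-bearing model, the per-landmark measurement function and its Jacobian with respect to the robot state could be sketched as below; the state layout (px, py, theta) and the helper names are assumptions for illustration.

```python
# Sketch of a range-bearing measurement model for one known landmark.
import numpy as np

def h_range_bearing(x, landmark):
    px, py, th = x
    lx, ly = landmark
    dx, dy = lx - px, ly - py
    rng = np.hypot(dx, dy)                       # range to the landmark
    brg = np.arctan2(dy, dx) - th                # bearing relative to heading
    brg = (brg + np.pi) % (2 * np.pi) - np.pi    # wrap to [-pi, pi)
    return np.array([rng, brg])

def H_range_bearing(x, landmark):
    """Jacobian of the (range, bearing) measurement w.r.t. the robot state."""
    px, py, _ = x
    lx, ly = landmark
    dx, dy = lx - px, ly - py
    q = dx**2 + dy**2
    return np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                     [ dy / q,          -dx / q,          -1.0]])
```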

Estimation Sequence (1)

Estimation Sequence (2)

Comparison to the true trajectory

Introducing: the SLAM problem. So far we have assumed the measurement model is a function of the state and observation alone, which presumes knowledge of the world map (e.g. the 3-doors environment). Robots often need to navigate without a pre-known map. This requires simultaneously building a map and localizing the robot: the SLAM problem (e.g. jointly determining where the doors and the robot are located).

First method: EKF-SLAM. A robot is exploring an unknown, static environment. Given: the robot's controls and observations of nearby features. Estimate: the map of features and the path of the robot.

Structure of Landmark-based SLAM

Why is SLAM a hard problem? The robot path and the map are both unknown, and robot path errors correlate the errors in the map.

Why is SLAM a hard problem? Robot pose uncertainty: in the real world, the mapping between observations and landmarks is unknown, and picking wrong data associations can have catastrophic consequences. Pose error correlates the data associations.

EKF-SLAM. Map with N landmarks: the belief Bel(x_t, m_t) is a (3+2N)-dimensional Gaussian over the joint state (x, y, θ, l_1, ..., l_N), where each landmark l_i = (x_{l_i}, y_{l_i}). Its covariance matrix contains the robot-pose block, a 2x2 block per landmark, and all robot-landmark and landmark-landmark cross-covariances. This representation can handle hundreds of dimensions.
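A minimal sketch of how this (3+2N)-dimensional belief might be represented and grown when a new landmark is first observed is shown below. The large-diagonal initialization is a simplification; a full EKF-SLAM implementation would initialize the landmark from the measurement and fill in the cross-covariance terms.

```python
# Sketch of the joint EKF-SLAM state: robot pose plus N landmark positions.
import numpy as np

def init_slam_belief(pose=np.zeros(3), n_landmarks=0):
    dim = 3 + 2 * n_landmarks
    mu = np.zeros(dim)
    mu[:3] = pose
    Sigma = np.zeros((dim, dim))      # pose assumed known exactly at t = 0
    return mu, Sigma

def add_landmark(mu, Sigma, landmark_xy, init_var=1e6):
    """Augment the state with a newly observed landmark.
    Its initial variance is large; subsequent EKF updates correlate it with
    the robot pose and the other landmarks (off-diagonal blocks of Sigma)."""
    mu_new = np.concatenate([mu, landmark_xy])
    dim = len(mu_new)
    Sigma_new = np.zeros((dim, dim))
    Sigma_new[:dim - 2, :dim - 2] = Sigma
    Sigma_new[dim - 2:, dim - 2:] = init_var * np.eye(2)
    return mu_new, Sigma_new
```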

EKF-SLAM: map and covariance matrix (figure sequence)

Properties of EKF-SLAM (Linear Case) [Dissanayake et al., 2001]. Theorem: the determinant of any sub-matrix of the map covariance matrix decreases monotonically as successive observations are made. Theorem: in the limit, the landmark estimates become fully correlated.

Example 2: Particle Filter SLAM (the dumb way). Augment each particle's state estimate with full map information, so each particle contains a full copy of the map. What are the practical implications of this?

Example 3: Rao-Blackwellized Particle Filters (the modern way)

Visual SLAM: localization and mapping with measurements usually coming from tracking image features (keypoints/corners, edges, image intensity patches); can use one or more cameras. Visual Odometry: real-time localization with measurements usually coming from tracking image features (keypoints/corners, edges, image intensity patches); can use one or more cameras.

Structure from Motion: how can we estimate both the 3D point positions and the relative camera transformations? Sometimes also called bundle adjustment. Q: Why is it different from SLAM? A: SLAM potentially also includes loop closure, dynamics constraints, and velocities and accelerations.

Basic underlying component in many of these problems: keypoint detection and matching across images

Ideally, we want the descriptor to be invariant (i.e. to change little or not at all) under viewpoint changes (small rotation or translation of the camera), scale changes, and illumination changes.

Corner detectors: Harris, FAST, Laplacian of Gaussian, SUSAN, Forstner.

Scale-space representation. Feature detection: search for corners/keypoints across many scales, and return a list of (x, y, scale) keypoints.

Many other local descriptors: ORB, BRIEF, FREAK, RootSIFT-PCA.

Feature matching
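As a concrete example of detection plus matching, here is a short sketch using OpenCV's ORB detector and a brute-force Hamming matcher (one common choice among the detectors and descriptors listed above). The image file names are placeholders.

```python
# Sketch of keypoint detection and matching between two frames with OpenCV.
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)             # detector + binary descriptor
kp1, des1 = orb.detectAndCompute(img1, None)     # keypoints carry (x, y, scale)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (appropriate for binary descriptors).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance}")
```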

Structure from Motion (revisited): how can we estimate both the 3D point positions and the relative camera transformations? Sometimes also called bundle adjustment. Q: Why is it different from SLAM? A: SLAM potentially also includes loop closure, dynamics constraints, and velocities and accelerations.

Loop Closure in Visual SLAM: ORB-SLAM (Mur-Artal, Tardos, Montiel, Galvez-Lopez).

Bundler (bundle adjustment/structure from motion)

Structure from Motion as Least Squares: minimize, over the camera poses T_k and the 3D points p_i, the sum over all (camera k, point i) observations of || z_{ik} - π(T_k, p_i) ||², where z_{ik} is the actual pixel measurement of 3D point i from camera k, π(T_k, p_i) is the expected pixel projection of that point onto camera k, and T_k indicates the frame of the k-th camera.
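A hedged sketch of evaluating this least-squares objective with a pinhole camera model is given below; the data structures (per-camera intrinsics K and pose (R, t), and a list of (camera, point, pixel) observations) are assumptions for illustration, and a real bundle adjuster would optimize the poses and points rather than just evaluate the cost.

```python
# Sketch of the structure-from-motion least-squares (reprojection) cost.
import numpy as np

def project(K, R, t, p):
    """Expected pixel projection of 3D point p onto a camera with pose (R, t)."""
    pc = R @ p + t                    # point expressed in the camera frame
    uvw = K @ pc
    return uvw[:2] / uvw[2]           # perspective division

def reprojection_cost(cameras, points, observations):
    """cameras: list of (K, R, t); points: list of 3D points;
    observations: list of (k, i, z_ik) with z_ik the measured pixel."""
    cost = 0.0
    for k, i, z in observations:
        K, R, t = cameras[k]
        cost += np.sum((z - project(K, R, t, points[i])) ** 2)
    return cost
```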