Introduction to Mobile Robotics Bayes Filter Kalman Filter


Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Giorgio Grisetti, Kai Arras

Bayes Filter Reminder

1. Algorithm Bayes_filter( Bel(x), d ):
2. η = 0
3. If d is a perceptual data item z then
4.   For all x do
5.     Bel'(x) = P(z | x) Bel(x)
6.     η = η + Bel'(x)
7.   For all x do
8.     Bel'(x) = η⁻¹ Bel'(x)
9. Else if d is an action data item u then
10.   For all x do
11.     Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
12. Return Bel'(x)
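The two branches of the algorithm can be sketched in a few lines of NumPy. The 3-cell grid world, the sensor likelihood, and the motion model below are hypothetical numbers chosen only to illustrate the perceptual and action updates:

```python
import numpy as np

def bayes_filter_measurement(bel, p_z_given_x):
    """Perceptual update: Bel'(x) = eta * P(z|x) * Bel(x)."""
    bel_new = p_z_given_x * bel
    return bel_new / bel_new.sum()   # eta normalizes the belief

def bayes_filter_action(bel, p_x_given_xu):
    """Action update: Bel'(x) = sum_x' P(x|u,x') Bel(x')."""
    return p_x_given_xu @ bel

# Toy 1D grid with 3 cells and a uniform prior.
bel = np.array([1/3, 1/3, 1/3])

# Hypothetical sensor likelihood P(z|x): z is most likely in cell 1.
bel = bayes_filter_measurement(bel, np.array([0.1, 0.8, 0.1]))

# Hypothetical motion model P(x|u,x'): "move right" succeeds with prob. 0.9
# (columns index x', rows index x; each column sums to 1).
P = np.array([[0.1, 0.0, 0.0],
              [0.9, 0.1, 0.0],
              [0.0, 0.9, 1.0]])
bel = bayes_filter_action(bel, P)
print(bel)   # belief mass has shifted toward the rightmost cell
```

Note how the measurement branch needs the normalizer η while the action branch, being a convolution of two proper distributions, stays normalized by construction.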

Kalman Filter
Bayes filter with Gaussians. Developed in the late 1950s. The most relevant Bayes filter variant in practice. Applications range from economics, weather forecasting, and satellite navigation to robotics and many more. The Kalman filter "algorithm" is a bunch of matrix multiplications!

Gaussians

Univariate: p(x) = (1 / √(2πσ²)) exp( −(x − µ)² / (2σ²) ), written x ~ N(µ, σ²)
Multivariate: p(x) = det(2πΣ)^(−1/2) exp( −(1/2)(x − µ)ᵀ Σ⁻¹ (x − µ) ), written x ~ N(µ, Σ)

Gaussians [figure slide: 1D, 2D, and 3D Gaussian densities; video]

Properties of Gaussians

Linear transformation: X ~ N(µ, σ²), Y = aX + b  ⇒  Y ~ N(aµ + b, a²σ²)
Product: X₁ ~ N(µ₁, σ₁²), X₂ ~ N(µ₂, σ₂²)  ⇒  p(X₁) · p(X₂) ~ N( (σ₂²µ₁ + σ₁²µ₂) / (σ₁² + σ₂²), 1 / (1/σ₁² + 1/σ₂²) )

Multivariate Gaussians

Linear transformation: X ~ N(µ, Σ), Y = AX + B  ⇒  Y ~ N(Aµ + B, AΣAᵀ)
Product: X₁ ~ N(µ₁, Σ₁), X₂ ~ N(µ₂, Σ₂)  ⇒  p(X₁) · p(X₂) ~ N( Σ₂(Σ₁ + Σ₂)⁻¹µ₁ + Σ₁(Σ₁ + Σ₂)⁻¹µ₂, Σ₁(Σ₁ + Σ₂)⁻¹Σ₂ )
(where the "division" of the univariate case becomes matrix inversion)
We stay Gaussian as long as we start with Gaussians and perform only linear transformations.
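The linear-transformation property can be verified numerically. The mean, covariance, and matrices A, B below are made-up example values:

```python
import numpy as np

# X ~ N(mu, Sigma); the slide's property says Y = A X + B is Gaussian
# with mean A mu + B and covariance A Sigma A^T.
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
B = np.array([0.5, -1.0])

mu_y = A @ mu + B           # closed-form mean of Y
Sigma_y = A @ Sigma @ A.T   # closed-form covariance of Y

# Monte Carlo sanity check with an illustrative sample size.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X @ A.T + B
print(np.allclose(Y.mean(axis=0), mu_y, atol=0.05))
print(np.allclose(np.cov(Y.T), Sigma_y, atol=0.1))
```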

Discrete Kalman Filter

Estimates the state x of a discrete-time controlled process that is governed by the linear stochastic difference equation
x_t = A_t x_{t-1} + B_t u_t + ε_t
with a measurement
z_t = C_t x_t + δ_t

Components of a Kalman Filter

A_t: matrix (n×n) that describes how the state evolves from t−1 to t without controls or noise.
B_t: matrix (n×l) that describes how the control u_t changes the state from t−1 to t.
C_t: matrix (k×n) that describes how to map the state x_t to an observation z_t.
ε_t, δ_t: random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariance Q_t and R_t, respectively.

Bayes Filter Reminder

Prediction: bel̄(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}
Correction: bel(x_t) = η p(z_t | x_t) bel̄(x_t)

Kalman Filter Updates in 1D

Kalman Filter Updates in 1D

How to get the blue one? → Kalman correction step:
µ_t = µ̄_t + K_t (z_t − C_t µ̄_t)
Σ_t = (I − K_t C_t) Σ̄_t
with K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + R_t)⁻¹

Kalman Filter Updates in 1D

How to get the magenta one? → State prediction step:
µ̄_t = A_t µ_{t-1} + B_t u_t
Σ̄_t = A_t Σ_{t-1} A_tᵀ + Q_t

Kalman Filter Updates

Linear Gaussian Systems: Initialization

Initial belief is normally distributed:
bel(x_0) = N(x_0; µ_0, Σ_0)

Linear Gaussian Systems: Dynamics

Dynamics are a linear function of state and control plus additive noise:
p(x_t | u_t, x_{t-1}) = N(x_t; A_t x_{t-1} + B_t u_t, Q_t)

bel̄(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}
where p(x_t | u_t, x_{t-1}) ~ N(x_t; A_t x_{t-1} + B_t u_t, Q_t) and bel(x_{t-1}) ~ N(x_{t-1}; µ_{t-1}, Σ_{t-1})

Linear Gaussian Systems: Dynamics

bel̄(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}
         = η ∫ exp( −(1/2)(x_t − A_t x_{t-1} − B_t u_t)ᵀ Q_t⁻¹ (x_t − A_t x_{t-1} − B_t u_t) ) · exp( −(1/2)(x_{t-1} − µ_{t-1})ᵀ Σ_{t-1}⁻¹ (x_{t-1} − µ_{t-1}) ) dx_{t-1}

⇒ bel̄(x_t): µ̄_t = A_t µ_{t-1} + B_t u_t, Σ̄_t = A_t Σ_{t-1} A_tᵀ + Q_t

Linear Gaussian Systems: Observations

Observations are a linear function of the state plus additive noise:
p(z_t | x_t) = N(z_t; C_t x_t, R_t)

bel(x_t) = η p(z_t | x_t) bel̄(x_t)
where p(z_t | x_t) ~ N(z_t; C_t x_t, R_t) and bel̄(x_t) ~ N(x_t; µ̄_t, Σ̄_t)

Linear Gaussian Systems: Observations

bel(x_t) = η p(z_t | x_t) bel̄(x_t)
        = η exp( −(1/2)(z_t − C_t x_t)ᵀ R_t⁻¹ (z_t − C_t x_t) ) · exp( −(1/2)(x_t − µ̄_t)ᵀ Σ̄_t⁻¹ (x_t − µ̄_t) )

⇒ bel(x_t): µ_t = µ̄_t + K_t (z_t − C_t µ̄_t), Σ_t = (I − K_t C_t) Σ̄_t
with K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + R_t)⁻¹

Kalman Filter Algorithm

1. Algorithm Kalman_filter( µ_{t-1}, Σ_{t-1}, u_t, z_t ):
2. Prediction:
3.   µ̄_t = A_t µ_{t-1} + B_t u_t
4.   Σ̄_t = A_t Σ_{t-1} A_tᵀ + Q_t
5. Correction:
6.   K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + R_t)⁻¹
7.   µ_t = µ̄_t + K_t (z_t − C_t µ̄_t)
8.   Σ_t = (I − K_t C_t) Σ̄_t
9. Return µ_t, Σ_t
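The algorithm translates almost line for line into NumPy. The constant-velocity model, the noise covariances, and the measurement sequence below are illustrative assumptions, not part of the slides:

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, Q, R):
    """One predict/correct cycle in the slides' notation
    (Q = process noise covariance, R = measurement noise covariance)."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + Q
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + R)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Hypothetical constant-velocity example: state = [position, velocity].
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.zeros((2, 1))          # no control input in this example
C = np.array([[1.0, 0.0]])    # we observe position only
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])

mu, Sigma = np.zeros(2), 10.0 * np.eye(2)   # vague initial belief
for z in [1.0, 2.1, 2.9, 4.2]:              # noisy position readings
    mu, Sigma = kalman_filter(mu, Sigma, np.zeros(1), np.array([z]),
                              A, B, C, Q, R)
print(mu)   # position estimate near the last reading, velocity near 1
```

Even though velocity is never measured directly, it becomes observable through the position sequence via the A matrix, which is why the filter recovers it.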

Kalman Filter Algorithm

Kalman Filter Algorithm [figure: cycle of prediction, observation, matching, and correction]

The Prediction-Correction-Cycle

Prediction:
1D: µ̄_t = a_t µ_{t-1} + b_t u_t, σ̄_t² = a_t² σ_{t-1}² + σ_act,t²
General: µ̄_t = A_t µ_{t-1} + B_t u_t, Σ̄_t = A_t Σ_{t-1} A_tᵀ + Q_t

The Prediction-Correction-Cycle

Correction:
1D: µ_t = µ̄_t + K_t (z_t − µ̄_t), σ_t² = (1 − K_t) σ̄_t², with K_t = σ̄_t² / (σ̄_t² + σ_obs,t²)
General: µ_t = µ̄_t + K_t (z_t − C_t µ̄_t), Σ_t = (I − K_t C_t) Σ̄_t, with K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + R_t)⁻¹

The Prediction-Correction-Cycle

Prediction:
1D: µ̄_t = a_t µ_{t-1} + b_t u_t, σ̄_t² = a_t² σ_{t-1}² + σ_act,t²
General: µ̄_t = A_t µ_{t-1} + B_t u_t, Σ̄_t = A_t Σ_{t-1} A_tᵀ + Q_t

Correction:
1D: µ_t = µ̄_t + K_t (z_t − µ̄_t), σ_t² = (1 − K_t) σ̄_t², with K_t = σ̄_t² / (σ̄_t² + σ_obs,t²)
General: µ_t = µ̄_t + K_t (z_t − C_t µ̄_t), Σ_t = (I − K_t C_t) Σ̄_t, with K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + R_t)⁻¹
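The 1D special case is compact enough to sketch directly; the motion and sensor noise values below are arbitrary illustration numbers:

```python
def kf_1d_predict(mu, var, a, b, u, var_act):
    """1D prediction: mu_bar = a*mu + b*u, var_bar = a^2*var + var_act."""
    return a * mu + b * u, a**2 * var + var_act

def kf_1d_correct(mu_bar, var_bar, z, var_obs):
    """1D correction with gain K = var_bar / (var_bar + var_obs)."""
    K = var_bar / (var_bar + var_obs)
    return mu_bar + K * (z - mu_bar), (1 - K) * var_bar

# Hypothetical cycle: move forward by u = 1 with motion noise 0.2,
# then observe z = 1.2 with sensor noise 0.1.
mu, var = 0.0, 1.0
mu, var = kf_1d_predict(mu, var, a=1.0, b=1.0, u=1.0, var_act=0.2)
mu, var = kf_1d_correct(mu, var, z=1.2, var_obs=0.1)
print(mu, var)   # estimate between prediction (1.0) and measurement (1.2)
```

Note the two directions of the cycle: prediction always grows the variance (adding σ_act,t²), correction always shrinks it (multiplying by 1 − K_t < 1).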

Kalman Filter Summary

Highly efficient: polynomial in the measurement dimensionality k and the state dimensionality n: O(k^2.376 + n²)
Optimal for linear Gaussian systems!
Most robotics systems are nonlinear!

Nonlinear Dynamic Systems

Most realistic robotic problems involve nonlinear functions:
x_t = g(u_t, x_{t-1})
z_t = h(x_t)

Linearity Assumption Revisited

Non-linear Function

EKF Linearization (1)

EKF Linearization (2)

EKF Linearization (3)

EKF Linearization: First Order Taylor Series Expansion

Prediction:
g(u_t, x_{t-1}) ≈ g(u_t, µ_{t-1}) + G_t (x_{t-1} − µ_{t-1}), with Jacobian G_t = ∂g(u_t, µ_{t-1}) / ∂x_{t-1}

Correction:
h(x_t) ≈ h(µ̄_t) + H_t (x_t − µ̄_t), with Jacobian H_t = ∂h(µ̄_t) / ∂x_t

EKF Algorithm

1. Extended_Kalman_filter( µ_{t-1}, Σ_{t-1}, u_t, z_t ):
2. Prediction:
3.   µ̄_t = g(u_t, µ_{t-1})                      (KF: µ̄_t = A_t µ_{t-1} + B_t u_t)
4.   Σ̄_t = G_t Σ_{t-1} G_tᵀ + Q_t               (KF: Σ̄_t = A_t Σ_{t-1} A_tᵀ + Q_t)
5. Correction:
6.   K_t = Σ̄_t H_tᵀ (H_t Σ̄_t H_tᵀ + R_t)⁻¹      (KF: K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + R_t)⁻¹)
7.   µ_t = µ̄_t + K_t (z_t − h(µ̄_t))             (KF: µ_t = µ̄_t + K_t (z_t − C_t µ̄_t))
8.   Σ_t = (I − K_t H_t) Σ̄_t                    (KF: Σ_t = (I − K_t C_t) Σ̄_t)
9. Return µ_t, Σ_t
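A minimal EKF sketch, assuming a hypothetical 2D-position state with a simple translation motion model and a nonlinear range measurement to a beacon at the origin (the example, its models, and all numbers are illustrative assumptions, not from the slides):

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G_fn, h, H_fn, Q, R):
    """One EKF cycle: Jacobians G_t, H_t replace A_t, C_t of the linear KF."""
    # Prediction (linearize the motion model g around mu)
    mu_bar = g(u, mu)
    G = G_fn(u, mu)
    Sigma_bar = G @ Sigma @ G.T + Q
    # Correction (linearize the measurement model h around mu_bar)
    H = H_fn(mu_bar)
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + R)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new

# Motion: plain translation by u; measurement: range ||x|| to a beacon
# at the origin, which is nonlinear in the state.
g = lambda u, x: x + u
G_fn = lambda u, x: np.eye(2)                   # Jacobian of g w.r.t. x
h = lambda x: np.array([np.hypot(x[0], x[1])])  # range measurement
H_fn = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])

mu, Sigma = np.array([1.0, 1.0]), 0.5 * np.eye(2)
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
mu, Sigma = ekf_step(mu, Sigma, np.array([1.0, 0.0]), np.array([2.0]),
                     g, G_fn, h, H_fn, Q, R)
print(mu)   # measured range (2.0) is shorter than predicted (~2.24),
            # so the estimate is pulled toward the beacon
```

The correction moves the estimate along the gradient direction H_tᵀ, i.e. radially toward or away from the beacon, which is exactly what a range-only measurement can constrain.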