Zürich. ETH Master Course 151-0854-00L: Autonomous Mobile Robots. Localization II


Roland Siegwart, Margarita Chli, Paul Furgale, Marco Hutter, Martin Rufli, Davide Scaramuzza. ETH Master Course 151-0854-00L: Autonomous Mobile Robots. Localization II

ACT and SEE
For all poses x_t do:
    (prediction update / ACT)
    (measurement update / SEE)
endfor
Return

Map Representation. Continuous line-based: a) architecture map; b) representation with a set of finite or infinite lines.

Map Representation. Exact cell decomposition. Exact cell decomposition: polygons.

Map Representation. Approximate cell decomposition. Fixed cell decomposition: narrow passages disappear.

Map Representation. Adaptive cell decomposition. Exercise: how do we implement an adaptive cell decomposition algorithm?
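One standard answer to this exercise is a quadtree: recursively split a square region of the map until each cell is entirely free, entirely occupied, or a minimum size is reached. A minimal sketch in Python; the occupancy grid, function names, and cell labels below are illustrative, not from the lecture:

```python
# Adaptive cell decomposition via a quadtree: split a square block of an
# occupancy grid until it is homogeneous (all free or all occupied) or
# the minimum cell size is reached.

def is_homogeneous(grid, x, y, size):
    """True if every cell in the size-by-size block at (x, y) has the same value."""
    vals = {grid[y + j][x + i] for j in range(size) for i in range(size)}
    return len(vals) == 1

def quadtree(grid, x=0, y=0, size=None, min_size=1):
    """Return the leaf cells as (x, y, size, occupied) tuples."""
    if size is None:
        size = len(grid)
    if size <= min_size or is_homogeneous(grid, x, y, size):
        return [(x, y, size, grid[y][x])]
    half = size // 2
    leaves = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        leaves += quadtree(grid, x + dx, y + dy, half, min_size)
    return leaves

# 4x4 map with one occupied cell: only the quadrant containing the obstacle
# is subdivided further; the three free quadrants stay coarse.
grid = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
leaves = quadtree(grid)
```

Note how the decomposition is adaptive: the free quadrants remain single large cells, so the number of cells grows only near obstacles.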

Map Representation. Topological map. A topological map represents the environment as a graph with nodes and edges. Nodes correspond to spaces. Edges correspond to physical connections between nodes. Topological maps lack scale and distances, but topological relationships (e.g., left, right, etc.) are maintained. node (location); edge (connectivity).
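As a sketch of this representation, a topological map can be stored as an adjacency list, and route queries reduce to graph search over connectivity alone, with no metric distances involved. The place names below are illustrative, not from the lecture:

```python
from collections import deque

# A topological map as a graph: nodes are places, edges are physical
# connections. No scale or metric distance is stored, only connectivity.
topo_map = {
    "hall":     ["corridor"],
    "corridor": ["hall", "office", "lab"],
    "office":   ["corridor"],
    "lab":      ["corridor", "workshop"],
    "workshop": ["lab"],
}

def shortest_route(graph, start, goal):
    """Breadth-first search: fewest edges, since edges carry no distances."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

route = shortest_route(topo_map, "hall", "workshop")
```

Because the graph has no edge weights, "shortest" here means fewest connections, which is exactly the information a topological map preserves.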

Map Representation. Topological map: London Underground map.

Probabilistic Map-Based Localization

Solution to the probabilistic localization problem. A probabilistic approach to the mobile robot localization problem is a method able to compute the probability distribution of the robot configuration during each Action (ACT) and Perception (SEE) step. The ingredients are:
1. The initial probability distribution p(x_0)
2. The statistical error model of the proprioceptive sensors (e.g., wheel encoders)
3. The statistical error model of the exteroceptive sensors (e.g., laser, sonar, camera)
4. A map of the environment (if the map is not known a priori, then the robot needs to build a map of the environment and then localize in it; this is called SLAM, Simultaneous Localization And Mapping)

Illustration of probabilistic map-based localization. Initial probability distribution p(x_0). Perception update: bel(x_t) = η p(z_t | x_t) bel'(x_t), where bel' is the predicted belief from the action update. Action update. Perception update again: bel(x_t) = η p(z_t | x_t) bel'(x_t).


Probabilistic Map-Based Localization: Markov Localization

Markov localization. Markov localization uses a grid-space representation of the robot configuration.
For all grid cells x_t do:
    (prediction update)
    (measurement update)
endfor
Return

Markov localization. Let us discretize the configuration space into 10 cells. Suppose that the robot's initial belief is a uniform distribution from cell 0 to cell 3. Observe that all the elements are normalized so that their sum is 1.

Markov localization. Initial belief distribution. Action phase: let us assume that the robot moves forward with the following statistical model. This means that there is a 50% probability that the robot moved 2 or 3 cells forward. Considering what the probability was before moving, what will the probability be after the motion?

Markov localization. Action update. The solution is given by the convolution (cross-correlation) of the two distributions: bel'(x_t) = Σ_k bel(x_t − k) p(moved k cells).
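Using the numbers from this example (10 cells, uniform prior over cells 0 to 3, and a 50% chance of moving either 2 or 3 cells forward), the action update can be sketched as a discrete convolution; the function name is illustrative:

```python
# Action update for grid-based Markov localization: convolve the prior
# belief with the motion model p(moved k cells) = 0.5 for k in {2, 3}.

def action_update(belief, motion):
    """bel'(x) = sum_k bel(x - k) * p(k), truncated to the grid."""
    n = len(belief)
    new_belief = [0.0] * n
    for x in range(n):
        for k, pk in motion.items():
            if 0 <= x - k < n:
                new_belief[x] += belief[x - k] * pk
    return new_belief

belief = [0.25, 0.25, 0.25, 0.25, 0, 0, 0, 0, 0, 0]  # uniform over cells 0..3
motion = {2: 0.5, 3: 0.5}                            # moved 2 or 3 cells forward
predicted = action_update(belief, motion)
# predicted = [0.0, 0.0, 0.125, 0.25, 0.25, 0.25, 0.125, 0.0, 0.0, 0.0]
```

The probability mass spreads out: the motion is uncertain, so the belief after moving is flatter than the belief before.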

Markov localization. Perception update. Let us now assume that the robot uses its onboard range finder and measures the distance from the origin. Assume that the statistical error model of the sensor is as shown. This plot tells us that the distance of the robot from the origin can equally be 5 or 6 units. What will the final robot belief be after this measurement? The answer is again given by Bayes' rule: bel(x_t) = η p(z_t | x_t) bel'(x_t).
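Continuing the same worked example: after the action update the belief mass sits on cells 2 to 6, and the range finder reports a distance of 5 or 6 units with equal probability. Multiplying belief by likelihood and renormalizing (Bayes' rule) can be sketched as:

```python
# Perception update: bel(x) = eta * p(z | x) * bel'(x), where eta
# renormalizes the posterior so it sums to 1.
def perception_update(belief, likelihood):
    posterior = [b * l for b, l in zip(belief, likelihood)]
    eta = 1.0 / sum(posterior)          # normalizer
    return [p * eta for p in posterior]

bel_pred = [0, 0, 0.125, 0.25, 0.25, 0.25, 0.125, 0, 0, 0]  # after the action update
likelihood = [0, 0, 0, 0, 0, 0.5, 0.5, 0, 0, 0]             # measured 5 or 6 units
posterior = perception_update(bel_pred, likelihood)
# Cell 5 had twice the prior mass of cell 6, so the posterior is
# 2/3 on cell 5 and 1/3 on cell 6.
```

The measurement sharpens the belief: all mass that was incompatible with the observation is removed, and the remainder is reweighted.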

Markov Localization Case Study: Grid Map. Example: Museum, laser scans 1, 2, 3, 13, and 21. Courtesy of W. Burgard.

Probabilistic Map-Based Localization: Kalman Filter Localization

Kalman filter localization. Assumptions and properties.
Assumptions: linear or linearizable system; robot belief, motion model, and measurement model are affected by white Gaussian noise.
Outcome: guaranteed to be optimal; only μ and Σ are updated during the action and perception updates.

Kalman Filter Localization. Illustration: Action (ACT), Perception (SEE).

Introduction to Kalman filter theory. A Gaussian distribution is represented only by its first and second moments, the mean μ and the variance σ², and is indicated by N(μ, σ²). When the robot configuration is a vector, the distribution is a multivariate Gaussian represented by a mean vector μ and a covariance matrix Σ.

Introduction to Kalman filter theory. Applying the theorem of total probability. Let x1, x2 be two random variables which are independent and normally distributed. Let y be a function of x1, x2. What will the distribution of y be?

Introduction to Kalman filter theory. Applying the theorem of total probability. The answer is simple if f is linear. If x1, x2 are independent and normal, the output is also Gaussian: for y = a1 x1 + a2 x2,
    μ_y = a1 μ1 + a2 μ2,    σ_y² = a1² σ1² + a2² σ2².
If x1, x2 are vectors with covariances Σ1, Σ2 respectively, then for y = A1 x1 + A2 x2,
    μ_y = A1 μ1 + A2 μ2,    Σ_y = A1 Σ1 A1^T + A2 Σ2 A2^T.
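These propagation formulas can be sanity-checked against a Monte Carlo estimate for the scalar case y = a1·x1 + a2·x2; the particular coefficients and noise values below are illustrative:

```python
import random

# Analytic propagation of two independent Gaussians through a linear map
# y = a1*x1 + a2*x2: mean a1*mu1 + a2*mu2, variance a1^2*var1 + a2^2*var2.
def linear_gaussian(a1, mu1, var1, a2, mu2, var2):
    return a1 * mu1 + a2 * mu2, a1**2 * var1 + a2**2 * var2

a1, mu1, var1 = 2.0, 1.0, 0.5
a2, mu2, var2 = -1.0, 3.0, 2.0
mu_y, var_y = linear_gaussian(a1, mu1, var1, a2, mu2, var2)

# Monte Carlo check of the same quantities (gauss takes a standard deviation).
rng = random.Random(0)
samples = [a1 * rng.gauss(mu1, var1**0.5) + a2 * rng.gauss(mu2, var2**0.5)
           for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
mc_var = sum((s - mc_mean) ** 2 for s in samples) / len(samples)
```

With these numbers the analytic result is μ_y = −1 and σ_y² = 4, and the sample statistics should land close to them.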

Introduction to Kalman filter theory. Applying the Bayes rule: bel(x_t) = η p(z_t | x_t) bel'(x_t). Here, we wish to demonstrate that the product of two Gaussian functions is still a Gaussian. Let now q denote the position of the robot. Let p1(q) be the robot belief resulting from the action update (i.e., bel'(x_t)). Let p2(q) be the robot belief from the observation (i.e., p(z_t | x_t)). We wish to show that if p1 and p2 are Gaussian functions, their product is also a Gaussian.

Introduction to Kalman filter theory. Applying the Bayes rule. By formalizing this, we want to show that if we have p1(q) = N(q1, σ1²) and p2(q) = N(q2, σ2²), then their product is also Gaussian: p1(q) p2(q) ∝ N(q̄, σ²). Additionally, we want to find an expression for the mean value and variance of the new Gaussian as a function of the mean values and variances of the input variables.

Introduction to Kalman filter theory. Applying the Bayes rule. From the product of the two Gaussians we obtain
    p1(q) p2(q) ∝ exp{ −(q − q1)² / (2σ1²) − (q − q2)² / (2σ2²) }.
As we can see, the argument of this exponential is quadratic in q, hence it is a Gaussian. We now need to determine the mean value and variance that allow us to rewrite this exponential in the form N(q̄, σ²).

Introduction to Kalman filter theory. Applying the Bayes rule. By rearranging the exponential (completing the square in q), the mean value q̄ can be written as
    q̄ = (σ2² q1 + σ1² q2) / (σ1² + σ2²)
and the variance can be written as
    σ² = σ1² σ2² / (σ1² + σ2²).

Introduction to Kalman filter theory. Applying the Bayes rule. By rearranging the terms, the expressions for the mean value and variance can also be written as
    q̄ = q1 + K (q2 − q1),    σ² = σ1² − K σ1²,    with Kalman gain K = σ1² / (σ1² + σ2²).
The resulting variance is smaller than the input variances. Thus, the uncertainty of the position estimate has shrunk as a result of the observation. Even poor measurements will only increase the precision of the estimate. This is a result that we expect based on information theory.
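The two equivalent forms of the fused mean and variance (direct and Kalman-gain) can be checked numerically; the example values are illustrative:

```python
# Product of two Gaussians N(q1, var1) and N(q2, var2): the result is a
# Gaussian whose mean and variance can be written directly or via the
# Kalman gain K = var1 / (var1 + var2).
def fuse_direct(q1, var1, q2, var2):
    q = (var2 * q1 + var1 * q2) / (var1 + var2)
    var = var1 * var2 / (var1 + var2)
    return q, var

def fuse_kalman(q1, var1, q2, var2):
    K = var1 / (var1 + var2)            # Kalman gain
    return q1 + K * (q2 - q1), var1 - K * var1

q1, var1 = 0.0, 1.0    # prediction (more certain)
q2, var2 = 2.0, 4.0    # poor measurement (less certain)
qd, vd = fuse_direct(q1, var1, q2, var2)
qk, vk = fuse_kalman(q1, var1, q2, var2)
```

Both forms give q̄ = 0.4 and σ² = 0.8: the fused mean lies closer to the more certain estimate, and the fused variance is smaller than either input, so even the poor measurement tightened the estimate.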

Introduction to Kalman filter theory. Equations applied to mobile robots.

One-dimensional case.
Action update (or prediction update):
    x̂_t = f(x_{t−1}, u_t)
    σ̂_t² = F_x σ_{t−1}² F_x + F_u σ_u² F_u    (with F_x = ∂f/∂x, F_u = ∂f/∂u)
Perception update (or measurement update):
    x_t = x̂_t + K_t (z_t − ẑ_t)
    σ_t² = σ̂_t² − K_t σ̂_t²
    K_t = σ̂_t² / (σ̂_t² + σ_R²)

N-dimensional case.
Action update (or prediction update):
    x̂_t = f(x_{t−1}, u_t)
    P̂_t = F_x P_{t−1} F_x^T + F_u Q_t F_u^T
Perception update (or measurement update):
    x_t = x̂_t + K_t (z_t − ẑ_t)
    P_t = P̂_t − K_t (H P̂_t H^T + R) K_t^T
    K_t = P̂_t H^T (H P̂_t H^T + R)^{−1}

NB: the new mean value is closer to the one of the two estimates that has smaller uncertainty; the new uncertainty is smaller than the two initial uncertainties.
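Putting the one-dimensional equations together, a minimal scalar Kalman filter might look as follows, assuming a straight-line motion model x' = x + u (so F_x = F_u = 1) and a direct position measurement; all numerical values are illustrative:

```python
# One-dimensional Kalman filter: prediction with motion model x' = x + u
# and correction with a direct position measurement z = x + noise.
class Kalman1D:
    def __init__(self, x0, var0):
        self.x, self.var = x0, var0

    def predict(self, u, var_u):
        """Action update: propagate the mean, grow the variance."""
        self.x += u
        self.var += var_u

    def update(self, z, var_z):
        """Perception update: blend prediction and measurement."""
        K = self.var / (self.var + var_z)   # Kalman gain
        self.x += K * (z - self.x)
        self.var -= K * self.var

kf = Kalman1D(x0=0.0, var0=1.0)
kf.predict(u=2.0, var_u=1.0)    # robot commands a 2 m forward move
kf.update(z=2.2, var_z=0.5)     # range measurement places it at 2.2 m
```

After the prediction the variance has grown (1.0 → 2.0); after the measurement the estimate is 2.16 m with variance 0.4, tighter than both the prediction and the measurement, as the slides argue.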

Kalman Filter Localization. Markov versus Kalman localization.
Markov PROS: localization starting from any unknown position; recovers from ambiguous situations.
Markov CONS: to update the probability of all positions within the whole state space at any time requires a discrete representation of the space (grid); the required memory and computational power can thus become very large if a fine grid is used.
Kalman PROS: tracks the robot and is inherently very precise and efficient.
Kalman CONS: if the uncertainty of the robot becomes too large (e.g., collision with an object), the Kalman filter will fail and the position is definitively lost.