Localization. Howie Choset Adapted from slides by Humphrey Hu, Trevor Decker, and Brad Neuman


Localization
A general robotic task: "Where am I?" The techniques generalize to many estimation tasks:
- System parameter estimation
- Noisy signal smoothing
- Weather system modeling

Localization: Problem Definition
Goal: Estimate state given a history of observations and actions.
- State: information sufficient to predict observations
- Observation: information derived from state
- Actions: inputs that affect the state
State is important, but what is it?

State
Any parameterization that describes our system. Examples:
- Car / planar robot: minimally (x, y, ϴ), but could be (x, y, ϴ, temperature, time of day, Google stock, favorite color, etc.)
- Slot car: only a 1D problem!

Action / Control Input
Things we can do.
- Can be a discrete set: Pacman's left, right, up, down
- Or continuous inputs: helicopter throttles

Observation
Information derived from a sensor. Examples:
- Time (from a clock)
- Temperature (from a thermometer)
- Encoder counts
Observations can be good or bad (really strong vs. weak correlations with the state), e.g., the temperature in one city for estimating a whole weather system.

Absolute vs. Relative
- Absolute localization: defines state relative to a common (usually fixed) reference frame. Useful for coordination, navigation, etc.
- Relative localization: defines state relative to a local (non-shared) reference frame. Useful for exploration and displacement estimation.

Why not GPS/Vicon?
- Signal-denied environments
- Insufficient performance: accuracy, bandwidth (update rate), bias
- Receiver size

Why not odometry?
Uncertainty and error accumulation!
- Unmodeled environmental factors
- Integration errors
- Modeling errors
Bottom line: uncertainty is a part of life. We have to deal with it!

Localization: Problem Definition
Goal: Estimate state given a history of noisy observations and noisy actions.
- State: information sufficient to predict observations
- Observation: information derived from state
- Actions: inputs that affect the state

Localization: Estimate State
- Move: motion model
- Observe: observation model
(Diagram: an initial state estimate is refined by alternating motion-model and observation-model updates.)

Example
- Moving only in one dimension
- Known map of a flower garden
- Simple flower detector: beeps when you are in front of a flower
- Gaussian distribution over flower position given a beep

Example: Initially, no idea where we are

Example: First observation update

Example: New belief about location

Example: Robot moves, motion update

Example: Observation update

Example: Final belief

Differential Drive Motion Model (2-wheeled Lego robot)
- State: (x, y, ϴ), an SE(2) pose
- Actions: drive forward + turn angle, OR drive wheel 1 / drive wheel 2
- Measurements: wheel rotation ticks

Differential Drive Example
Run the same trajectory many times. They're all different! Why?
(Figure: trials 1 through n ending in different poses)

Differential Drive Example
Run a 10 cm straight trajectory many times and look at the results as a distribution.
(Histogram: number of occurrences vs. displacement d, from 0 to 20 cm)

Differential Drive Example
Run a 10 cm straight trajectory many times and look at the results as a distribution. Model the spread as Gaussian noise:
η ~ N(μ, σ²), x_{t+1} = f(x_t) + η
(Plot: probability vs. displacement d)

Differential Drive Example
Subtract out the model contribution to determine the noise component:
x_{t+1} = f(x_t, u_t) + η_t
(Plot: probability vs. error, from -10 to 10)
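The subtract-and-fit step can be sketched in a few lines. The trial data below is simulated rather than measured, and the 1.5 cm noise scale is an invented assumption:

```python
import random
import statistics

# Hypothetical trial data: command a nominal 10 cm straight move many times and
# record the displacement. The noise scale (1.5 cm) is an invented assumption.
random.seed(0)
NOMINAL = 10.0                                            # commanded displacement, cm
trials = [NOMINAL + random.gauss(0.0, 1.5) for _ in range(1000)]

# Subtract out the model contribution f(x_t) to isolate the noise eta.
errors = [d - NOMINAL for d in trials]

# Fit a Gaussian N(mu, sigma^2) to the residual errors.
mu = statistics.fmean(errors)
sigma = statistics.stdev(errors)
print(f"eta ~ N({mu:.3f}, {sigma:.3f}^2)")
```

With real data, the fitted μ captures systematic bias (e.g., one wheel slightly larger) and σ the trial-to-trial spread.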

Odometry Example
A differential drive robot experiences uncertainty in both distance traveled and heading, producing a "banana" distribution. Hard to model!

Differential Drive Sensor Model (Motion)
Recall our odometry equations: the current state equals the initial state plus the accumulated effect of the wheel-tick observations on the state.
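As a concrete stand-in for those equations, here is the standard textbook dead-reckoning update for a differential drive robot; the encoder resolution, wheel radius, and axle length are assumed values, not the course's:

```python
import math

# Assumed robot parameters (hypothetical values for a small Lego-style robot).
TICKS_PER_REV = 360        # encoder ticks per wheel revolution
WHEEL_RADIUS = 0.03        # wheel radius in meters
AXLE_LENGTH = 0.15         # distance between the wheels in meters

def integrate_odometry(pose, left_ticks, right_ticks):
    """One dead-reckoning step: new state = old state + effect of tick observations."""
    x, y, theta = pose
    # Arc length traveled by each wheel.
    sl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    sr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    ds = (sl + sr) / 2.0                # distance traveled by the robot center
    dtheta = (sr - sl) / AXLE_LENGTH    # change in heading
    # First-order update of the SE(2) pose, using the midpoint heading.
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)

pose = (0.0, 0.0, 0.0)                            # initial state
for lt, rt in [(360, 360), (0, 90), (360, 360)]:  # wheel tick observations
    pose = integrate_odometry(pose, lt, rt)
print(pose)
```

Each step folds one batch of observations into the state estimate, which is exactly why odometry errors accumulate: there is no correction term.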

A Brief Overview of PROBABILITY

Discrete Probability Distribution
Let X be the value of a die roll. X is unknown (a random variable). P(X = v) means "the probability that we sample X and it equals v."

v   P(X=v)
1   1/6
2   1/6
3   1/6
4   1/6
5   1/6
6   1/6

The probabilities sum to 1.

Discrete Probability Distribution
This time, X is a weighted die: a different distribution for the same variable.

v   P(X=v)
1   0.1
2   0.1
3   0.1
4   0.2
5   0.25
6   0.25

Discrete Probability Distribution
Consider the sum of two fair dice, X1 + X2, with P(X1 = v) = P(X2 = v) = 1/6. "And" multiplies probabilities; "or" adds them:

P(X1 + X2 = 2) = P(X1 = 1) · P(X2 = 1)
P(X1 + X2 = 3) = P(X1 = 1) · P(X2 = 2) + P(X1 = 2) · P(X2 = 1)
P(X1 + X2 = 4) = P(X1 = 1) · P(X2 = 3) + …
and so on for sums 2 through 12.
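The and/or rules above can be checked in code: the distribution of the sum is the convolution of the two dice, and it still sums to 1.

```python
from fractions import Fraction
from itertools import product

# Each fair die: P(X = v) = 1/6 for v in 1..6.
die = {v: Fraction(1, 6) for v in range(1, 7)}

# "And" multiplies probabilities, "or" adds them, so the distribution of the
# sum is a convolution of the two independent dice.
sum_dist = {}
for v1, v2 in product(die, die):
    sum_dist[v1 + v2] = sum_dist.get(v1 + v2, Fraction(0)) + die[v1] * die[v2]

print(sum_dist[7])             # the most likely sum
print(sum(sum_dist.values()))  # a distribution always sums to 1
```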

Discrete Probability Distribution
Can we always separate the probabilities? Not in general: P(X2 = hot | X1 = summer) is high, while P(X2 = hot | X1 = winter) is low. P(X2 = v | X1 = 1) means "the probability that roll 2 is v given that roll 1 is 1." Dice rolls are independent variables, so conditioning changes nothing:

v   P(X2=v | X1=1)
1   1/6
2   1/6
3   1/6
4   1/6
5   1/6
6   1/6

Discrete Probability Distribution
Using conditional probabilities allows us to easily combine distributions:
P(x, y) = P(x | y) P(y)
P(y | x) = P(x | y) P(y) / P(x)   (Bayes' theorem)
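Bayes' theorem can be exercised on the same dice. Given that the two dice sum to 3, what is the probability the first die shows 1?

```python
from fractions import Fraction

# Bayes' theorem on the dice example.
prior = {v: Fraction(1, 6) for v in range(1, 7)}   # P(x): first die
likelihood = {v: (Fraction(1, 6) if 1 <= 3 - v <= 6 else Fraction(0))
              for v in range(1, 7)}                # P(y | x): P(sum = 3 | X1 = v)

# P(y) by the law of total probability (the denominator in Bayes' theorem).
evidence = sum(likelihood[v] * prior[v] for v in prior)

# P(x | y) = P(y | x) P(x) / P(y)
posterior = {v: likelihood[v] * prior[v] / evidence for v in prior}
print(posterior[1])   # only (1,2) and (2,1) sum to 3, so this is 1/2
```

The same prior-likelihood-evidence pattern is exactly what the localization filter computes at every time step.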

Putting it Together
At time step 0:
1. The robot takes an action: x_1 = f(x_0, u_0) + η_0
2. The robot makes an observation: z_1 = h(x_1) + noise
Distribution of position at time step 1:
p(x_1 | u_0, z_1) ∝ p(z_1 | x_1) · p(x_1 | x_0, u_0) · p(x_0)
(observation model · transition model · prior)

Putting it Together
At time step 1:
1. The robot takes an action: x_2 = f(x_1, u_1) + η_1
2. The robot makes an observation: z_2 = h(x_2) + noise
Distribution of position at time step 2:
p(x_2 | u_0, z_1, u_1, z_2) ∝ p(z_2 | x_2) · p(x_2 | x_1, u_1) · p(z_1 | x_1) · p(x_1 | x_0, u_0) · p(x_0)
                            = p(z_2 | x_2) · p(x_2 | x_1, u_1) · p(x_1 | u_0, z_1)
(new transition & observation model · previous result)

Recursive Inference
At time step t:
1. The robot takes an action: x_t = f(x_{t-1}, u_{t-1}) + η_{t-1}
2. The robot makes an observation: z_t = h(x_t) + noise
Distribution of position at time step t:
p(x_t | u_0, …, u_{t-1}, z_1, …, z_t) ∝ p(z_t | x_t) · p(x_t | x_{t-1}, u_{t-1}) · p(x_{t-1} | u_0, …, u_{t-2}, z_1, …, z_{t-1})
(new transition & observation model · previous result)

Filtering Algorithm for Localization
Loop forever:
- For each possible location: apply the motion model
- For each possible location: apply the observation model
Too slow! Let's use a discrete set of hypotheses instead.
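The loop above can be sketched as a 1D grid (histogram) filter for the flower-garden example. The map, motion noise, and beep probabilities below are invented for illustration:

```python
# A minimal grid (histogram) Bayes filter for the 1D flower-garden example.
# The map, motion noise, and beep probabilities are invented for illustration.
N = 20                                   # grid cells along the garden
FLOWERS = {4, 9, 15}                     # assumed flower locations

belief = [1.0 / N] * N                   # initially, no idea where we are

def motion_update(belief, step=1, p_exact=0.8, p_over=0.1, p_under=0.1):
    """For each possible location, apply a noisy move-right motion model."""
    new = [0.0] * N
    for i, b in enumerate(belief):
        new[(i + step) % N] += p_exact * b      # moved as commanded
        new[(i + step + 1) % N] += p_over * b   # overshot by one cell
        new[i] += p_under * b                   # wheels slipped, stayed put
    return new

def observation_update(belief, beeped, p_hit=0.9, p_false=0.05):
    """For each possible location, apply the beep observation model, then normalize."""
    scaled = []
    for i, b in enumerate(belief):
        p_beep = p_hit if i in FLOWERS else p_false
        scaled.append(b * (p_beep if beeped else 1.0 - p_beep))
    total = sum(scaled)
    return [b / total for b in scaled]

belief = observation_update(belief, beeped=True)   # first beep
belief = motion_update(belief)                     # robot moves one cell
belief = observation_update(belief, beeped=False)  # no beep this time
best = max(range(N), key=lambda i: belief[i])      # most likely cell
print(best)
```

The cost is one pass over every cell per update, which is exactly why dense grids become too slow in larger state spaces.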

Sampling From the Motion Model
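Sampling from the motion model can be sketched as follows: each particle is propagated through noisy controls, and many samples of a straight drive trace out the banana-shaped cloud. The noise scales are illustrative assumptions:

```python
import math
import random

# Sampling x' ~ p(x' | x, u) for a differential-drive-style motion model.
# Noise scales below are illustrative assumptions.
random.seed(1)

def sample_motion(pose, forward, turn, d_noise=0.05, a_noise=0.02):
    """Sample a successor pose for control u = (forward, turn)."""
    x, y, theta = pose
    d = forward + random.gauss(0.0, d_noise * forward)  # noisy distance traveled
    a = turn + random.gauss(0.0, a_noise)               # noisy heading change
    theta += a
    return (x + d * math.cos(theta), y + d * math.sin(theta), theta)

# Sampling "drive 1 m straight" many times: heading noise bends the particle
# cloud along an arc, producing the banana distribution.
samples = [sample_motion((0.0, 0.0, 0.0), forward=1.0, turn=0.0)
           for _ in range(2000)]
mean_x = sum(s[0] for s in samples) / len(samples)
mean_y = sum(s[1] for s in samples) / len(samples)
print(round(mean_x, 3), round(mean_y, 3))
```

Note that sampling is easy even though writing the banana distribution down in closed form is hard; that asymmetry is what makes particle filters attractive.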

Example Revisited
Same flower-happy robot, same map. This time, track samples (particles).

Example Revisited
Initially, no idea where we are. (The particles: height represents the weight, i.e. probability or confidence, of a given particle.)

Example: First observation update

Example: First observation update; evaluate the observation model at each particle

Example: New belief

Example: New belief. Some regions now have too few particles, others too many.

Example: Resample particles. Higher-weight particles get more new particles allocated near them during the resample.

Example: Reset weights. The density of particles now encodes what the weights did previously.
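One standard way to implement the resample-and-reset pair is low-variance (systematic) resampling; this is a common textbook scheme, offered here as a sketch rather than the course's exact method:

```python
import random

# Low-variance (systematic) resampling: one random offset, then evenly spaced
# pointers into the cumulative weight distribution.
def resample(particles, weights):
    """Draw len(particles) new particles; higher-weight ones get duplicated."""
    n = len(particles)
    step = sum(weights) / n
    u = random.uniform(0.0, step)        # one random offset for all draws
    new_particles = []
    cumulative = weights[0]
    i = 0
    for _ in range(n):
        while u > cumulative:            # advance to the particle covering u
            i += 1
            cumulative += weights[i]
        new_particles.append(particles[i])
        u += step                        # evenly spaced pointers into the CDF
    # The particle density now encodes the old weights, so reset to uniform.
    return new_particles, [1.0 / n] * n

random.seed(2)
particles = [0.0, 1.0, 2.0, 3.0]
weights = [0.05, 0.05, 0.8, 0.1]
new_p, new_w = resample(particles, weights)
print(new_p, new_w)
```

Compared with drawing each particle independently, the single shared offset lowers the variance introduced by resampling.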

Example: Robot moves, motion update. Particles spread out.

Example: Observation update

Example: The estimate is the best particle.
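The whole example can be put together as a minimal end-to-end particle filter for a 1D garden. The flower positions, noise scales, sensor width, and trajectory are all invented for illustration:

```python
import math
import random

# End-to-end particle filter sketch: predict with a noisy motion model, weight
# by the beep likelihood, resample, estimate. All numbers are assumptions.
random.seed(3)
FLOWERS = [2.0, 5.0, 8.0]      # assumed flower positions along a 1D garden (m)
N = 500

def beep_likelihood(x, sigma=0.3):
    """Likelihood of hearing a beep at position x (Gaussian around each flower)."""
    return max(math.exp(-0.5 * ((x - f) / sigma) ** 2) for f in FLOWERS)

particles = [random.uniform(0.0, 10.0) for _ in range(N)]  # no idea where we are

# Assumed true story: the robot starts at a flower, then makes two 3 m hops
# between flowers, hearing a beep after each step.
for move in [0.0, 3.0, 3.0]:
    # Motion update: every particle moves with noise, so the cloud spreads out.
    particles = [p + move + random.gauss(0.0, 0.1) for p in particles]
    # Observation update: weight each particle by the beep likelihood.
    weights = [beep_likelihood(p) for p in particles]
    # Resample (multinomial, for brevity); afterwards weights are uniform again.
    particles = random.choices(particles, weights=weights, k=N)

# Estimate: the mean of the particle cloud (the slides use the best particle).
estimate = sum(particles) / N
print(round(estimate, 2))
```

After the first beep the belief is multimodal (a particle cluster at every flower), and the two subsequent moves plus beeps prune the inconsistent clusters, leaving a single mode near the last flower.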

Discrete Bayes Filter