Robotics Lecture 4: Probabilistic Robotics
Andrew Davison, Department of Computing, Imperial College London
See the course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up-to-date information.

Review: Sensors Practical from Lecture 3
Touch sensor (returns a yes/no state): use to detect collision and trigger an avoidance action.
Sonar sensor (returns a depth value in cm): can be used for smooth servoing behaviour with proportional gain.
Both are examples of negative feedback.

Review: Wall Following with Sonar
Use a sideways-looking sonar to measure the distance z to the wall. Use velocity control in a while loop running at, for instance, 20 Hz. To maintain a desired distance d, set the difference between the right and left wheel velocities proportional to the difference between z and d:
v_R - v_L = K_p (z - d)
Symmetric behaviour about a constant offset v_C can therefore be achieved using:
v_R = v_C + (1/2) K_p (z - d)
v_L = v_C - (1/2) K_p (z - d)
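As a concrete illustration, here is a minimal Python sketch of this proportional controller. The functions read_sonar_cm() and set_wheel_speeds() are hypothetical placeholders for whatever robot API is in use, and the gain, offset and desired distance are arbitrary values to be tuned.

```python
import time

K_P = 0.5          # proportional gain (tune experimentally)
V_C = 10.0         # constant forward speed offset
D_DESIRED = 30.0   # desired distance d to the wall, in cm

def wall_follow_step():
    z = read_sonar_cm()                    # sideways sonar depth reading (cm); hypothetical robot API
    correction = 0.5 * K_P * (z - D_DESIRED)
    v_right = V_C + correction             # v_R = v_C + (1/2) K_p (z - d)
    v_left = V_C - correction              # v_L = v_C - (1/2) K_p (z - d)
    set_wheel_speeds(v_left, v_right)      # hypothetical robot API

while True:                                # control loop at roughly 20 Hz
    wall_follow_step()
    time.sleep(0.05)
```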

Review: Wall Following with Sonar
There is a problem if the angle between the robot's direction and the wall gets too large, because the sonar doesn't measure the perpendicular distance. Solutions: a ring of sonar sensors would be the most straightforward; perhaps a clever combination of measurements from different times? A simple way to do better is to mount the sonar forward of the wheels. This couples rotation and the distance from the wall.

Probabilistic Localisation
Over the next two weeks we will aim beyond local reactive behaviours towards reliable long-term navigation, via localisation relative to a map.
[Figure: a map with labelled landmarks A-G and the robot at position (x, y, θ) in the frame O.]

Probabilistic Robotics
Problem: simple sensing/action procedures can be locally effective but are limited in complicated real-world problems; we need longer-term representations and consistent scene models. Classical AI approaches to long-term estimation, based on logical reasoning about true/false statements, fall down when presented with real-world data. Why? Advanced sensors don't lend themselves to straightforward analysis the way bump and light sensors do. All information which a robot receives is uncertain. A probabilistic approach acknowledges uncertainty and uses models to abstract useful information from data. Our goal is an incrementally updated probabilistic estimate of the position of the robot relative to the map.

Uncertainty in Robotics
Initial position estimate: x = 3.34 m, σ_x = 0.10 m
Motion estimate: m = 1.00 m, σ_m = 0.05 m
Feature depth measurement: z = 0.80 m, σ_z = 0.02 m
Every robot action is uncertain. Every sensor measurement is uncertain. When we combine actions and measurements and want to estimate the state of a robot, the state estimate will be uncertain. Usually we will start with some uncertain estimate of the state of the robot, then take some action and receive new information from sensors. We need to update the uncertain state estimate in response to this.

Probabilistic Inference
[Figure: Bayesian network with robot states x_0 ... x_3, control inputs u_1 ... u_3, measurements z_1 ... z_8 and map features y_1 ... y_3.]
What is my state and that of the world around me? Prior knowledge is combined with new measurements; this is most generally modelled as a Bayesian Network: a series of weighted combinations of old and new information. Sensor fusion is the general process of combining data from many different sources into useful estimates. This composite state estimate can then be used to decide on the robot's next action.

Bayesian Probabilistic Inference
"Bayesian" has come to denote a certain view of the meaning of probability theory: probability as a measure of subjective belief. Probabilities describe our state of knowledge; they have nothing to do with randomness in the world.
Bayes' Rule relates probabilities of discrete statements:
P(X, Z) = P(Z|X) P(X) = P(X|Z) P(Z)
P(X|Z) = P(Z|X) P(X) / P(Z)
Here P(X) is the prior; P(Z|X) the likelihood; P(X|Z) the posterior; P(Z) is sometimes called the marginal likelihood.
We use Bayes' Rule to incrementally digest new information from sensors about a robot's state. It is straightforward to use for discrete inference, where X and Z each take values which are one of several labels, such as the identity of the room a robot is in.
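A minimal worked example of this kind of discrete inference, with made-up numbers: the robot is in one of three rooms, and a sensor reports "door detected" with a different likelihood in each room.

```python
# Discrete Bayes update: posterior proportional to likelihood times prior.
prior = {"kitchen": 0.5, "corridor": 0.3, "lab": 0.2}       # P(X)
likelihood = {"kitchen": 0.1, "corridor": 0.7, "lab": 0.3}  # P(Z = door | X)

unnormalised = {room: likelihood[room] * prior[room] for room in prior}
evidence = sum(unnormalised.values())                        # P(Z)
posterior = {room: p / evidence for room, p in unnormalised.items()}  # P(X | Z)
print(posterior)   # {'kitchen': 0.156..., 'corridor': 0.656..., 'lab': 0.187...}
```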

Probability Distributions: Discrete and Continuous
Discrete probabilistic inference generalises to large numbers of possible states as we make the bin size smaller and smaller. A continuous Probability Density Function p(x) is the limiting case as the widths of the bins in a discrete histogram tend to zero.
[Figure: histograms of probability density against robot position with successively smaller bins, tending towards a smooth continuous density.]
The probability that a continuous parameter lies in the range a to b is given by the area under the curve:
P(a ≤ x ≤ b) = ∫_a^b p(x) dx
But a generic high-resolution representation of probability density is very expensive in terms of memory and computation.
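A small numerical sketch of this area-under-the-curve calculation, assuming (purely for illustration) a Gaussian density over robot position and approximating the integral with a midpoint Riemann sum:

```python
from math import exp, pi, sqrt

def p(x, mu=13.22, sigma=0.02):
    # illustrative Gaussian density over robot position (metres)
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sqrt(2 * pi) * sigma)

a, b, n = 13.20, 13.24, 1000
dx = (b - a) / n
prob = sum(p(a + (i + 0.5) * dx) * dx for i in range(n))  # approximates the integral of p(x) from a to b
print(prob)   # ~0.68, since [a, b] spans one standard deviation either side of the mean
```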

Probability Representations: Gaussians
[Figure: a wide Gaussian prior multiplied by a likelihood curve gives a tighter posterior.]
p(x) = (1 / (√(2π) σ)) exp( -(x - µ)² / (2σ²) )
Explicit Gaussian (or normal) distributions often represent the uncertainty in sensor measurements very well. A wide Gaussian prior multiplied by a likelihood curve produces a posterior which is tighter than either. The product of two Gaussians is always another Gaussian.
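A minimal sketch of this prior-times-likelihood fusion for scalar Gaussians, using the standard closed-form product; the numerical values are illustrative only.

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    # Product of two Gaussians: precisions (1/variance) add,
    # and the means combine weighted by precision.
    var_post = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu_post = var_post * (mu1 / var1 + mu2 / var2)
    return mu_post, var_post

# Wide prior on position, tighter measurement: the posterior is tighter than either.
mu, var = fuse_gaussians(3.34, 0.10 ** 2, 3.20, 0.05 ** 2)
print(mu, var ** 0.5)   # posterior mean ~3.23 m, posterior sigma ~0.045 m
```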

Probability Representations: Particles
[Figure: prior and posterior distributions, each represented by a cloud of weighted samples.]
Here a probability distribution is represented by a finite set of weighted samples of the state {x_i, w_i}, where Σ_i w_i = 1. The big advantages are simplicity and the ability to represent any shape of distribution, including multi-modal distributions (with more than one peak) in ambiguous situations. The disadvantages are a poor ability to represent the detailed shape of a distribution when the number of particles is low; if we increase the number of particles to improve this, the computational cost can be very high.

Probabilistic Localisation
The robot has a map of its environment in advance. The only uncertain thing is the position of the robot.
[Figure: coordinate frame R carried with the robot (x left, y up, z forward) and its camera frame, at position r and orientation θ within the fixed world coordinate frame W.]
The robot stores and updates a probability distribution representing its uncertain position estimate.

Monte Carlo Localisation (Particle Filter)
A cloud of particles represents the uncertain robot state: more particles in a region means more probability that the robot is there. (Dieter Fox et al. 1999, using sonar. See the animated gif at http://www.doc.ic.ac.uk/~ajd/robotics/roboticsresources/montecarlolocalization.gif.)

The Particle Distribution
A particle is a point estimate x_i of the state (position) of the robot with a weight w_i:
x_i = (x_i, y_i, θ_i)
The full particle set is {x_i, w_i}, for i = 1 to N. A typical value might be N = 100. All the weights should add up to 1; if so, the distribution is said to be normalised:
Σ_{i=1}^{N} w_i = 1
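A minimal sketch of such a particle set in Python (numpy assumed), with the initial position taken, purely for illustration, to be exactly (0, 0, 0):

```python
import numpy as np

N = 100
particles = np.zeros((N, 3))           # each row holds one particle state (x_i, y_i, theta_i)
weights = np.full(N, 1.0 / N)          # start with uniform weights

assert np.isclose(weights.sum(), 1.0)  # the weights sum to 1, so the set is normalised
```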

Displaying a Particle Set
We can visualise the particle set by plotting the x and y coordinates as a set of dots. It is more difficult to visualise the θ angular distribution (perhaps with arrows?), but we can get the main idea just from the linear components.
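One possible way to do this with matplotlib, assuming the particles array from the sketch above: dots for position, short arrows indicating heading.

```python
import numpy as np
import matplotlib.pyplot as plt

plt.scatter(particles[:, 0], particles[:, 1], s=5)     # x, y positions as dots
plt.quiver(particles[:, 0], particles[:, 1],
           np.cos(particles[:, 2]), np.sin(particles[:, 2]),
           angles='xy', width=0.002)                    # arrows indicating theta
plt.xlabel('x')
plt.ylabel('y')
plt.axis('equal')
plt.show()
```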

Steps in Particle Filtering
These steps are repeated every time the robot moves a little and makes measurements (a skeleton of the loop is sketched below):
1. Motion Prediction based on Proprioceptive Sensors.
2. Measurement Update based on Outward-Looking Sensors.
3. Normalisation.
4. Resampling.
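A skeleton of one iteration of this loop, written in terms of helper functions sketched on the following slides; resample is a placeholder here, since resampling is covered later.

```python
def particle_filter_step(particles, weights, control, measurement):
    particles = motion_predict(particles, control)                  # 1. motion prediction
    weights = measurement_update(particles, weights, measurement)   # 2. measurement update
    weights = normalise(weights)                                    # 3. normalisation
    particles, weights = resample(particles, weights)               # 4. resampling
    return particles, weights
```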

Motion Prediction
We know that uncertainty grows during blind motion. So when the robot makes a movement, the particle distribution needs to shift its mean position but also spread out. We achieve this by passing the state part of each particle through a function which has a deterministic component and a random component.

Motion Prediction
During a straight-line period of motion of distance D:
x_new = x + (D + e) cos θ
y_new = y + (D + e) sin θ
θ_new = θ + f
During a pure rotation of angle α:
x_new = x
y_new = y
θ_new = θ + α + g
Here e, f and g are zero-mean noise terms, i.e. random numbers, typically with a Gaussian distribution. We generate a different sample for each particle, which causes the particles to spread out. Watch out for angular wrap-around, i.e. make sure that θ values are always in the range 0 to 360 degrees. How do we set the size of e, f and g? Make some initial estimates, then adjust by looking at the particle spread over an extended motion and matching the distribution to experiments (see practical sheet).
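A sketch of these prediction equations applied to the whole particle array (numpy assumed, angles in radians here, and the noise standard deviations are placeholder values to be tuned as described above):

```python
import numpy as np

def motion_predict_straight(particles, D, sigma_e=1.0, sigma_f=0.02):
    N = len(particles)
    e = np.random.normal(0.0, sigma_e, N)                  # noise on distance travelled
    f = np.random.normal(0.0, sigma_f, N)                  # noise on heading
    particles[:, 0] += (D + e) * np.cos(particles[:, 2])   # x_new = x + (D + e) cos(theta)
    particles[:, 1] += (D + e) * np.sin(particles[:, 2])   # y_new = y + (D + e) sin(theta)
    particles[:, 2] = (particles[:, 2] + f) % (2 * np.pi)  # theta_new = theta + f, with wrap-around
    return particles

def motion_predict_rotation(particles, alpha, sigma_g=0.02):
    g = np.random.normal(0.0, sigma_g, len(particles))     # noise on the rotation
    particles[:, 2] = (particles[:, 2] + alpha + g) % (2 * np.pi)
    return particles
```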

Measurement Updates
A measurement update consists of applying Bayes' Rule to each particle; remember:
P(X|Z) = P(Z|X) P(X) / P(Z)
So when we obtain a measurement z, we update the weight of each particle as follows:
w_i(new) = P(z|x_i) w_i
remembering that the denominator in Bayes' Rule is a constant factor which we do not need to calculate, because it will later be removed by normalisation. P(z|x_i) is the likelihood of particle i: the probability of getting measurement z given that particle i represents the true state.
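A sketch of the weight update and the normalisation step, assuming a likelihood(z, x) function such as the one sketched on the next slide:

```python
import numpy as np

def measurement_update(particles, weights, z, likelihood):
    # Multiply each weight by the likelihood of the measurement given that particle's state.
    return weights * np.array([likelihood(z, x) for x in particles])

def normalise(weights):
    total = weights.sum()
    if total == 0:
        return np.full(len(weights), 1.0 / len(weights))   # degenerate case: reset to uniform
    return weights / total
```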

Likelihood Function
The form of a likelihood function comes from a probabilistic model of the outward-looking sensor. Having calibrated a sensor and understood the uncertainty in its measurements, we can build a probabilistic measurement model of how it works. This will be a probability distribution (specifically a likelihood function) of the form:
P(z|x_i)
Such a distribution will often have a Gaussian shape.
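For example, a Gaussian-shaped likelihood for a depth sensor might look like the sketch below. Here predict_measurement(x) is a hypothetical function giving the depth the sensor would return if x were the true robot state, and sigma_z is a placeholder for the calibrated measurement noise; the normalising constant is omitted since it cancels during normalisation.

```python
import numpy as np

def likelihood(z, x, sigma_z=0.02):
    z_predicted = predict_measurement(x)    # hypothetical: expected depth for particle state x
    return np.exp(-(z - z_predicted) ** 2 / (2 * sigma_z ** 2))
```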