Introduction to Mobile Robotics Probabilistic Robotics


Introduction to Mobile Robotics: Probabilistic Robotics. Wolfram Burgard.

Probabilistic Robotics. Key idea: explicit representation of uncertainty, using the calculus of probability theory. Perception = state estimation; action = utility optimization.

Axioms of Probability Theory. P(A) denotes the probability that proposition A is true.
1. 0 ≤ P(A) ≤ 1
2. P(True) = 1, P(False) = 0
3. P(A ∨ B) = P(A) + P(B) − P(A ∧ B)

A Closer Look at Axiom 3. [Venn diagram: the overlap A ∧ B is counted twice in P(A) + P(B), so it is subtracted once: P(A ∨ B) = P(A) + P(B) − P(A ∧ B).]

Using the Axioms. P(A ∨ ¬A) = P(A) + P(¬A) − P(A ∧ ¬A), so 1 = P(A) + P(¬A) − 0, giving P(¬A) = 1 − P(A).

Discrete Random Variables. X denotes a random variable. X can take on a countable number of values in {x_1, x_2, …, x_n}. P(X = x_i), or P(x_i), is the probability that the random variable X takes on value x_i. P(·) is called a probability mass function.

Continuous Random Variables. X takes on values in the continuum. p(X = x), or p(x), is a probability density function. [Figure: an example density p(x) plotted over x.]

Probability Sums up to One. Discrete case: Σ_x P(x) = 1. Continuous case: ∫ p(x) dx = 1.

Joint and Conditional Probability. P(X = x and Y = y) = P(x, y). If X and Y are independent, then P(x, y) = P(x) P(y). P(x | y) is the probability of x given y: P(x | y) = P(x, y) / P(y), equivalently P(x, y) = P(x | y) P(y). If X and Y are independent, then P(x | y) = P(x).
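These identities can be checked on a small joint distribution. The weather/temperature table below is a made-up example, not from the slides:

```python
# A made-up joint distribution P(X, Y) over weather and temperature,
# used only to illustrate the identities above.
P_xy = {
    ("sun", "warm"): 0.4,
    ("sun", "cold"): 0.1,
    ("rain", "warm"): 0.1,
    ("rain", "cold"): 0.4,
}

# Marginals, obtained by summing out the other variable.
P_x = {x: sum(p for (xi, _), p in P_xy.items() if xi == x) for x in ("sun", "rain")}
P_y = {y: sum(p for (_, yi), p in P_xy.items() if yi == y) for y in ("warm", "cold")}

# Conditional probability: P(x | y) = P(x, y) / P(y).
def cond(x, y):
    return P_xy[(x, y)] / P_y[y]

# Product rule: P(x, y) = P(x | y) P(y) holds by construction.
assert abs(P_xy[("sun", "warm")] - cond("sun", "warm") * P_y["warm"]) < 1e-12

# X and Y are NOT independent here: P(sun, warm) differs from P(sun) P(warm).
print(P_xy[("sun", "warm")], P_x["sun"] * P_y["warm"])  # 0.4 vs 0.25
```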

Law of Total Probability. Discrete case: P(x) = Σ_y P(x | y) P(y). Continuous case: p(x) = ∫ p(x | y) p(y) dy.

Marginalization. Discrete case: P(x) = Σ_y P(x, y). Continuous case: p(x) = ∫ p(x, y) dy.

Bayes Formula. P(x | y) = P(y | x) P(x) / P(y) = (likelihood · prior) / evidence.

Normalization. P(x | y) = η P(y | x) P(x), with η = 1 / P(y) = 1 / Σ_x P(y | x) P(x). Algorithm: for all x, compute aux(x) = P(y | x) P(x); set η = 1 / Σ_x aux(x); for all x, set P(x | y) = η aux(x).
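A minimal Python sketch of the normalization trick for a discrete state space (function and variable names are mine, not from the slides). The point is that P(y) never has to be modeled explicitly; it falls out as the sum of the unnormalized products:

```python
def bayes_posterior(prior, likelihood):
    """Compute P(x | y) = eta * P(y | x) * P(x) over a discrete state space.

    prior:      dict mapping state x -> P(x)
    likelihood: dict mapping state x -> P(y | x) for the observed y
    """
    # Unnormalized posterior: aux[x] = P(y | x) * P(x)
    aux = {x: likelihood[x] * prior[x] for x in prior}
    # eta = 1 / P(y), where P(y) comes from the law of total probability.
    eta = 1.0 / sum(aux.values())
    return {x: eta * aux[x] for x in aux}
```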

Bayes Rule with Background Knowledge. P(x | y, z) = P(y | x, z) P(x | z) / P(y | z).

Conditional Independence. P(x, y | z) = P(x | z) P(y | z). Equivalent to P(x | z) = P(x | z, y) and P(y | z) = P(y | z, x). But this does not necessarily mean P(x, y) = P(x) P(y) (independence / marginal independence).
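A small numeric check of that last point; the distribution is a made-up example, not from the slides. The joint P(x, y, z) factorizes given z by construction, yet the x-y marginal does not factorize:

```python
# P(z), plus x and y conditionally independent given z (values chosen for illustration).
P_z = {0: 0.5, 1: 0.5}
P_x1_given_z = {0: 0.9, 1: 0.1}   # P(x = 1 | z)
P_y1_given_z = {0: 0.9, 1: 0.1}   # P(y = 1 | z)

def p_xyz(x, y, z):
    """Joint built as P(z) P(x | z) P(y | z), i.e. conditionally independent given z."""
    px = P_x1_given_z[z] if x else 1 - P_x1_given_z[z]
    py = P_y1_given_z[z] if y else 1 - P_y1_given_z[z]
    return P_z[z] * px * py

# Marginal P(x = 1, y = 1) versus the product P(x = 1) * P(y = 1):
p_x1y1 = sum(p_xyz(1, 1, z) for z in P_z)
p_x1 = sum(p_xyz(1, y, z) for y in (0, 1) for z in P_z)
p_y1 = sum(p_xyz(x, 1, z) for x in (0, 1) for z in P_z)

# Conditionally independent given z, yet NOT marginally independent:
print(round(p_x1y1, 2), round(p_x1 * p_y1, 2))  # 0.41 0.25
```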

Simple Example of State Estimation. Suppose a robot obtains measurement z. What is P(open | z)?

Causal vs. Diagnostic Reasoning. P(open | z) is diagnostic; P(z | open) is causal. In some situations, causal knowledge is easier to obtain (count frequencies!). Bayes rule allows us to use causal knowledge: P(open | z) = P(z | open) P(open) / P(z).

Example. P(z | open) = 0.6, P(z | ¬open) = 0.3, P(open) = P(¬open) = 0.5.
P(open | z) = P(z | open) P(open) / (P(z | open) P(open) + P(z | ¬open) P(¬open)) = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 0.3 / (0.3 + 0.15) ≈ 0.67.
z raises the probability that the door is open.
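The arithmetic of this example, transcribed directly into Python:

```python
# Sensor model and prior from the door example above.
p_z_open, p_z_not_open = 0.6, 0.3
p_open = p_not_open = 0.5

# Bayes rule, with the law of total probability in the denominator.
p_open_given_z = (p_z_open * p_open) / (p_z_open * p_open + p_z_not_open * p_not_open)

print(round(p_open_given_z, 2))  # 0.67 -- z raises the probability the door is open
```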

Combining Evidence. Suppose our robot obtains another observation z_2. How can we integrate this new information? More generally, how can we estimate P(x | z_1, …, z_n)?

Recursive Bayesian Updating. Markov assumption: z_n is independent of z_1, …, z_{n−1} if we know x. Then P(x | z_1, …, z_n) = η P(z_n | x) P(x | z_1, …, z_{n−1}): each new measurement multiplies the current belief by its likelihood and renormalizes.

Example: Second Measurement. P(z_2 | open) = 0.25, P(z_2 | ¬open) = 0.3, and P(open | z_1) = 2/3 from the first measurement.
P(open | z_2, z_1) = (0.25 · 2/3) / (0.25 · 2/3 + 0.3 · 1/3) = 0.625.
z_2 lowers the probability that the door is open.
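The same recursive update in Python, with the posterior after the first measurement, 2/3, acting as the new prior:

```python
# Posterior after the first measurement, now acting as the prior.
p_open, p_not_open = 2 / 3, 1 / 3

# Sensor model for the second measurement z2.
p_z2_open, p_z2_not_open = 0.25, 0.3

# Recursive update: multiply the current belief by the new likelihood, renormalize.
num = p_z2_open * p_open
p_open_after = num / (num + p_z2_not_open * p_not_open)

print(round(p_open_after, 3))  # 0.625 -- z2 lowers the probability the door is open
```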

Actions. Often the world is dynamic: actions carried out by the robot, actions carried out by other agents, or simply the passing of time change the world. How can we incorporate such actions?

Typical Actions. The robot turns its wheels to move; the robot uses its manipulator to grasp an object; plants grow over time. Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty.

Modeling Actions. To incorporate the outcome of an action u into the current belief, we use the conditional pdf P(x | u, x′). This term specifies the probability that executing u changes the state from x′ to x.

Example: Closing the door.

State Transitions. P(x | u, x′) for u = "close door":
P(closed | u, open) = 0.9, P(open | u, open) = 0.1;
P(closed | u, closed) = 1, P(open | u, closed) = 0.
If the door is open, the action "close door" succeeds in 90% of all cases.

Integrating the Outcome of Actions. Continuous case: P(x | u) = ∫ P(x | u, x′) P(x′ | u) dx′. Discrete case: P(x | u) = Σ_{x′} P(x | u, x′) P(x′ | u). We make an independence assumption to get rid of the u in the second factor in the sum: P(x′ | u) = P(x′).

Example: The Resulting Belief. With the prior Bel(open) = 5/8, Bel(closed) = 3/8 (the posterior after both measurements):
P(closed | u) = 0.9 · 5/8 + 1 · 3/8 = 15/16;
P(open | u) = 0.1 · 5/8 + 0 · 3/8 = 1/16.
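A sketch of this prediction step in Python, taking the measurement posterior Bel(open) = 5/8 as the prior and the "close door" transition model from the state-transitions slide:

```python
# Belief before the action (posterior after the two door measurements).
bel = {"open": 5 / 8, "closed": 3 / 8}

# Transition model P(x | u = "close door", x'), keyed as (new_state, old_state).
P_trans = {
    ("closed", "open"): 0.9, ("open", "open"): 0.1,
    ("closed", "closed"): 1.0, ("open", "closed"): 0.0,
}

# Prediction: Bel'(x) = sum over x' of P(x | u, x') * Bel(x')
bel_after = {
    x: sum(P_trans[(x, x_prev)] * bel[x_prev] for x_prev in bel)
    for x in bel
}

print(bel_after)  # {'open': 0.0625, 'closed': 0.9375}, i.e. 1/16 and 15/16
```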

Bayes Filters: Framework. Given: a stream of observations z and action data u, d_t = {u_1, z_1, …, u_t, z_t}; a sensor model P(z | x); an action model P(x | u, x′); and a prior probability of the system state, P(x). Wanted: an estimate of the state X of the dynamical system. The posterior of the state is also called the belief: Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t).

Markov Assumption. Measurements depend only on the current state, p(z_t | x_{0:t}, z_{1:t−1}, u_{1:t}) = p(z_t | x_t), and the state depends only on the previous state and action, p(x_t | x_{0:t−1}, z_{1:t−1}, u_{1:t}) = p(x_t | x_{t−1}, u_t). Underlying assumptions: static world, independent noise, perfect model (no approximation errors).

Bayes Filters. Notation: z = observation, u = action, x = state.
Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)
= η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)   (Bayes)
= η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)   (Markov)
= η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, u_t) dx_{t−1}   (Total prob.)
= η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, u_t) dx_{t−1}   (Markov)
= η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) Bel(x_{t−1}) dx_{t−1}   (Markov)

Bayes Filter Algorithm. Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) Bel(x_{t−1}) dx_{t−1}
1. Algorithm Bayes_filter(Bel(x), d):
2. η = 0
3. If d is a perceptual data item z then
4.   For all x do
5.     Bel′(x) = P(z | x) Bel(x)
6.     η = η + Bel′(x)
7.   For all x do
8.     Bel′(x) = η⁻¹ Bel′(x)
9. Else if d is an action data item u then
10.   For all x do
11.     Bel′(x) = Σ_{x′} P(x | u, x′) Bel(x′)
12. Return Bel′(x)
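A discrete-state sketch of this algorithm in Python (the dictionary representation and all names are mine, not from the slides):

```python
def bayes_filter(bel, d, sensor_model, action_model):
    """One step of the discrete Bayes filter.

    bel:          dict mapping state -> Bel(x)
    d:            ("z", z) for a perceptual item, ("u", u) for an action item
    sensor_model: sensor_model(z, x) = P(z | x)
    action_model: action_model(x, u, x_prev) = P(x | u, x_prev)
    """
    kind, value = d
    if kind == "z":
        # Correction: Bel'(x) = eta * P(z | x) * Bel(x)
        unnorm = {x: sensor_model(value, x) * bel[x] for x in bel}
        eta = sum(unnorm.values())
        return {x: p / eta for x, p in unnorm.items()}
    # Prediction: Bel'(x) = sum over x' of P(x | u, x') * Bel(x')
    return {x: sum(action_model(x, value, xp) * bel[xp] for xp in bel)
            for x in bel}
```

Feeding this filter the door example (measurement z_1, then z_2, then the close-door action) reproduces the beliefs 2/3, 5/8 = 0.625, and finally 15/16 for the door being closed.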

Bayes Filters are Familiar! Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) Bel(x_{t−1}) dx_{t−1} underlies Kalman filters, particle filters, hidden Markov models, dynamic Bayesian networks, and partially observable Markov decision processes (POMDPs).

Probabilistic Localization

Summary. Bayes rule allows us to compute probabilities that are hard to assess otherwise. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence. Bayes filters are a probabilistic tool for estimating the state of dynamic systems.