Probabilistic Robotics. Slides from Autonomous Robots (Siegwart and Nourbakhsh), Chapter 5, and Probabilistic Robotics (S. Thrun et al.).


Today (CS 685)
- Overview of probability
- Representing uncertainty
- Propagation of uncertainty
- Bayes Rule
- Application to Localization and Mapping

Probabilistic Robotics
Key idea: Explicit representation of uncertainty using the calculus of probability theory!
- Perception = state estimation
- Action = utility optimization

Axioms of Probability Theory
Pr(A) denotes the probability that proposition A is true.
- 0 ≤ Pr(A) ≤ 1
- Pr(True) = 1, Pr(False) = 0
- Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)

A Closer Look at Axiom 3
  Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)
[Venn diagram: events A and B overlap in A ∧ B, which would otherwise be counted twice.]

Syntax
- Basic element: random variable.
- Similar to propositional logic: possible worlds are defined by assignments of values to random variables.
- Boolean random variables, e.g., Cavity (do I have a cavity?).
- Discrete random variables, e.g., Weather is one of <sunny, rainy, cloudy, snow>.
- Domain values must be exhaustive and mutually exclusive.
- Elementary propositions are constructed by assigning a value to a random variable: e.g., Weather = sunny, Cavity = false (abbreviated as ¬cavity).
- Complex propositions are formed from elementary propositions and standard logical connectives, e.g., Weather = sunny ∨ Cavity = false.

Syntax
- Atomic event: a complete specification of the state of the world about which the agent is uncertain. E.g., if the world consists of only two Boolean variables, Cavity and Toothache, then there are 4 distinct atomic events:
    Cavity = false ∧ Toothache = false
    Cavity = false ∧ Toothache = true
    Cavity = true ∧ Toothache = false
    Cavity = true ∧ Toothache = true
- Atomic events are mutually exclusive and exhaustive.

Prior probability
- Prior or unconditional probabilities of propositions, e.g., P(Cavity = true) = 0.1 and P(Weather = sunny) = 0.72, correspond to belief prior to the arrival of any new evidence.
- A probability distribution gives values for all possible assignments: P(Weather) = <0.72, 0.1, 0.08, 0.1> (normalized, i.e., sums to 1).
- The joint probability distribution for a set of random variables gives the probability of every atomic event on those random variables. P(Weather, Cavity) is a 4 × 2 matrix of values:

    Weather         sunny   rainy   cloudy   snow
    Cavity = true   0.144   0.02    0.016    0.02
    Cavity = false  0.576   0.08    0.064    0.08

Joint Distribution
- Every question about the domain can be answered from the joint probability distribution.
- Example of a joint distribution:

    Weather         sunny   rainy   cloudy   snow
    Cavity = true   0.144   0.02    0.016    0.02
    Cavity = false  0.576   0.08    0.064    0.08

Conditional probability
- Definition of conditional probability: P(a | b) = P(a ∧ b) / P(b)
- The product rule gives an alternative formulation: P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a)
- A general version holds for whole distributions, e.g., P(Weather, Cavity) = P(Weather | Cavity) P(Cavity). (View this as a set of 4 × 2 equations, not matrix multiplication.)
- The chain rule is derived by successive application of the product rule:
    P(X1, ..., Xn) = P(X1, ..., Xn−1) P(Xn | X1, ..., Xn−1)
                   = P(X1, ..., Xn−2) P(Xn−1 | X1, ..., Xn−2) P(Xn | X1, ..., Xn−1)
                   = ...
                   = ∏_{i=1}^{n} P(Xi | X1, ..., Xi−1)

Inference by enumeration
- Start with the joint probability distribution:

                   toothache           ¬toothache
                   catch    ¬catch     catch    ¬catch
    cavity         0.108    0.012      0.072    0.008
    ¬cavity        0.016    0.064      0.144    0.576

- Given a proposition, calculate its probability.
- For any proposition φ, sum the atomic events where it is true: P(φ) = Σ_{ω : ω ⊨ φ} P(ω)

Inference by enumeration
- Start with the joint probability distribution.
- For any proposition φ, sum the atomic events where it is true: P(φ) = Σ_{ω : ω ⊨ φ} P(ω)
- P(toothache) = 0.108 + 0.012 + 0.016 + 0.064 = 0.2

Inference by enumeration
- Start with the joint probability distribution.
- Can also compute conditional probabilities:
    P(¬cavity | toothache) = P(¬cavity ∧ toothache) / P(toothache)
                           = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064) = 0.4

Discrete Random Variables
- X denotes a random variable.
- X can take on a countable number of values in {x1, x2, ..., xn}.
- P(X = xi), or P(xi) for short, is the probability that the random variable X takes on value xi.
- P(·) is called the probability mass function.
- E.g., recognize which room your robot is in:
    P(Room) = <0.7, 0.2, 0.08, 0.02>
  This is just shorthand for P(Room = office) = 0.7, P(Room = kitchen) = 0.2, P(Room = bedroom) = 0.08, P(Room = corridor) = 0.02.
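The room distribution above can be written as a small probability mass function. A minimal Python sketch (not part of the original slides), using the numbers from this slide:

```python
# Probability mass function for the room example:
# P(Room) = <0.7, 0.2, 0.08, 0.02> over office, kitchen, bedroom, corridor.
P_room = {"office": 0.7, "kitchen": 0.2, "bedroom": 0.08, "corridor": 0.02}

# A valid pmf is non-negative and sums to 1.
assert all(p >= 0 for p in P_room.values())
assert abs(sum(P_room.values()) - 1.0) < 1e-9

print(P_room["office"])  # 0.7
```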

Continuous Random Variables
- X takes on values in the continuum.
- p(X = x), or p(x) for short, is a probability density function.
- E.g., Pr(x ∈ (a, b)) = ∫_a^b p(x) dx

Joint and Conditional Probability
- P(X = x and Y = y) = P(x, y)
- If X and Y are independent, then P(x, y) = P(x) P(y).
- P(x | y) is the probability of x given y:
    P(x | y) = P(x, y) / P(y)
    P(x, y) = P(x | y) P(y)
- If X and Y are independent, then P(x | y) = P(x) (verify using the definitions of conditional probability and independence).

Law of Total Probability, Marginals
Discrete case:
    Σ_x P(x) = 1
    Marginalization: P(x) = Σ_y P(x, y)
    Law of total probability: P(x) = Σ_y P(x | y) P(y)
Continuous case:
    ∫ p(x) dx = 1
    p(x) = ∫ p(x, y) dy
    p(x) = ∫ p(x | y) p(y) dy
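Marginalization can be checked numerically. A sketch (not in the original slides) that marginalizes Cavity out of the Weather/Cavity joint table from earlier and recovers P(Weather = sunny) = 0.72:

```python
# Joint distribution P(Weather, Cavity) from the earlier slide.
joint = {
    ("sunny", True): 0.144, ("rainy", True): 0.02,
    ("cloudy", True): 0.016, ("snow", True): 0.02,
    ("sunny", False): 0.576, ("rainy", False): 0.08,
    ("cloudy", False): 0.064, ("snow", False): 0.08,
}

# Marginalization: P(weather) = sum over cavity of P(weather, cavity).
def marginal_weather(w):
    return sum(p for (weather, _), p in joint.items() if weather == w)

print(round(marginal_weather("sunny"), 2))  # 0.72
```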

Bayes Formula
    P(x | y) = P(y | x) P(x) / P(y) = (likelihood · prior) / evidence

Normalization
    P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x)
    η = 1 / P(y) = 1 / Σ_x P(y | x) P(x)
Algorithm:
    ∀x: aux_{x|y} = P(y | x) P(x)
    η = 1 / Σ_x aux_{x|y}
    ∀x: P(x | y) = η aux_{x|y}
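The normalization algorithm above avoids ever computing P(y) explicitly. A minimal sketch (the two-state example values are hypothetical, not from the slides):

```python
# Normalization: compute P(x|y) ∝ P(y|x) P(x) without evaluating P(y) directly.
def normalize_posterior(likelihood, prior):
    """likelihood: dict x -> P(y|x); prior: dict x -> P(x)."""
    aux = {x: likelihood[x] * prior[x] for x in prior}  # aux_{x|y} = P(y|x) P(x)
    eta = 1.0 / sum(aux.values())                       # η = 1 / Σ_x aux_{x|y}
    return {x: eta * aux[x] for x in aux}               # P(x|y) = η aux_{x|y}

# Hypothetical two-state example; the posterior sums to 1 by construction.
post = normalize_posterior({"a": 0.6, "b": 0.3}, {"a": 0.5, "b": 0.5})
print(round(post["a"], 3))  # 0.667
```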

Bayes' Rule
- Product rule: P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a)
- Bayes' rule: P(a | b) = P(b | a) P(a) / P(b)
- or in distribution form: P(Y | X) = P(X | Y) P(Y) / P(X)
- Useful for assessing diagnostic probability from causal probability:
    P(Cause | Effect) = P(Effect | Cause) P(Cause) / P(Effect)
  (Note: in the meningitis example that follows, the posterior probability of meningitis is still very small!)

Bayes' Rule
- Bayes' rule: P(Y | X) = P(X | Y) P(Y) / P(X)
- More general version, conditionalized on some evidence e:
    P(Y | X, e) = P(X | Y, e) P(Y | e) / P(X | e)
- E.g., let M be meningitis, S be stiff neck:
    P(m | s) = P(s | m) P(m) / P(s) = 0.8 × 0.0001 / 0.1 = 0.0008
- Normalization (P(s) is the same for m and ¬m): P(Y | X) = α P(X | Y) P(Y)

Bayes Rule with Background Knowledge
    P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

Conditional Independence
    P(x, y | z) = P(x | z) P(y | z)
- Equivalent to P(x | z) = P(x | y, z) and P(y | z) = P(y | x, z).
- But this does not necessarily imply P(x, y) = P(x) P(y) (independence / marginal independence).

Simple Example of State Estimation
- Suppose a robot obtains measurement z.
- What is P(open | z)?

Causal vs. Diagnostic Reasoning
- P(open | z) is diagnostic.
- P(z | open) is causal.
- Often causal knowledge is easier to obtain (count frequencies!).
- Bayes rule allows us to use causal knowledge:
    P(open | z) = P(z | open) P(open) / P(z)

Example
- P(z | open) = 0.6, P(z | ¬open) = 0.3
- P(open) = P(¬open) = 0.5
    P(open | z) = P(z | open) P(open) / [P(z | open) P(open) + P(z | ¬open) P(¬open)]
                = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 0.3 / 0.45 = 2/3 ≈ 0.67
- z raises the probability that the door is open.
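The door computation maps directly onto Bayes rule with the evidence expanded by total probability. A sketch in Python (not in the original slides), using the slide's numbers:

```python
# Diagnostic update for the door example:
# P(open|z) = P(z|open)P(open) / [P(z|open)P(open) + P(z|not open)P(not open)]
p_z_given_open, p_z_given_not_open = 0.6, 0.3
p_open = p_not_open = 0.5

numer = p_z_given_open * p_open
evidence = numer + p_z_given_not_open * p_not_open  # P(z) by total probability
p_open_given_z = numer / evidence

print(round(p_open_given_z, 2))  # 0.67 -- the measurement raises P(open)
```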

Combining Evidence
- Suppose our robot obtains another observation z2.
- How can we integrate this new information?
- More generally, how can we estimate P(x | z1, ..., zn)?

Example: Second Measurement
- P(z2 | open) = 0.5, P(z2 | ¬open) = 0.6
- P(open | z1) = 2/3
    P(open | z2, z1) = P(z2 | open) P(open | z1) / [P(z2 | open) P(open | z1) + P(z2 | ¬open) P(¬open | z1)]
                     = (1/2 · 2/3) / (1/2 · 2/3 + 3/5 · 1/3)
                     = 5/8 = 0.625
- z2 lowers the probability that the door is open.

Recursive Bayesian Updating
    P(x | z1, ..., zn) = P(zn | x, z1, ..., zn−1) P(x | z1, ..., zn−1) / P(zn | z1, ..., zn−1)
Markov assumption: zn is independent of z1, ..., zn−1 if we know x.
    P(x | z1, ..., zn) = P(zn | x) P(x | z1, ..., zn−1) / P(zn | z1, ..., zn−1)
                       = η P(zn | x) P(x | z1, ..., zn−1)
                       = η_{1...n} [ ∏_{i=1}^{n} P(zi | x) ] P(x)
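Under the Markov assumption, each new measurement folds into the previous posterior with one multiply-and-normalize step. A sketch (not in the original slides) that reproduces the two door measurements from the preceding examples:

```python
def update(belief, likelihood):
    """One recursive Bayes step: P(x|z_1..n) = eta * P(z_n|x) * P(x|z_1..n-1)."""
    unnorm = {x: likelihood[x] * belief[x] for x in belief}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

belief = {"open": 0.5, "closed": 0.5}                  # prior
belief = update(belief, {"open": 0.6, "closed": 0.3})  # z1: P(open) -> 2/3
belief = update(belief, {"open": 0.5, "closed": 0.6})  # z2: P(open) -> 5/8
print(round(belief["open"], 3))  # 0.625
```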

Actions
- Often the world is dynamic, since
  - actions carried out by the robot,
  - actions carried out by other agents,
  - or just time passing by
  change the world.
- How can we incorporate such actions?

Typical Actions
- The robot turns its wheels to move.
- The robot uses its manipulator to grasp an object.
- Plants grow over time.
- Actions are never carried out with absolute certainty.
- In contrast to measurements, actions generally increase the uncertainty.

Modeling Actions
- To incorporate the outcome of an action u into the current belief, we use the conditional pdf P(x | u, x').
- This term specifies the pdf that executing u changes the state from x' to x.

Example: Closing the door

State Transitions
P(x | u, x') for u = "close door":

    open → closed:    0.9
    open → open:      0.1
    closed → closed:  1
    closed → open:    0

If the door is open, the action "close door" succeeds in 90% of all cases.

Integrating the Outcome of Actions
Continuous case:
    P(x | u) = ∫ P(x | u, x') P(x') dx'
Discrete case:
    P(x | u) = Σ_{x'} P(x | u, x') P(x')

Example: The Resulting Belief
    P(closed | u) = Σ_{x'} P(closed | u, x') P(x')
                  = P(closed | u, open) P(open) + P(closed | u, closed) P(closed)
                  = 9/10 · 5/8 + 1 · 3/8 = 15/16
    P(open | u)   = Σ_{x'} P(open | u, x') P(x')
                  = P(open | u, open) P(open) + P(open | u, closed) P(closed)
                  = 1/10 · 5/8 + 0 · 3/8 = 1/16
                  = 1 − P(closed | u)
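The prediction step above is a straight application of the discrete total-probability formula. A sketch (not in the original slides) using the "close door" transition model and the belief P(open) = 5/8 from the measurement example:

```python
# Prediction step for u = "close door", starting from the belief after
# the two measurements: P(open) = 5/8, P(closed) = 3/8.
transition = {  # P(x | u, x'): key = (resulting state x, previous state x')
    ("closed", "open"): 0.9, ("closed", "closed"): 1.0,
    ("open", "open"): 0.1, ("open", "closed"): 0.0,
}
prior = {"open": 5 / 8, "closed": 3 / 8}

# Discrete case: P(x|u) = sum over x' of P(x|u,x') P(x')
predicted = {x: sum(transition[(x, xp)] * prior[xp] for xp in prior)
             for x in ("open", "closed")}
print(round(predicted["closed"], 4))  # 0.9375 = 15/16
```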

Bayes Filters: Framework
- Given:
  - Stream of observations z and action data u: d_t = {u1, z1, ..., u_t, z_t}
  - Sensor model P(z | x)
  - Action model P(x | u, x')
  - Prior probability of the system state P(x)
- Wanted:
  - Estimate of the state X of a dynamical system.
  - The posterior of the state is also called the belief:
      Bel(x_t) = P(x_t | u1, z1, ..., u_t, z_t)

Markov Assumption
    p(z_t | x_{0:t}, z_{1:t−1}, u_{1:t}) = p(z_t | x_t)
    p(x_t | x_{1:t−1}, z_{1:t−1}, u_{1:t}) = p(x_t | x_{t−1}, u_t)
Underlying assumptions:
- Static world
- Independent noise
- Perfect model, no approximation errors

Bayes Filters
(z = observation, u = action, x = state)

    Bel(x_t) = P(x_t | u1, z1, ..., u_t, z_t)
    (Bayes)       = η P(z_t | x_t, u1, z1, ..., u_t) P(x_t | u1, z1, ..., u_t)
    (Markov)      = η P(z_t | x_t) P(x_t | u1, z1, ..., u_t)
    (Total prob.) = η P(z_t | x_t) ∫ P(x_t | u1, z1, ..., u_t, x_{t−1}) P(x_{t−1} | u1, z1, ..., u_t) dx_{t−1}
    (Markov)      = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) P(x_{t−1} | u1, z1, ..., u_t) dx_{t−1}
    (Markov)      = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) P(x_{t−1} | u1, z1, ..., z_{t−1}) dx_{t−1}
                  = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) Bel(x_{t−1}) dx_{t−1}

Bayes Filter Algorithm

    Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) Bel(x_{t−1}) dx_{t−1}

    1.  Algorithm Bayes_filter(Bel(x), d):
    2.    η = 0
    3.    If d is a perceptual data item z then
    4.      For all x do
    5.        Bel'(x) = P(z | x) Bel(x)
    6.        η = η + Bel'(x)
    7.      For all x do
    8.        Bel'(x) = η⁻¹ Bel'(x)
    9.    Else if d is an action data item u then
    10.     For all x do
    11.       Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
    12.   Return Bel'(x)
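For a finite state space the algorithm above reduces to a few lines. A minimal discrete sketch (not in the original slides) that runs the full door sequence z1, z2, u = "close door" and reproduces the beliefs derived earlier:

```python
# A minimal discrete Bayes filter for the door example, following the
# slide's algorithm: d is ("z", sensor_model) or ("u", transition_model).
def bayes_filter(bel, d):
    kind, model = d
    if kind == "z":  # perceptual item: Bel'(x) = eta * P(z|x) * Bel(x)
        unnorm = {x: model[x] * bel[x] for x in bel}
        eta = 1.0 / sum(unnorm.values())
        return {x: eta * p for x, p in unnorm.items()}
    # action item: Bel'(x) = sum over x' of P(x|u,x') * Bel(x')
    return {x: sum(model[(x, xp)] * bel[xp] for xp in bel) for x in bel}

sensor1 = {"open": 0.6, "closed": 0.3}
sensor2 = {"open": 0.5, "closed": 0.6}
close_door = {("open", "open"): 0.1, ("open", "closed"): 0.0,
              ("closed", "open"): 0.9, ("closed", "closed"): 1.0}

bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, ("z", sensor1))     # P(open) -> 2/3
bel = bayes_filter(bel, ("z", sensor2))     # P(open) -> 5/8
bel = bayes_filter(bel, ("u", close_door))  # P(open) -> 1/16
print(round(bel["open"], 4))  # 0.0625
```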

Bayes Filters are Familiar
    Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) Bel(x_{t−1}) dx_{t−1}
- Kalman filters
- Particle filters
- Hidden Markov models
- Dynamic Bayesian networks
- Partially Observable Markov Decision Processes (POMDPs)

Summary
- Bayes rule allows us to compute probabilities that are hard to assess otherwise.
- Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
- Bayes filters are a probabilistic tool for estimating the state of dynamic systems.

Belief Representation
- (a) Continuous map with single hypothesis
- (b) Continuous map with multiple hypotheses
- (c) Discretized map with probability distribution
- (d) Discretized topological map with probability distribution
(R. Siegwart, I. Nourbakhsh)

Belief Representation: Characteristics
- Continuous
  - Precision bound by sensor data
  - Typically single-hypothesis pose estimate
  - Lost when diverging (for single hypothesis)
  - Compact representation and typically reasonable in processing power
- Discrete
  - Precision bound by resolution of the discretization
  - Typically multiple-hypothesis pose estimate
  - Never lost (when it diverges, it converges to another cell)
  - Significant memory and processing power needed (not the case for topological maps)
(R. Siegwart, I. Nourbakhsh)

Bayesian Approach: A Taxonomy of Models
[Diagram, from more general to more specific: Bayesian Programs ⊃ Bayesian Networks ⊃ DBNs ⊃ Bayesian Filters (S: state, O: observation, A: action; S_t depends on S_{t−1}, with O_t and A_t attached at the filter level) ⊃ Markov Chains; Markov Localization; MDPs; Particle Filters; discrete, semi-continuous, and continuous HMMs; MCML; POMDPs; Kalman Filters]
(Courtesy of Julien Diard; R. Siegwart, I. Nourbakhsh)