Probabilistic Robotics


Probabilistic Robotics. Overview of probability, representing uncertainty, propagation of uncertainty, Bayes rule; application to localization and mapping. Slides from Autonomous Robots (Siegwart and Nourbakhsh), Chapter 5, and Probabilistic Robotics (S. Thrun et al.).

Probabilistic Robotics. Key idea: explicit representation of uncertainty using the calculus of probability theory. Perception = state estimation; action = utility optimization.

Uncertainty. Let action A_t = leave for airport t minutes before the flight. Will A_t get me there on time? Problems: 1. partial observability (road state, other drivers' plans, etc.); 2. noisy sensors (traffic reports); 3. uncertainty in action outcomes (flat tire, etc.); 4. immense complexity of modeling and predicting traffic. Hence a purely logical approach either 1. risks falsehood: "A_25 will get me there on time", or 2. leads to conclusions that are too weak for decision making: "A_25 will get me there on time if there's no accident on the bridge, and it doesn't rain, and my tires remain intact, etc." (A_1440 might reasonably be said to get me there on time, but I'd have to stay overnight in the airport.)

Methods for handling uncertainty. Rules with fudge factors: A_25 → get there on time (0.3); Sprinkler → WetGrass (0.99); WetGrass → Rain (0.7). Issues: problems with combination, e.g., does Sprinkler cause Rain? Probability: model the agent's degree of belief. Given the available evidence, A_25 will get me there on time with probability 0.04.

Axioms of Probability Theory. Pr(A) denotes the probability that proposition A is true. 1. 0 ≤ Pr(A) ≤ 1. 2. Pr(True) = 1 and Pr(False) = 0. 3. Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B).

A Closer Look at Axiom 3

Using the Axioms

Syntax. Basic element: random variable. Similar to propositional logic: possible worlds are defined by assignments of values to random variables. Boolean random variables, e.g., Cavity (do I have a cavity?). Discrete random variables, e.g., Weather is one of <sunny, rainy, cloudy, snow>. Domain values must be exhaustive and mutually exclusive. Elementary propositions are constructed by assigning a value to a random variable, e.g., Weather = sunny, Cavity = false (abbreviated as ¬cavity). Complex propositions are formed from elementary propositions and standard logical connectives, e.g., Weather = sunny ∨ Cavity = false.

Syntax. Atomic event: a complete specification of the state of the world about which the agent is uncertain. E.g., if the world consists of only two Boolean variables Cavity and Toothache, then there are 4 distinct atomic events: Cavity = false ∧ Toothache = false; Cavity = false ∧ Toothache = true; Cavity = true ∧ Toothache = false; Cavity = true ∧ Toothache = true. Atomic events are mutually exclusive and exhaustive.

Prior probability. Prior or unconditional probabilities of propositions, e.g., P(Cavity = true) = 0.1 and P(Weather = sunny) = 0.72, correspond to belief prior to the arrival of any (new) evidence. A probability distribution gives values for all possible assignments: P(Weather) = <0.72, 0.1, 0.08, 0.1> (normalized, i.e., sums to 1). The joint probability distribution for a set of random variables gives the probability of every atomic event on those random variables. P(Weather, Cavity) is a 4 × 2 table of values:

                  Weather = sunny   rainy   cloudy   snow
  Cavity = true        0.144        0.02    0.016    0.02
  Cavity = false       0.576        0.08    0.064    0.08

Joint Distribution.

                  Weather = sunny   rainy   cloudy   snow
  Cavity = true        0.144        0.02    0.016    0.02
  Cavity = false       0.576        0.08    0.064    0.08

Every question about the domain can be answered from the joint probability distribution.

Joint distribution Example of joint probability distribution:

Conditional probability. Definition of conditional probability: P(a | b) = P(a ∧ b) / P(b). The product rule gives an alternative formulation: P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a). A general version holds for whole distributions, e.g., P(Weather, Cavity) = P(Weather | Cavity) P(Cavity). (View this as a set of 4 × 2 equations, not matrix multiplication.) The chain rule is derived by successive application of the product rule: P(X_1, …, X_n) = P(X_1, …, X_{n−1}) P(X_n | X_1, …, X_{n−1}) = P(X_1, …, X_{n−2}) P(X_{n−1} | X_1, …, X_{n−2}) P(X_n | X_1, …, X_{n−1}) = … = ∏_{i=1}^{n} P(X_i | X_1, …, X_{i−1})
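As a sketch, the chain rule can be checked numerically on an arbitrary (hypothetical) joint distribution over three Boolean variables:

```python
import itertools
import random

random.seed(0)

# Build an arbitrary joint distribution P(X1, X2, X3) and normalize it.
events = list(itertools.product([False, True], repeat=3))
weights = [random.random() for _ in events]
total = sum(weights)
joint = {e: w / total for e, w in zip(events, weights)}

def marginal(prefix):
    """P(X1=prefix[0], ..., Xk=prefix[k-1]), marginalizing the rest."""
    k = len(prefix)
    return sum(p for e, p in joint.items() if e[:k] == tuple(prefix))

# Chain rule: P(x1, x2, x3) = P(x1) * P(x2 | x1) * P(x3 | x1, x2),
# where each conditional is a ratio of marginals (product rule).
for e in events:
    chain = (marginal(e[:1])
             * marginal(e[:2]) / marginal(e[:1])
             * marginal(e[:3]) / marginal(e[:2]))
    assert abs(chain - joint[e]) < 1e-12
```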

Inference by enumeration. Start with the joint probability distribution. For any proposition φ, sum the atomic events where it is true: P(φ) = Σ_{ω : ω ⊨ φ} P(ω)

Inference by enumeration. Start with the joint probability distribution. For any proposition φ, sum the atomic events where it is true: P(φ) = Σ_{ω : ω ⊨ φ} P(ω). E.g., P(toothache) = 0.108 + 0.012 + 0.016 + 0.064 = 0.2


Inference by enumeration. Start with the joint probability distribution. Can also compute conditional probabilities: P(¬cavity | toothache) = P(¬cavity ∧ toothache) / P(toothache) = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064) = 0.4
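A minimal sketch of inference by enumeration over the full toothache/catch/cavity joint. The four no-toothache entries (0.072, 0.008, 0.144, 0.576) complete the standard textbook table and are assumed here:

```python
joint = {
    # (toothache, catch, cavity): probability
    (True,  True,  True):  0.108, (True,  False, True):  0.012,
    (True,  True,  False): 0.016, (True,  False, False): 0.064,
    (False, True,  True):  0.072, (False, False, True):  0.008,
    (False, True,  False): 0.144, (False, False, False): 0.576,
}

def prob(predicate):
    """P(phi): sum the atomic events where the proposition holds."""
    return sum(p for event, p in joint.items() if predicate(*event))

p_toothache = prob(lambda toothache, catch, cavity: toothache)      # 0.2
p_not_cavity_given_toothache = (
    prob(lambda toothache, catch, cavity: toothache and not cavity)
    / p_toothache)                                                  # 0.4
```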

Discrete Random Variables. X denotes a random variable. X can take on a countable number of values in {x_1, x_2, …, x_n}. P(X = x_i), or P(x_i) for short, is the probability that the random variable X takes on value x_i. P(·) is called the probability mass function. E.g., P(office), P(kitchen), P(bedroom), P(corridor) are just shorthand for P(Room = office), P(Room = kitchen), P(Room = bedroom), P(Room = corridor).

Continuous Random Variables. X takes on values in the continuum. p(X = x), or p(x), is a probability density function: P(x ∈ (a, b)) = ∫_a^b p(x) dx.

Joint and Conditional Probability. P(X = x and Y = y) = P(x, y). If X and Y are independent, then P(x, y) = P(x) P(y). P(x | y) is the probability of x given y: P(x | y) = P(x, y) / P(y), equivalently P(x, y) = P(x | y) P(y). If X and Y are independent, then P(x | y) = P(x) (verify using the definitions of conditional probability and independence).

Law of Total Probability, Marginals. Discrete case: Σ_x P(x) = 1; marginalization: P(x) = Σ_y P(x, y); law of total probability: P(x) = Σ_y P(x | y) P(y). Continuous case: ∫ p(x) dx = 1; p(x) = ∫ p(x, y) dy; p(x) = ∫ p(x | y) p(y) dy.
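A quick check of marginalization on the P(Weather, Cavity) table from the earlier slide:

```python
# P(Weather = w) = sum over values of Cavity of P(w, cavity)
joint = {
    ("sunny", True): 0.144, ("rainy", True): 0.02,
    ("cloudy", True): 0.016, ("snow", True): 0.02,
    ("sunny", False): 0.576, ("rainy", False): 0.08,
    ("cloudy", False): 0.064, ("snow", False): 0.08,
}

def p_weather(w):
    # Marginalize Cavity out of the joint distribution.
    return sum(p for (weather, cavity), p in joint.items() if weather == w)

marginals = {w: p_weather(w) for w in ("sunny", "rainy", "cloudy", "snow")}
# Recovers P(Weather) = <0.72, 0.1, 0.08, 0.1>
```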


Bayes Formula. P(x | y) = P(y | x) P(x) / P(y) = (likelihood × prior) / evidence.

Normalization. P(x | y) = η P(y | x) P(x), where the normalizer η = P(y)^{−1} = (Σ_x P(y | x) P(x))^{−1}. Algorithm: compute the unnormalized products P(y | x) P(x) for all x, sum them to obtain η^{−1}, then divide each product by that sum.

Bayes Rule with Background Knowledge. P(x | y, z) = P(y | x, z) P(x | z) / P(y | z).

Conditional Independence. P(x, y | z) = P(x | z) P(y | z). Equivalent to P(x | z) = P(x | y, z) and P(y | z) = P(y | x, z). But this does not necessarily mean P(x, y) = P(x) P(y) (independence / marginal independence).

Simple Example of State Estimation. Suppose a robot obtains a measurement z. What is P(open | z)?

Causal vs. Diagnostic Reasoning. P(open | z) is diagnostic; P(z | open) is causal. Often causal knowledge is easier to obtain. Bayes rule allows us to use causal knowledge: P(open | z) = P(z | open) P(open) / P(z). (Causal terms like P(z | open) can be obtained simply by counting frequencies.)

Example. P(z | open) = 0.6, P(z | ¬open) = 0.3, P(open) = P(¬open) = 0.5. Then P(open | z) = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3. The measurement z raises the probability that the door is open.
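The door update, as a minimal sketch using the numbers above:

```python
# Causal sensor model from the slide: P(z | open), P(z | not open), prior.
p_z_given_open = 0.6
p_z_given_not_open = 0.3
p_open = 0.5

# Total probability for the evidence: P(z) = sum over x of P(z | x) P(x)
p_z = p_z_given_open * p_open + p_z_given_not_open * (1 - p_open)

# Diagnostic query via Bayes rule: P(open | z) = P(z | open) P(open) / P(z)
p_open_given_z = p_z_given_open * p_open / p_z   # 2/3: z raises P(open)
```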

Combining Evidence. Suppose our robot obtains another observation z_2. How can we integrate this new information? More generally, how can we estimate P(x | z_1, …, z_n)?

Recursive Bayesian Updating. Markov assumption: z_n is independent of z_1, …, z_{n−1} if we know x. Then P(x | z_1, …, z_n) = η P(z_n | x) P(x | z_1, …, z_{n−1}), so the belief can be updated recursively, one measurement at a time.

Example: Second Measurement. P(z_2 | open) = 0.5, P(z_2 | ¬open) = 0.6, P(open | z_1) = 2/3. Then P(open | z_1, z_2) = (0.5 · 2/3) / (0.5 · 2/3 + 0.6 · 1/3) = 5/8 = 0.625. The measurement z_2 lowers the probability that the door is open.
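The recursive update for z_2, sketched in Python, taking P(open | z_1) = 2/3 from the first measurement as the prior:

```python
# Second measurement: sensor model from the slide, recursive prior.
p_open_z1 = 2 / 3
p_z2_given_open = 0.5
p_z2_given_not_open = 0.6

num = p_z2_given_open * p_open_z1
p_open_z1_z2 = num / (num + p_z2_given_not_open * (1 - p_open_z1))
# 0.625: z2 lowers the belief that the door is open (down from 2/3)
```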

Actions. Often the world is dynamic, since actions carried out by the robot, actions carried out by other agents, or simply the passing of time change the world. How can we incorporate such actions?

Typical Actions. The robot turns its wheels to move; the robot uses its manipulator to grasp an object; plants grow over time. Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty.

Modeling Actions. To incorporate the outcome of an action u into the current belief, we use the conditional pdf P(x | u, x'). This term specifies the probability that executing u changes the state from x' to x.

Example: Closing the door

State Transitions. P(x | u, x') for u = "close door": if the door is open, the action "close door" succeeds in 90% of all cases.

Integrating the Outcome of Actions. Continuous case: P(x | u) = ∫ P(x | u, x') P(x') dx'. Discrete case: P(x | u) = Σ_{x'} P(x | u, x') P(x').
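A sketch of the discrete action update for u = "close door". The prior belief P(open) = 5/8 is an assumed value for illustration; the transition model follows the state-transition slide (closing an open door succeeds with probability 0.9, and a closed door is assumed to stay closed):

```python
# Assumed prior belief before the action, for illustration only.
p_open_prior = 5 / 8

# Discrete case: P(x | u) = sum over x' of P(x | u, x') * P(x')
p_closed = 0.9 * p_open_prior + 1.0 * (1 - p_open_prior)   # 15/16
p_open = 0.1 * p_open_prior + 0.0 * (1 - p_open_prior)     # 1/16
```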

Example: The Resulting Belief

Bayes Filters: Framework. Given: a stream of observations z and action data u, d_t = {u_1, z_1, …, u_t, z_t}; a sensor model P(z | x); an action model P(x | u, x'); and a prior probability of the system state P(x). Wanted: an estimate of the state X of the dynamical system. The posterior of the state is also called the belief: Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t).

Markov Assumption. Underlying assumptions: a static world; independent noise; a perfect model, with no approximation errors.

Bayes Filters (z = observation, u = action, x = state):
Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)
 = η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)   (Bayes)
 = η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)   (Markov)
 = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, u_t) dx_{t−1}   (total probability)
 = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, u_t) dx_{t−1}   (Markov)
 = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) P(x_{t−1} | u_1, z_1, …, z_{t−1}) dx_{t−1}   (Markov)
 = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t−1}) Bel(x_{t−1}) dx_{t−1}

Bayes Filter Algorithm
1. Algorithm Bayes_filter(Bel(x), d):
2.   η = 0
3.   If d is a perceptual data item z then
4.     For all x do
5.       Bel'(x) = P(z | x) Bel(x)
6.       η = η + Bel'(x)
7.     For all x do
8.       Bel'(x) = η^{−1} Bel'(x)
9.   Else if d is an action data item u then
10.    For all x do
11.      Bel'(x) = Σ_{x'} P(x | u, x') Bel(x')
12.  Return Bel'(x)
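A minimal Python sketch of this discrete Bayes filter for a two-state door world. The state names, function names, and model numbers are illustrative, reusing the running example's values:

```python
def sensor_model(z, x):
    # P(z | x) for the single observation z = "sense_open" (illustrative)
    return {"open": 0.6, "closed": 0.3}[x]

def action_model(x, u, x_prev):
    # P(x | u, x') for the single action u = "close_door" (illustrative)
    table = {("open", "open"): 0.1, ("closed", "open"): 0.9,
             ("open", "closed"): 0.0, ("closed", "closed"): 1.0}
    return table[(x, x_prev)]

def bayes_filter(bel, d, kind):
    """One step of the discrete Bayes filter."""
    new_bel = {}
    if kind == "perception":                  # d is an observation z
        for x in bel:
            new_bel[x] = sensor_model(d, x) * bel[x]
        eta = sum(new_bel.values())           # normalizer
        for x in new_bel:
            new_bel[x] /= eta
    else:                                     # d is an action u
        for x in bel:
            new_bel[x] = sum(action_model(x, d, xp) * bel[xp] for xp in bel)
    return new_bel

bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, "sense_open", "perception")   # open -> 2/3
bel = bayes_filter(bel, "close_door", "action")       # open -> 1/15
```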

Bayes Filters are Familiar! Kalman filters, particle filters, hidden Markov models, dynamic Bayesian networks, and partially observable Markov decision processes (POMDPs) are all instances of Bayes filters.

Summary Bayes rule allows us to compute probabilities that are hard to assess otherwise. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence. Bayes filters are a probabilistic tool for estimating the state of dynamic systems.