
Markov Kalman Filter Localization

Markov localization: localization starting from any unknown position; the robot can recover from ambiguous situations. However, updating the probability of all positions within the whole state space at any time requires a discrete representation of the space (a grid). The required memory and computational power can thus become very large if a fine grid is used.

Kalman filter localization tracks the robot and is inherently very precise and efficient. However, if the uncertainty of the robot becomes too large (e.g. after a collision with an object), the Kalman filter fails and the position is definitively lost.

Markov Localization (1)

Markov localization uses an explicit, discrete representation for the probability of all positions in the state space. This is usually done by representing the environment by a grid or a topological graph with a finite number of possible states (positions). During each update, the probability for each state (element) of the entire space is updated.
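As a minimal sketch of this representation (illustrative, not from the slides; the grid size is an assumed value), the belief can be stored as one normalized probability per grid cell:

    # Hypothetical 1-D corridor discretized into 10 grid cells.
    n_cells = 10

    # Initial belief: the robot may be anywhere, so the prior is uniform.
    belief = [1.0 / n_cells] * n_cells

    # Every Markov localization update rewrites this entire list, which is
    # why memory and computation grow quickly with grid resolution.
    assert abs(sum(belief) - 1.0) < 1e-9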

Markov Localization (2): Applying probability theory to robot localization

P(A): probability that A is true, e.g. p(r_t = l): probability that the robot r is at position l at time t (prior). We wish to compute the probability of each individual robot position given actions and sensor measurements.

P(A | B): conditional probability of A given that we know B, e.g. p(r_t = l | i_t): probability that the robot is at position l given the sensor input i_t.

Product rule: P(A ∧ B) = P(A | B) P(B) = P(B | A) P(A)

Bayes rule: P(A | B) = P(B | A) P(A) / P(B)

Bayes Rule Example

Suppose a robot obtains a measurement z. What is P(open | z)?

Causal vs. Diagnostic Reasoning

P(open | z) is diagnostic; P(z | open) is causal. Often causal knowledge is easier to obtain (count frequencies!). Bayes rule allows us to use causal knowledge:

P(open | z) = P(z | open) P(open) / P(z)

Bayes Formula Again

P(x | z) = P(z | x) P(x) / P(z) = η P(z | x) P(x), with η = 1/P(z) = 1 / Σ_x P(z | x) P(x)

Example

Consider a discrete random variable X with two outcomes characterizing whether the door is open or not. Our initial belief (or prior probability) is bel(X = open) = 0.5 and bel(X = closed) = 0.5.

Now suppose that we have noisy sensors trying to make measurements of the door. The characteristics of the sensor, obtained in a training/learning stage, are as follows; the sensor can have two outcomes, each with the following conditional probability:

P(z = sense_open | X = open) = 0.6
P(z = sense_closed | X = open) = 0.4
P(z = sense_open | X = closed) = 0.3
P(z = sense_closed | X = closed) = 0.7

This suggests that detecting a closed door is relatively reliable (error probability only 0.3), while detecting an open door is less reliable.

Example (continued)

Suppose you now get a measurement z = sense_open and want to compute the probability that the door is open. To shorten the notation:

P(z | open) = 0.6, P(z | ¬open) = 0.3, P(open) = P(¬open) = 0.5

z raises the probability that the door is open.
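Carrying out the Bayes update with these numbers (a worked step; the result matches the value P(open | z_1) = 2/3 quoted below):

P(open | z) = P(z | open) P(open) / [P(z | open) P(open) + P(z | ¬open) P(¬open)]
            = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 0.30 / 0.45 = 2/3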

Combining Evidence

Suppose our robot obtains another observation z_2 from a different sensing modality, with slightly different conditional probabilities:

P(z_2 | open) = 0.5, P(z_2 | ¬open) = 0.6

In more detail:

P(z_2 = sense_open | X = open) = 0.5, P(z_2 = sense_open | X = closed) = 0.6
P(z_2 = sense_closed | X = open) = 0.5, P(z_2 = sense_closed | X = closed) = 0.4

Given these conditional probabilities and knowing the prior, you can compute the joint distribution of z_2 and X and convince yourself that all entries sum up to 1.

How can we integrate this new information? More generally, how can we estimate P(x | z_1, …, z_n)?

Recursive Bayesian Updating

Markov assumption: z_n is independent of z_1, …, z_{n-1} if we know x. Then

P(x | z_1, …, z_n) = η P(z_n | x) P(x | z_1, …, z_{n-1})

Example: Second Measurement

P(z_2 | open) = 0.5, P(z_2 | ¬open) = 0.6, P(open | z_1) = 2/3

z_2 lowers the probability that the door is open.
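The same arithmetic as a minimal code sketch (illustrative; the helper name bayes_update is an assumption). The second call feeds the first posterior back in as the prior, which is exactly the recursive updating scheme above:

    def bayes_update(prior_open, p_z_open, p_z_not_open):
        """Posterior P(open | z) via Bayes rule with explicit normalization."""
        num = p_z_open * prior_open
        return num / (num + p_z_not_open * (1.0 - prior_open))

    p = bayes_update(0.5, 0.6, 0.3)  # first measurement z1:  p = 2/3
    p = bayes_update(p, 0.5, 0.6)    # second measurement z2: p = 5/8 = 0.625
    print(p)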

Actions

Often the world is dynamic: actions carried out by the robot, actions carried out by other agents, or simply the passing of time change the world. How can we incorporate such actions? How do we model them explicitly?

Typical Actions

The robot turns its wheels to move. The robot uses its manipulator to grasp an object. Plants grow over time. Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty.

Modeling Actions

To incorporate the outcome of an action u into the current belief, we use the conditional pdf P(x | u, x'). This term specifies the probability that executing u changes the state from x' to x.

State Transitions

P(x | u, x') for u = "close door": if the door is open, the action "close door" succeeds in 90% of all cases; if the door is already closed, it remains closed.

Integrating the Outcome of Actions

Given an action u, what happens to the belief about the state?

Continuous case: Bel'(x) = ∫ P(x | u, x') Bel(x') dx'

Discrete case: Bel'(x) = Σ_{x'} P(x | u, x') Bel(x')
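A sketch of the discrete prediction step applied to the door example (illustrative; it assumes, per the transition model above, that "close door" closes an open door with probability 0.9 and leaves a closed door closed, and it starts from the belief P(open) = 5/8 obtained after the two measurements):

    # P(x_new | u = close_door, x_old)
    transition = {('closed', 'open'): 0.9, ('open', 'open'): 0.1,
                  ('closed', 'closed'): 1.0, ('open', 'closed'): 0.0}

    belief = {'open': 5/8, 'closed': 3/8}  # after measurements z1, z2

    # Discrete case: Bel'(x) = sum over x_old of P(x | u, x_old) * Bel(x_old)
    predicted = {x: sum(transition[(x, xo)] * belief[xo] for xo in belief)
                 for x in belief}
    print(predicted)  # {'open': 0.0625, 'closed': 0.9375}, i.e. 1/16 and 15/16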

Example: The Resulting Belief

After executing u = "close door", the belief mass shifts almost entirely to the closed state, as computed in the prediction sketch above: Bel'(open) = 1/16, Bel'(closed) = 15/16.

General Bayes Filters: Framework

Putting actions and observations together.

Given:
- a stream of observations z and action data u: d_t = {u_1, z_1, …, u_t, z_t}
- the sensor model P(z | x) (actual models discussed later)
- the action model P(x | u, x') (actual models discussed later)
- the prior probability of the system state, P(x)

Wanted: an estimate of the state x of the dynamical system. The posterior of the state is also called the belief:

Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)

Markov Assumption

Underlying assumptions: a static world, independent noise, and a perfect model with no approximation errors.

Bayes Filters

With z = observation, u = action, x = state, the belief can be expanded step by step:

Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)
  (Bayes)       = η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)
  (Markov)      = η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)
  (Total prob.) = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, u_t) dx_{t-1}
  (Markov)      = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, u_t) dx_{t-1}
  (Markov)      = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

Bayes Filter Algorithm

1. Algorithm Bayes_filter(Bel(x), d):
2.   η = 0
3.   If d is a perceptual data item z then
4.     For all x do
5.       Bel'(x) = P(z | x) Bel(x)
6.       η = η + Bel'(x)
7.     For all x do
8.       Bel'(x) = η^{-1} Bel'(x)
9.   Else if d is an action data item u then
10.    For all x do
11.      Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
12. Return Bel'(x)

Bayes Filters are Familiar!

Kalman filters, particle filters, hidden Markov models, dynamic Bayesian networks, and partially observable Markov decision processes (POMDPs) are all instances of the Bayes filter.
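Putting the pieces together, here is a runnable sketch of the discrete version of this algorithm (illustrative; the function and variable names are assumptions, and the numbers reuse the door example from earlier):

    def bayes_filter(belief, d, kind, sensor_model=None, action_model=None):
        """One step of the discrete Bayes filter over a dict belief."""
        states = list(belief)
        new_belief = {}
        if kind == 'z':  # perceptual data item: correction (algorithm lines 3-8)
            for x in states:
                new_belief[x] = sensor_model[(d, x)] * belief[x]
            eta = sum(new_belief.values())
            for x in states:
                new_belief[x] /= eta
        else:            # action data item: prediction (algorithm lines 9-11)
            for x in states:
                new_belief[x] = sum(action_model[(x, d, xo)] * belief[xo]
                                    for xo in states)
        return new_belief

    # Door example: two measurements, then the "close door" action.
    sensor1 = {('sense_open', 'open'): 0.6, ('sense_open', 'closed'): 0.3,
               ('sense_closed', 'open'): 0.4, ('sense_closed', 'closed'): 0.7}
    sensor2 = {('sense_open', 'open'): 0.5, ('sense_open', 'closed'): 0.6,
               ('sense_closed', 'open'): 0.5, ('sense_closed', 'closed'): 0.4}
    action = {('closed', 'close_door', 'open'): 0.9,
              ('open', 'close_door', 'open'): 0.1,
              ('closed', 'close_door', 'closed'): 1.0,
              ('open', 'close_door', 'closed'): 0.0}

    bel = {'open': 0.5, 'closed': 0.5}
    bel = bayes_filter(bel, 'sense_open', 'z', sensor_model=sensor1)  # open: 2/3
    bel = bayes_filter(bel, 'sense_open', 'z', sensor_model=sensor2)  # open: 5/8
    bel = bayes_filter(bel, 'close_door', 'u', action_model=action)   # open: 1/16
    print(bel)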

Summary

Bayes rule allows us to compute probabilities that are hard to assess otherwise. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence. Bayes filters are a probabilistic tool for estimating the state of dynamic systems.