Probabilistic Robotics


1 University of Rome "La Sapienza", Master in Artificial Intelligence and Robotics. Probabilistic Robotics: Dynamic Bayesian Networks. Prof. Giorgio Grisetti. Course web site:

2 Overview: Probabilistic Dynamic Systems; Dynamic Bayesian Networks (DBN); Inference on DBN; Recursive Bayes Equation.

3 Dynamic System, Deterministic View. The system evolves as x_t = f(x_{t-1}, u_{t-1}) and produces measurements z_t = h(x_t), where f(x_{t-1}, u_{t-1}): transition function; h(x_t): observation function; x_{t-1}: previous state; x_t: current state; u_{t-1}: previous control/action; z_t: current observation; Δt: delay between two time steps.
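To make the notation concrete, here is a minimal C sketch (in the same style as the discrete filter code on slide 30) of f and h for a hypothetical 1D system whose state is a position and whose control is a velocity; DT and both function bodies are illustrative assumptions, not part of the slides.

  /* Hypothetical 1D example: state = position, control = velocity. */
  #define DT 0.1f  /* assumed time step (the delay between two updates) */

  /* transition function: x_t = f(x_{t-1}, u_{t-1}) */
  float f(float x_prev, float u_prev) { return x_prev + u_prev * DT; }

  /* observation function: z_t = h(x_t); here the position is measured directly */
  float h(float x) { return x; }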

4 Dynamic System, Probabilistic View. The deterministic functions are replaced by conditional distributions: p(x_t | x_{t-1}, u_{t-1}): transition model; p(z_t | x_t): observation model; x_{t-1}: previous state; x_t: current state; u_{t-1}: previous control/action; z_t: current observation; Δt: delay between two time steps.
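A probabilistic counterpart of the sketch above could expose the two models as samplers; the following is an illustration assuming additive Gaussian noise (the noise magnitudes and the Box-Muller helper are assumptions, not from the slides).

  #include <math.h>
  #include <stdlib.h>

  /* Zero-mean Gaussian noise via the Box-Muller transform (illustrative helper). */
  float gauss(float sigma) {
    float u1 = (rand() + 1.0f) / ((float)RAND_MAX + 2.0f); /* in (0, 1] */
    float u2 = (rand() + 1.0f) / ((float)RAND_MAX + 2.0f);
    return sigma * sqrtf(-2.0f * logf(u1)) * cosf(6.28318530718f * u2); /* 2*pi */
  }

  /* Draw x_t ~ p(x_t | x_{t-1}, u_{t-1}): the deterministic f plus motion noise. */
  float sample_transition(float x_prev, float u_prev) {
    return x_prev + u_prev * 0.1f + gauss(0.05f);
  }

  /* Draw z_t ~ p(z_t | x_t): the deterministic h plus measurement noise. */
  float sample_observation(float x) {
    return x + gauss(0.1f);
  }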

5 Evolution of a Dynamic System. Let's start from a known initial state distribution p(x_0).

6 Evolution of a Dynamic System. A control u_0 becomes available.

7 Evolution of a Dynamic System. The transition model p(x_t | x_{t-1}, u_{t-1}) correlates the current state x_1 with the previous control u_0 and the previous state x_0.

8 Evolution of a Dynamic System. The observation model p(z_t | x_t) correlates the observation z_1 and the current state x_1.

9 Evolution of a Dynamic System. Repeating these steps yields the chain u_0, ..., u_{T-1}; x_0, ..., x_T; z_1, ..., z_T: a recurrent structure that unrolls over time.

10 Dynamic Bayesian Networks (DBN). Graphical representations of stochastic dynamic processes, characterized by a recurrent structure.

11 States in a DBN. The domains of the states x_t, the controls u_t and the observations z_t are not restricted to be boolean or discrete. Examples: Robot localization with a laser range finder: states x_t in SE(2), isometries on a plane; observations z_t in R^#beams, laser range measurements; controls u_t in R^2, translational and rotational speed. HMM: states x_t in [X_1, ..., X_Nx], finite states; observations z_t in [Z_1, ..., Z_Nz], finite observations; controls u_t in [U_1, ..., U_Nu], finite controls (a small encoding sketch follows). Inference in a DBN requires designing a data structure that can represent a distribution over states.
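For the HMM case above, the finite domains could be encoded simply as enumerations; a small illustrative sketch in C (all names assumed):

  /* Illustrative finite domains for an HMM-style DBN. */
  enum state       { X_RAIN, X_SUN, N_STATES };          /* x_t in [X_1, ..., X_Nx] */
  enum observation { Z_UMBRELLA, Z_NO_UMBRELLA, N_OBS }; /* z_t in [Z_1, ..., Z_Nz] */
  enum control     { U_NONE, N_CONTROLS };               /* u_t in [U_1, ..., U_Nu] */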

12 Typical Inferences in a DBN. In a dynamic system we usually¹ know: the observations made by the system, z_{1:T}, because we measure them; the controls u_{0:T-1}, because we issue them. Typical inferences in a DBN:

  name              query                                              known
  Filtering         p(x_T | u_{0:T-1}, z_{1:T})                        u_{0:T-1}, z_{1:T}
  Smoothing         p(x_k | u_{0:T-1}, z_{1:T}), 0 < k < T             u_{0:T-1}, z_{1:T}
  Max a Posteriori  argmax_{x_{0:T}} p(x_{0:T} | u_{0:T-1}, z_{1:T})   u_{0:T-1}, z_{1:T}

¹ usually does not mean always.

13 Typical Inferences in a DBN. Using the traditional tools for Bayes Networks is not a good idea: too many variables (potentially infinitely many) render the solution intractable, and the domains are not necessarily discrete. However, we can exploit the recurrent structure to design procedures that take advantage of it.

14 DBN Inference: Filtering. Given the sequence of all observations z_{1:T} up to the current time T and the sequence of all controls u_{0:T-1}, we want to compute the distribution over the current state, p(x_T | u_{0:T-1}, z_{1:T}).

15 DBN Inference: Smoothing. Given the sequence of all observations z_{1:T} up to the current time T and the sequence of all controls u_{0:T-1}, we want to compute the distribution over a past state, p(x_k | u_{0:T-1}, z_{1:T}). Knowing also the controls u_{k:T-1} and the observations z_{k+1:T} after time k leads to more accurate estimates than pure filtering.

16 DBN Inference: Maximum a Posteriori. Given the sequence of all observations z_{1:T} up to the current time T and the sequence of all controls u_{0:T-1}, we want to find the most likely trajectory of states x_{0:T}. In this case we are not seeking a distribution, just the most likely sequence.

17 DBN Inference: Belief. Algorithms for performing inference on a DBN keep track of the estimate of a distribution over states. This distribution should be stored in an appropriate data structure. The structure depends on our knowledge of: the characteristics of the distribution (e.g. Gaussian); the domain of the state variables (e.g. continuous vs discrete). When we write b(x_t) we mean our current belief of p(x_t | ...). The algorithms for performing inference on a DBN work by updating a belief.

18 DBN Inference: Belief. In the simple case of a system with a discrete state x in {X_{1:n}}, the belief can be represented through an array x of float values; each cell x[i] = p(x = X_i) contains the probability of that state. If our system has a continuous state and we know it is distributed according to a Gaussian, we can represent the belief through its parameters (mean and covariance matrix). If the state is continuous but the distribution is unknown, we can use some approximate representation (e.g. weighted samples of state values).
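In C, the three representations just mentioned might look as follows; the type and field names are illustrative assumptions.

  /* Discrete state: the belief is an array, p[i] = p(x = X_i). */
  typedef struct { int n_states; float* p; } discrete_belief;

  /* Continuous Gaussian state (here 2D): the belief is mean and covariance. */
  typedef struct { float mean[2]; float cov[2][2]; } gaussian_belief;

  /* Continuous state, unknown distribution: weighted samples of state values. */
  typedef struct { int n_samples; float* states; float* weights; } sample_belief;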

19 Filtering: Bayes Recursion. We want to compute p(x_T | u_{0:T-1}, z_{1:T}). We know: the observations z_{1:T}; the controls u_{0:T-1}; p(x_t | x_{t-1}, u_{t-1}), the transition model, a function that, given the previous state x_{t-1} and control u_{t-1}, tells us how likely it is to land in state x_t; p(z_t | x_t), the observation model, a function that, given the current state x_t, tells us how likely it is to observe z_t; and b(x_{t-1}), our previous belief about p(x_{t-1} | u_{0:t-2}, z_{1:t-1}).

20 Filtering (1). Splitting z_t out of the conditioning sequence, and labeling A = x_t, B = z_t, C = (u_{0:t-1}, z_{1:t-1}):

  p(x_t | u_{0:t-1}, z_{1:t})    (1)
    = p(x_t | z_t, u_{0:t-1}, z_{1:t-1})    (2)

Recall the conditional Bayes rule p(A | B, C) = p(B | A, C) p(A | C) / p(B | C), so

  p(x_t | u_{0:t-1}, z_{1:t}) = p(z_t | x_t, u_{0:t-1}, z_{1:t-1}) p(x_t | u_{0:t-1}, z_{1:t-1}) / p(z_t | u_{0:t-1}, z_{1:t-1}).    (3)

21 Filtering: Denominator. Let the denominator define

  η_t = 1 / p(z_t | u_{0:t-1}, z_{1:t-1}).    (4)

Note that η_t does not depend on x_t, so for the purposes of our computation it is just a normalizing constant. We will come back to the denominator later.

22 Filtering: Observation Model. Our filtering equation becomes

  η_t p(z_t | x_t, u_{0:t-1}, z_{1:t-1}) p(x_t | u_{0:t-1}, z_{1:t-1}).    (5)

If we know x_t, we do not need to know u_{0:t-1}, z_{1:t-1} to predict z_t, since the state x_t encodes all the knowledge about the past (Markov assumption):

  p(z_t | x_t, u_{0:t-1}, z_{1:t-1}) = p(z_t | x_t).    (6)

23 Filtering: Transition Model. Thus, our current equation is

  p(x_t | u_{0:t-1}, z_{1:t}) = η_t p(z_t | x_t) p(x_t | u_{0:t-1}, z_{1:t-1}).    (7)

The second factor is still obscure. Our task is to manipulate it into something that matches our preconditions.

24 Filtering: Transition Model. If we knew x_{t-1}, our life would be much easier, as we could repeat the trick used for the observation model:

  p(x_t | x_{t-1}, u_{0:t-1}, z_{1:t-1}) = p(x_t | x_{t-1}, u_{t-1}).    (8)

25 Filtering: Transition Model. The sad truth is that we do not have x_{t-1}. However, recall the following probability identities:

  p(A | C) = Σ_B p(A, B | C)    (9)
  p(A, B | C) = p(A | B, C) p(B | C)    (10)

Combining the two above, we have

  p(A | C) = Σ_B p(A | B, C) p(B | C).    (11)
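As a quick numeric check of Eq. 11 (made-up numbers): for a binary B with p(B=0 | C) = p(B=1 | C) = 0.5, p(A | B=0, C) = 0.2 and p(A | B=1, C) = 0.6, we get p(A | C) = 0.2 * 0.5 + 0.6 * 0.5 = 0.4.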

26 Filtering: Transition Model. The sad truth is that we do not have x_{t-1}; however, let's look again at our problematic equation and put in some letters (A = x_t, B = x_{t-1}, C = (u_{0:t-1}, z_{1:t-1})):

  p(x_t | u_{0:t-1}, z_{1:t-1})    (12)
    = Σ_{x_{t-1}} p(x_t | x_{t-1}, u_{0:t-1}, z_{1:t-1}) p(x_{t-1} | u_{0:t-1}, z_{1:t-1})    (13)

Putting in the result of Eq. 8, we highlight the transition model:

    = Σ_{x_{t-1}} p(x_t | x_{t-1}, u_{t-1}) p(x_{t-1} | u_{0:t-1}, z_{1:t-1}).    (14)

27 Filtering: Wrapup. After our efforts, we figure out that the filtering equation is the following:

  p(x_t | u_{0:t-1}, z_{1:t})    (15)
    = η_t p(z_t | x_t) Σ_{x_{t-1}} p(x_t | x_{t-1}, u_{t-1}) p(x_{t-1} | u_{0:t-1}, z_{1:t-1}).    (16)

Yet, if the last factor of the product inside the summation did not depend on u_{t-1}, we would have a recursive equation. Luckily, p(x_{t-1} | u_{0:t-1}, z_{1:t-1}) = p(x_{t-1} | u_{0:t-2}, z_{1:t-1}), since the last control has no influence on x_{t-1} if we do not know x_t.

28 Filtering: Wrapup. We can finally write the recursive equation of filtering as:

  b(x_t) = p(x_t | u_{0:t-1}, z_{1:t})    (17)
         = η_t p(z_t | x_t) Σ_{x_{t-1}} p(x_t | x_{t-1}, u_{t-1}) p(x_{t-1} | u_{0:t-2}, z_{1:t-1}),

where the last factor is b(x_{t-1}). During the estimation we do not have the true distributions, but rather belief estimates. Equation 18 tells us how to update the current belief once new observations/controls become available:

  b(x_t) = η_t p(z_t | x_t) Σ_{x_{t-1}} p(x_t | x_{t-1}, u_{t-1}) b(x_{t-1}).    (18)
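A tiny worked instance of Eq. 18 on a made-up two-state system (all numbers assumed): let b(x_{t-1}) = (0.5, 0.5), transition probabilities p(X_1 | X_1, u) = 0.9, p(X_1 | X_2, u) = 0.2, p(X_2 | X_1, u) = 0.1, p(X_2 | X_2, u) = 0.8, and observation likelihoods p(z_t | X_1) = 0.3, p(z_t | X_2) = 0.7. The prediction gives (0.9 * 0.5 + 0.2 * 0.5, 0.1 * 0.5 + 0.8 * 0.5) = (0.55, 0.45); weighting by the observation gives (0.165, 0.315), whose sum 0.48 is 1/η_t; normalizing yields b(x_t) ≈ (0.34, 0.66).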

29 Normalizer: η_t. η_t is just a constant ensuring that b(x_t) is still a probability distribution:

  1/η_t = Σ_{x_t} p(z_t | x_t) Σ_{x_{t-1}} p(x_t | x_{t-1}, u_{t-1}) b(x_{t-1}).

30 Filtering: Discrete Case.

  float transition_model(int to, int from, int control);
  float observation_model(int observation, int state);

  void filter(float* b, int n_states, int z, int u) {
    // clear the predicted belief
    float b_pred[n_states];
    for (int i = 0; i < n_states; i++)
      b_pred[i] = 0;
    // predict: fold the transition model into the previous belief
    for (int i = 0; i < n_states; i++)
      for (int j = 0; j < n_states; j++)
        b_pred[i] += transition_model(i, j, u) * b[j];
    // integrate the observation, accumulating the inverse normalizer
    float inverse_eta = 0;
    for (int i = 0; i < n_states; i++)
      inverse_eta += (b_pred[i] *= observation_model(z, i));
    // normalize
    float eta = 1.f / inverse_eta;
    for (int i = 0; i < n_states; i++)
      b[i] = b_pred[i] * eta;
  }
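A hypothetical two-state use of the filter routine above; the model tables are made up for illustration.

  #include <stdio.h>

  /* T[to][from] = p(x_t = to | x_{t-1} = from); each column sums to 1. */
  static const float T[2][2] = { {0.9f, 0.2f},
                                 {0.1f, 0.8f} };
  /* O[z][x] = p(z | x); for each state the two observations sum to 1. */
  static const float O[2][2] = { {0.6f, 0.3f},
                                 {0.4f, 0.7f} };

  float transition_model(int to, int from, int control) { (void)control; return T[to][from]; }
  float observation_model(int observation, int state) { return O[observation][state]; }

  int main(void) {
    float b[2] = {0.5f, 0.5f};              /* uniform initial belief */
    filter(b, 2, /* z = */ 1, /* u = */ 0); /* one predict-update step */
    printf("b = [%f, %f]\n", b[0], b[1]);   /* approx. [0.41, 0.59] */
    return 0;
  }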

31 Filtering: Alternative Formulation. Predict: incorporate into the last belief b_{t-1} the most recent control. From the transition model and the last belief, compute the following joint distribution through the chain rule:

  p(x_t, x_{t-1} | z_{1:t-1}, u_{1:t-1}) = p(x_t | x_{t-1}, u_{t-1}) p(x_{t-1} | z_{1:t-1}, u_{1:t-2}),    (19)

where the last factor is b_{t-1}. From the joint, remove x_{t-1} through marginalization to obtain the predicted belief b_{t|t-1}:

  p(x_t | z_{1:t-1}, u_{1:t-1}) = Σ_{x_{t-1}} p(x_t, x_{t-1} | z_{1:t-1}, u_{1:t-1}).    (20)

32 Filtering: Alternative Formulation. Update: from the predicted belief b_{t|t-1}, compute the joint distribution that predicts the observation. From the observation model and the predicted belief, the chain rule gives

  p(x_t, z_t | z_{1:t-1}, u_{1:t-1}) = p(z_t | x_t) p(x_t | z_{1:t-1}, u_{1:t-1}).    (21)

Incorporate the current observation through conditioning on the actual measurement:

  b_{t|t} = p(x_t | z_{1:t}, u_{1:t-1}) = p(x_t, z_t | z_{1:t-1}, u_{1:t-1}) / p(z_t | z_{1:t-1}, u_{1:t-1}).    (22)

Note: since we already know the value of z_t, we do not need to compute the joint distribution for all possible values of z in Z, but just for the current measurement.
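Splitting the two steps into separate routines mirrors this formulation directly; a minimal sketch reusing the discrete models declared on slide 30 (function and parameter names are assumptions).

  /* Predict (Eqs. 19-20): fold the control into the belief, b_{t-1} -> b_{t|t-1}. */
  void predict(const float* b, float* b_pred, int n_states, int u) {
    for (int i = 0; i < n_states; i++) {
      b_pred[i] = 0;
      for (int j = 0; j < n_states; j++)
        b_pred[i] += transition_model(i, j, u) * b[j];
    }
  }

  /* Update (Eqs. 21-22): condition on the actual measurement z, b_{t|t-1} -> b_{t|t}. */
  void update(const float* b_pred, float* b, int n_states, int z) {
    float inverse_eta = 0;
    for (int i = 0; i < n_states; i++)
      inverse_eta += b_pred[i] * observation_model(z, i);
    for (int i = 0; i < n_states; i++)
      b[i] = b_pred[i] * observation_model(z, i) / inverse_eta;
  }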
