Mobile Robot Localization


The Problem of Robot Localization
Given a map of the environment, how can a robot determine its pose (planar coordinates + orientation)?
Two sources of uncertainty:
- observations depend probabilistically on the robot pose
- pose changes depend probabilistically on the robot actions
Example: uncertainty of the robot position after travelling along the red path (the shaded area indicates the probability distribution).
Slides on Robot Localization are partly adapted from Sebastian Thrun and Michael Beetz.

Formalization of the Localization Problem
m = model of the environment (e.g. a map)
s_t = pose at time t
o_t = observation at time t
a_t = action at time t
d_{0..t} = o_0, a_0, o_1, a_1, ..., o_t, a_t = observation and action data up to time t
Task: estimate p(s_t | d_{0..t}, m) = b_t(s_t), the "robot's belief state at time t".
Markov properties:
- the current observation depends only on the current pose
- the next pose depends only on the current pose and the current action
- "the future is independent of the past given the current state"
The Markov assumption implies a static environment! (It is violated, for example, by robot actions that change the environment.)

Structure of Probabilistic Localization
[Figure: dynamic Bayesian network with map m, observations o_1, o_2, o_3, ..., o_t, poses s_1, s_2, s_3, ..., s_t, and actions a_1, a_2, a_3, ..., a_{t-1}]

Recursive Markov Localization
b_t(s_t) = p(s_t | o_0, a_0, o_1, a_1, ..., a_{t-1}, o_t, m)
  (Bayes)       = α_t p(o_t | o_0, a_0, ..., a_{t-1}, s_t, m) p(s_t | o_0, a_0, ..., a_{t-1}, m)
  (Markov)      = α_t p(o_t | s_t, m) p(s_t | o_0, a_0, ..., a_{t-1}, m)
  (Total prob.) = α_t p(o_t | s_t, m) ∫ p(s_t | o_0, a_0, ..., a_{t-1}, s_{t-1}, m) p(s_{t-1} | o_0, a_0, ..., a_{t-1}, m) ds_{t-1}
  (Markov)      = α_t p(o_t | s_t, m) ∫ p(s_t | a_{t-1}, s_{t-1}, m) p(s_{t-1} | o_0, a_0, ..., a_{t-1}, m) ds_{t-1}
                = α_t p(o_t | s_t, m) ∫ p(s_t | a_{t-1}, s_{t-1}, m) b_{t-1}(s_{t-1}) ds_{t-1}
where α_t is a normalizing factor. This yields the belief update equation
b_t(s_t) = α_t p(o_t | s_t, m) ∫ p(s_t | a_{t-1}, s_{t-1}, m) b_{t-1}(s_{t-1}) ds_{t-1}
p(o_t | s_t, m)              probabilistic perceptual model - often time-invariant: p(o | s, m)
p(s_t | a_{t-1}, s_{t-1}, m) probabilistic motion model - often time-invariant: p(s' | a, s, m)
Both models must be specified for a specific robot and a specific environment.

Probabilistic Sensor Model for Laser Range Finder
[Figure: probability p(o | s) plotted over the measured distance o [cm], with a peak near the expected distance]
Sensor properties can often be approximated by Gaussian mixture densities.
Adapted from: Sebastian Thrun, Probabilistic Algorithms in Robotics.
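Returning to the belief update equation above, the following is a minimal sketch of how it can be implemented on a discrete grid of poses. It is an illustrative assumption, not code from the slides: the helper name, the "move right" motion model, and the sensor likelihood values are all made up for demonstration.

```python
import numpy as np

def markov_localization_step(belief, motion_model, sensor_likelihood):
    """One recursive belief update on a discrete pose grid (illustrative sketch).

    belief            -- b_{t-1}(s), shape (N,), sums to 1
    motion_model      -- p(s_t | a_{t-1}, s_{t-1}), shape (N, N), columns indexed by s_{t-1}
    sensor_likelihood -- p(o_t | s_t), shape (N,)
    """
    predicted = motion_model @ belief          # integral over s_{t-1}
    posterior = sensor_likelihood * predicted  # multiply by the perceptual model
    return posterior / posterior.sum()         # normalization by alpha_t

# Tiny example with 5 grid cells: the action shifts the robot one cell to the right.
N = 5
belief = np.full(N, 1.0 / N)                        # initially uniform belief
motion = np.roll(np.eye(N), 1, axis=0)              # deterministic "move right" (wrap-around)
likelihood = np.array([0.1, 0.1, 0.6, 0.1, 0.1])    # sensor favours cell 2
belief = markov_localization_step(belief, motion, likelihood)
print(belief)
```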

Probabilistic Pose Model
Possible poses are represented by a probability distribution ("belief") which is modified during the localization process.
[Figures: initial distribution, intermediate distribution, final distribution]

Grid-based Markov Localization (Example 1)
[Figures: robot path with 4 reference poses, with the belief initially equally distributed; distribution of the belief at the second, third, and fourth pose]
Ambiguous localizations due to a repetitive and symmetric environment are sharpened and disambiguated after several observations.

Grid-based Markov Localization (Example 2)
[Figures: map and robot path; maximum position probabilities after 6 steps; maximum position probabilities after 12 steps] [Burgard et al. 96]

Approximating the Probabilistic Update by Monte Carlo Localization (MCL)
"Importance Sampling", "Particle Filters", "Condensation Algorithm" - different names for a method to approximate a probability density by discrete samples (see slide "Sampling Methods").
Approximate implementation of the belief update equation
b_t(s_t) = α_t p(o_t | s_t, m) ∫ p(s_t | a_{t-1}, s_{t-1}, m) b_{t-1}(s_{t-1}) ds_{t-1}
1. Draw a sample s_{t-1} from the current belief b_{t-1}(s_{t-1}) with a likelihood given by the importance factors of the belief b_{t-1}(s_{t-1}).
2. For this s_{t-1}, guess a successor pose s_t according to the distribution p(s_t | a_{t-1}, s_{t-1}, m).
3. Compute a preliminary importance factor p(o_t | s_t, m) for this sample and assign it to the new sample representing b_t(s_t).
4. Repeat steps 1 through 3 m times. Finally, normalize the importance factors in the new sample set b_t(s_t) so that they add up to 1.
MCL is very effective and can give good results with as few as 100 samples. A minimal code sketch of these four steps follows below.
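The sketch below implements the four MCL steps for a simple 1-D pose with Gaussian motion and sensor models. It is an assumed illustration only: the function name, the noise parameters, and the example values are not from the slides.

```python
import numpy as np

def mcl_update(particles, weights, action, observation,
               motion_noise=0.1, sensor_noise=0.5, rng=np.random.default_rng(0)):
    """One Monte Carlo Localization step for a 1-D pose (illustrative sketch)."""
    n = len(particles)
    # Step 1: draw samples s_{t-1} according to the importance factors of b_{t-1}
    idx = rng.choice(n, size=n, p=weights)
    # Step 2: guess successor poses s_t from the motion model p(s_t | a_{t-1}, s_{t-1})
    new_particles = particles[idx] + action + rng.normal(0.0, motion_noise, size=n)
    # Step 3: preliminary importance factors p(o_t | s_t), here a Gaussian sensor model
    new_weights = np.exp(-0.5 * ((observation - new_particles) / sensor_noise) ** 2)
    # Step 4: normalize the importance factors so that they add up to 1
    new_weights /= new_weights.sum()
    return new_particles, new_weights

# Usage: 100 particles, the robot moves by +1.0, the sensor reports pose 3.2.
particles = np.random.default_rng(1).uniform(0.0, 10.0, size=100)
weights = np.full(100, 1.0 / 100)
particles, weights = mcl_update(particles, weights, action=1.0, observation=3.2)
```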

Simultaneous Localization and Mapping (SLAM)
Typical problem for a mobile robot in an unknown environment:
- learn the environment ("mapping")
- keep track of position and orientation ("localization")
"Chicken-and-egg" problem:
- the robot needs knowledge of the environment in order to interpret sensor readings for localization
- the robot needs pose knowledge in order to interpret sensor readings for mapping
Make the environment a multidimensional probabilistic variable!
Example: the model of the environment is a probabilistic occupancy grid
P_ij = e_ij       if (x_i, y_i) is empty
P_ij = 1 - e_ij   if (x_i, y_i) is occupied

Bayes Filter for SLAM
Extend the localization approach to simultaneous mapping:
b_t(s_t) = α_t p(o_t | s_t, m) ∫ p(s_t | a_{t-1}, s_{t-1}, m) b_{t-1}(s_{t-1}) ds_{t-1}
b_t(s_t, m_t) = α_t p(o_t | s_t, m_t) ∫∫ p(s_t, m_t | a_{t-1}, s_{t-1}, m_{t-1}) b_{t-1}(s_{t-1}, m_{t-1}) ds_{t-1} dm_{t-1}
Assuming a time-invariant map and map-independent motion:
b_t(s_t, m) = α_t p(o_t | s_t, m) ∫ p(s_t | a_{t-1}, s_{t-1}) b_{t-1}(s_{t-1}, m) ds_{t-1}
b_t(s_t, m) is (N+3)-dimensional, with N variables for m (N >> 1000) and 3 for s_t => complexity problem.
Important approaches to cope with this complexity:
- Kalman filtering (Gaussian probabilities and linear updating)
- estimating only the mode of the posterior, argmax_m b(m)
- treating the robot path as "missing variables" in Expectation Maximization

Kalman Filter for SLAM Problems (1)
Basic Kalman Filter assumptions:
1. The next-state function is linear with added Gaussian noise.
2. The perceptual model is linear with added Gaussian noise.
3. The initial uncertainty is Gaussian.
Ad 1) The next state in SLAM is the pose s_t and the model m. m is assumed constant; s_t is non-linear in general, approximately linear in a first-degree Taylor series expansion ("Extended Kalman Filtering").
Let x_t be the state variable (s_t, m) and ε_control Gaussian noise with covariance Σ_control, then
x_t = A x_{t-1} + B a_{t-1} + ε_control   =>   p(x_t | a_{t-1}, x_{t-1})
Ad 2) Sensor measurements are usually nonlinear in robotics, with non-white Gaussian noise. Approximation by a first-degree Taylor series and ε_measure Gaussian noise with covariance Σ_measure:
o_t = C x_t + ε_measure   =>   p(o_t | x_t)

Kalman Filter for SLAM Problems (2)
The Bayes Filter equation
b_t(s_t, m) = α_t p(o_t | s_t, m) ∫ p(s_t | a_{t-1}, s_{t-1}) b_{t-1}(s_{t-1}, m) ds_{t-1}
can be rewritten using the standard Kalman Filter equations:
μ_t^- = μ_{t-1} + B a_{t-1}
Σ_t^- = Σ_{t-1} + Σ_control
K_t   = Σ_t^- C^T (C Σ_t^- C^T + Σ_measure)^{-1}
μ_t   = μ_t^- + K_t (o_t - C μ_t^-)
Σ_t   = (I - K_t C) Σ_t^-
Compare with the slides on Kalman Filtering in "Bildverarbeitung".
- Kalman Filtering estimates the full posterior distribution over all poses (not only the maximum).
- Guaranteed convergence to the true map and robot pose.
- Gaussian sensor noise is a bad model for correspondence problems.

Remember: Kalman Filters (1)
A Kalman filter provides an iterative scheme for (i) predicting an event and (ii) incorporating new measurements (prediction and measurement update).
Assume a linear system with observations depending linearly on the system state, and white Gaussian noise disturbing the system evolution and the observations:
x_{k+1} = A_k x_k + w_k
z_k = H_k x_k + v_k
What is the best estimate of x_k based on the previous estimate x_{k-1} and the observation z_k?
x_k   quantity of interest ("state") at time k
A_k   model for the evolution of x_k
w_k   zero-mean Gaussian noise with covariance Q_k
z_k   observations at time k
H_k   relation of the observations to the state
v_k   zero-mean Gaussian noise with covariance R_k
Often, A_k, Q_k, H_k and R_k are constant.

Remember: Kalman Filters (2)
The best a priori estimate of x_k before observing z_k is
x_k^- = A_k x_{k-1}
After observing z_k, the a priori estimate is updated by
x_k = x_k^- + K_k (z_k - H_k x_k^-)
K_k is the Kalman gain matrix. K_k is determined to minimize the a posteriori variance P_k of the estimation error in x_k. The minimizing K_k is
K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}
with P_k^- = A_k P_{k-1} A_k^T + Q_{k-1} and P_k = (I - K_k H_k) P_k^-
P_k^- is the covariance of the estimation error before the observation of z_k.
Iterative order of computations (initialization: x_0, P_0):
Prediction:  (1) x_k^- = A_k x_{k-1}   (2) P_k^- = A_k P_{k-1} A_k^T + Q_{k-1}
Correction:  (1) K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}   (2) x_k = x_k^- + K_k (z_k - H_k x_k^-)   (3) P_k = (I - K_k H_k) P_k^-
Then set k := k+1 and repeat.
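As a concrete illustration of this iteration, here is a minimal Python/numpy sketch of one Kalman filter cycle (prediction followed by correction). The constant-velocity model, the measurement sequence, and all numerical values are assumptions chosen only for demonstration.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One Kalman filter iteration: prediction, then correction (illustrative sketch)."""
    # Prediction
    x_prior = A @ x                       # x_k^- = A x_{k-1}
    P_prior = A @ P @ A.T + Q             # P_k^- = A P_{k-1} A^T + Q
    # Correction
    S = H @ P_prior @ H.T + R             # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)  # Kalman gain K_k
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    return x_post, P_post

# Example: 1-D position/velocity state, position-only measurements (assumed values).
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity evolution model
H = np.array([[1.0, 0.0]])                # only the position is observed
Q = 0.01 * np.eye(2)                      # process noise covariance
R = np.array([[0.25]])                    # measurement noise covariance
x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2]:            # a short sequence of measurements
    x, P = kalman_step(x, P, np.array([z]), A, H, Q, R)
print(x)
```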

Example: Underwater Sonic Mapping
From: S. Williams, G. Dissanayake, and H.F. Durrant-Whyte. Towards terrain-aided navigation for underwater robotics. Advanced Robotics, 15(5).
Kalman Filter map and pose estimation. The figure shows:
- the estimated path of the underwater vehicle, with ellipses indicating position uncertainty
- 14 landmarks obtained by sonar measurements, with ellipses indicating uncertainty; 5 are artificial landmarks, the rest are other reflective objects
- additional dots for weak landmark hypotheses

Solving the Correspondence Problem
Map obtained from raw sensory data of a cyclic environment (the large hall of a museum) based on the robot's odometry => correspondence problem!
Map obtained by the EM algorithm: iterative maximization of both robot path and model => non-incremental procedure!

Mapping with Expectation Maximization
Principle of the EM mapping algorithm - repeat until no more changes occur:
E-step: estimate the robot poses for a given map.
M-step: calculate the most likely map given the poses.
The algorithm computes the maximum of the expectation of the joint log likelihood of the data d^t = {a_0, o_0, ..., a_t, o_t} and the robot's path s^t = {s_0, ..., s_t}:
m^[i+1] = argmax_m E_{s^t}[ log p(s^t, d^t | m) | m^[i], d^t ]
        = argmax_m Σ_τ ∫ p(s_τ | m^[i], d^t) log p(o_τ | s_τ, m) ds_τ   (up to terms independent of m)
E-step: compute the posterior of pose s_τ based on m^[i] and all data, including t > τ => different from incremental localization.
M-step: maximize log p(o_τ | s_τ, m) for all τ and all poses s_τ under the expectation calculated in the E-step.

Mapping with Incremental Maximum-Likelihood Estimation
Stepwise maximum-likelihood estimation of map and pose is inferior to Kalman Filtering and EM estimation, but less complex.
Obtain a series of maximum-likelihood maps and poses m_1*, m_2*, ... and s_1*, s_2*, ... by maximizing the marginal likelihood:
<m_t*, s_t*> = argmax_{m_t, s_t} p(o_t | m_t, s_t) p(m_t, s_t | a_t, s_{t-1}*, m_{t-1}*)
This equation follows from the Bayes Filter equation by assuming that the map and pose at t-1 are known for certain.
Real-time computation is possible, but the method is unable to handle cycles.

Example of Incremental Maximum-Likelihood Mapping
At every time step, the map is grown by finding the most likely continuation. Map estimates do not converge as the robot completes a cycle because of accumulated pose estimation error.
Examples from: Sebastian Thrun, Probabilistic Algorithms in Robotics.

Maintaining a Pose Posterior Distribution
Problems with cyclic environments can be overcome by maintaining not only the maximum-likelihood pose estimate at t-1 but also the uncertainty distribution, using the Bayes Filter:
p(s_t | o_{0..t}, a_{0..t}) = α p(o_t | s_t) ∫ p(s_t | a_{t-1}, s_{t-1}) p(s_{t-1} | o_{0..t-1}, a_{0..t-1}) ds_{t-1}
The last example is repeated, representing the pose posterior by particles. The uncertainty is transferred onto the map, so major corrections remain possible.

Estimating Probabilities from a Database
Given a sufficiently large database with tuples a^(1) ... a^(N) of an unknown distribution P(X), we can compute maximum likelihood estimates of all partial joint probabilities and hence of all conditional probabilities.
X_m1, ..., X_mK = subset of X_1, ..., X_L with K ≤ L
w_a = number of tuples in the database with X_m1 = a_m1, ..., X_mK = a_mK
N = total number of tuples
The maximum likelihood estimate of P(X_m1 = a_m1, ..., X_mK = a_mK) is
P^(X_m1 = a_m1, ..., X_mK = a_mK) = w_a / N
If a priori information is available, it may be introduced via a bias m_a:
P^(X_m1 = a_m1, ..., X_mK = a_mK) = (w_a + m_a) / N
Often m_a = 1 is chosen for all tuples a to express equal likelihoods in the case of an empty database.

Expectation Maximization (EM)
Basic idea of the EM algorithm. Initially one has:
- data with missing values, e.g. unassigned cluster memberships
- distribution models, e.g. a rule to assign each datum to the nearest-distance cluster center
Iterate the two steps:
E-step: compute expected distribution parameters based on the data (initially by random choice).
M-step: maximize the likelihood of the missing values based on the distribution parameters.
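Returning to the counting estimate P^ = (w_a + m_a) / N above, the following small sketch shows how partial joint probabilities can be estimated from tuple frequencies. The function name, the dictionary-based database format, and the example data are illustrative assumptions.

```python
from collections import Counter

def estimate_joint(database, attributes, bias=0):
    """Maximum likelihood estimate P(X_m1=a_m1, ..., X_mK=a_mK) = (w_a + bias) / N
    from a database of tuples given as dicts; illustrative sketch."""
    counts = Counter(tuple(row[a] for a in attributes) for row in database)
    N = len(database)
    return {combo: (w + bias) / N for combo, w in counts.items()}

# Hypothetical database over two binary attributes A and B.
db = [{"A": 1, "B": 0}, {"A": 1, "B": 1}, {"A": 0, "B": 1}, {"A": 1, "B": 1}]
print(estimate_joint(db, ["A", "B"]))   # e.g. P(A=1, B=1) = 2/4
```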

Example for EM (1)
Problem: fit 3 straight lines to data points, not knowing which point belongs to which line.
Algorithm:
A. Select 3 random lines initially.
B. Assign the data points to the lines by a minimum-distance criterion.
C. Determine the best-fitting straight line for the assigned data points.
D. Repeat B and C until no further changes occur.
(Example by Anna Ergorova, FU Berlin)
A code sketch of steps B and C follows below.

Example for EM (2)
[Figures: line estimates over several iterations, shown after steps B and C]
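A minimal sketch of steps A-C under simplifying assumptions: lines are fitted as y = m·x + b by least squares, points are assigned by vertical distance, and the fixed iteration count replaces an explicit convergence test. Function names and data are hypothetical.

```python
import numpy as np

def fit_lines_em(x, y, n_lines=3, n_iter=20, rng=np.random.default_rng(0)):
    """Alternate point assignment (step B) and line refitting (step C); illustrative sketch."""
    # Step A: start from random lines (slope, intercept)
    lines = rng.normal(size=(n_lines, 2))
    for _ in range(n_iter):
        # Step B: assign each point to the line with minimum vertical distance
        dist = np.abs(y[None, :] - (lines[:, 0:1] * x[None, :] + lines[:, 1:2]))
        assign = dist.argmin(axis=0)
        # Step C: refit each line to its assigned points by least squares
        for k in range(n_lines):
            if np.sum(assign == k) >= 2:
                lines[k] = np.polyfit(x[assign == k], y[assign == k], deg=1)
    return lines, assign

# Hypothetical data drawn from three noisy lines.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=300)
y = np.concatenate([2*x[:100] + 1, -x[100:200] + 5, 0.5*x[200:] - 3]) + rng.normal(0, 0.2, 300)
lines, assignment = fit_lines_em(x, y)
```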

K-Means Clustering Algorithm (1)
- special case of EM
- the most popular clustering algorithm
- searches for a local minimum of the sum of Euclidean sample distances to the cluster centers
- guaranteed convergence in a finite number of steps
- requires initialisation of a fixed number of clusters K
- may converge to a local minimum

K-Means Clustering Algorithm (2)
Represent the exemplars as points in an N-dimensional feature space.
A. Choose arbitrary initial cluster centers.
B. Determine the cluster assignments using the distance measure.
C. Move the cluster centers to the center of gravity of the assigned exemplars.
D. Repeat B and C until no further changes occur.
[Figure: 2-D feature space (Feature 1, Feature 2) with data points; the final positions of the cluster centers have been reached]
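Steps A-D above translate directly into the following numpy sketch; it is an illustrative assumption (function name, iteration cap, and sample data are made up), not reference code from the slides.

```python
import numpy as np

def k_means(points, K, n_iter=100, rng=np.random.default_rng(0)):
    """K-means clustering, steps A-D from the slide (illustrative sketch).

    points -- array of shape (n_samples, n_features)
    """
    # A: choose arbitrary initial cluster centers (random exemplars)
    centers = points[rng.choice(len(points), size=K, replace=False)]
    for _ in range(n_iter):
        # B: assign each exemplar to the nearest cluster center (Euclidean distance)
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # C: move each center to the center of gravity of its assigned exemplars
        new_centers = np.array([points[labels == k].mean(axis=0) if np.any(labels == k)
                                else centers[k] for k in range(K)])
        # D: stop when no further changes occur
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Usage with hypothetical 2-D data around three locations.
data = np.vstack([np.random.default_rng(1).normal(loc, 0.3, size=(50, 2)) for loc in (0, 3, 6)])
centers, labels = k_means(data, K=3)
```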

Learning Mixtures of Gaussians
Determine the Gaussian mixture distribution with K multivariate Gaussians which best describes the given data => unsupervised clustering.
Multivariate Gaussian mixture distribution:
p(x) = Σ_{i=1..K} w_i N(x; μ_i, Σ_i)   with   Σ_{i=1..K} w_i = 1
A. Select w_i, μ_i and Σ_i, i = 1..K, at random (K is given).
B. For each datum x_j compute the probability p_ij that x_j was generated by N(μ_i, Σ_i):
   p_ij ∝ w_i N(x_j; μ_i, Σ_i), normalized so that Σ_i p_ij = 1
C. Compute new weights w_i', means μ_i', and covariances Σ_i' by maximum likelihood estimation:
   w_i' = Σ_j p_ij / N
   μ_i' = Σ_j p_ij x_j / Σ_j p_ij
   Σ_i' = Σ_j p_ij (x_j - μ_i')(x_j - μ_i')^T / Σ_j p_ij
D. Repeat B and C until no further changes occur.

General Form of the EM Algorithm
Compute an unknown distribution for data with hidden variables.
x   observed values of all samples
Y   variables with hidden values for all samples
θ   parameters of the probabilistic model
EM algorithm:
θ^(i+1) = argmax_θ Σ_y p(Y = y | x, θ^(i)) log p(x, Y = y | θ)
E-step: computation of the summation => likelihood of the "completed" data w.r.t. the distribution p(Y = y | x, θ^(i)).
M-step: maximization of the expected likelihood w.r.t. the parameters θ.
The computed parameters increase the likelihood of the data and hidden values with each iteration. The algorithm terminates in a local maximum.
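A compact sketch of steps B and C for a Gaussian mixture is shown below. It is illustrative only: full covariances with a small regularization term, a fixed iteration count, and scipy's multivariate normal density are assumptions, and the sample data are made up.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_em(X, K, n_iter=50, rng=np.random.default_rng(0)):
    """EM for a mixture of K multivariate Gaussians (steps B and C); illustrative sketch."""
    n, d = X.shape
    # Step A: random initialization of weights, means, covariances
    w = np.full(K, 1.0 / K)
    mu = X[rng.choice(n, size=K, replace=False)]
    sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])
    for _ in range(n_iter):
        # Step B (E-step): responsibilities p_ij that datum j was generated by component i
        p = np.array([w[i] * multivariate_normal.pdf(X, mean=mu[i], cov=sigma[i])
                      for i in range(K)])                      # shape (K, n)
        p /= p.sum(axis=0, keepdims=True)
        # Step C (M-step): maximum likelihood re-estimation of w, mu, sigma
        Ni = p.sum(axis=1)                                     # effective counts per component
        w = Ni / n
        mu = (p @ X) / Ni[:, None]
        for i in range(K):
            diff = X - mu[i]
            sigma[i] = (p[i, :, None] * diff).T @ diff / Ni[i] + 1e-6 * np.eye(d)
    return w, mu, sigma

# Usage with hypothetical 2-D data from two clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
weights, means, covs = gmm_em(X, K=2)
```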

Expectation Maximization for Estimating a Bayes Net with a Hidden Variable
Expectation step of EM: use the current (initial) probability estimates to compute the probability P(a) for all attribute combinations a (including the values of the hidden variables).

a*  = [ *,       X_2=a_m2, X_3=a_m3, ... ]   w_a*              (missing value, absolute frequency)
a_1 = [ X_1=a_1, X_2=a_m2, X_3=a_m3, ... ]   w_a* · P(a_1)
a_2 = [ X_1=a_2, X_2=a_m2, X_3=a_m3, ... ]   w_a* · P(a_2)
...
a_M = [ X_1=a_M, X_2=a_m2, X_3=a_m3, ... ]   w_a* · P(a_M)     (completed database)

Recommended reading: Borgelt & Kruse, Graphical Models, Wiley.

Example for Expectation Maximization (1)
(adapted from Borgelt & Kruse, Graphical Models, Wiley 2002)
Given 4 binary probabilistic variables A, B, C, H with known dependency structure: H is the common parent of A, B, and C.
Given also a database with tuples [* A B C] where H is a missing attribute (absolute frequencies of occurrence):

H  A  B  C  w
*  T  T  T  14
*  T  T  F  11
*  T  F  T  20
*  T  F  F  20
*  F  T  T   5
*  F  T  F   5
*  F  F  T  11
*  F  F  F  14

Task: estimate the conditional probabilities of the Bayes Net nodes!

Example for Expectation Maximization (2)
Initial (random) probability assignments:

H  P(H)    A  H  P(A|H)    B  H  P(B|H)    C  H  P(C|H)
T  0.3     T  T  0.4       T  T  0.7       T  T  0.8
F  0.7     T  F  0.6       T  F  0.8       T  F  0.5
           F  T  0.6       F  T  0.3       F  T  0.2
           F  F  0.4       F  F  0.2       F  F  0.5

With
P(H | A, B, C) = P(A|H) P(B|H) P(C|H) P(H) / Σ_H P(A|H) P(B|H) P(C|H) P(H)
one can complete the database:

H  A  B  C  w          H  A  B  C  w
T  T  T  T   1.27      F  T  T  T  12.73
T  T  T  F   3.14      F  T  T  F   7.86
T  T  F  T   2.93      F  T  F  T  17.07
T  T  F  F   8.14      F  T  F  F  11.86
T  F  T  T   0.92      F  F  T  T   4.08
T  F  T  F   2.37      F  F  T  F   2.63
T  F  F  T   3.06      F  F  F  T   7.94
T  F  F  F   8.49      F  F  F  F   5.51

Example for Expectation Maximization (3)
Based on the modified complete database, one computes the maximum likelihood estimates of the conditional probabilities of the Bayes Net. Example:
P(A=T | H=T) = (1.27 + 3.14 + 2.93 + 8.14) / (1.27 + 3.14 + 2.93 + 8.14 + 0.92 + 2.37 + 3.06 + 8.49) ≈ 0.51
This way one gets new probability assignments:

H  P(H)    A  H  P(A|H)    B  H  P(B|H)    C  H  P(C|H)
T  0.3     T  T  0.51      T  T  0.25      T  T  0.27
F  0.7     T  F  0.71      T  F  0.39      T  F  0.60
           F  T  0.49      F  T  0.75      F  T  0.73
           F  F  0.29      F  F  0.61      F  F  0.40

This completes the first iteration. After ca. 700 iterations the modifications of the probabilities become negligible. The resulting values are:

H  P(H)    A  H  P(A|H)    B  H  P(B|H)    C  H  P(C|H)
T  0.5     T  T  0.5       T  T  0.2       T  T  0.4
F  0.5     T  F  0.8       T  F  0.5       T  F  0.6
           F  T  0.5       F  T  0.8       F  T  0.6
           F  F  0.2       F  F  0.2       F  F  0.4
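The two EM steps for this structure (H as the hidden parent of A, B, C) can be sketched as follows. This is an assumed illustration, not the authors' code: the function name, the encoding of the database, the initial parameter values, and the iteration count are all hypothetical, so the intermediate numbers it prints need not match the slide's tables.

```python
import numpy as np

# Database rows (A, B, C) with absolute frequencies w; H is the missing attribute.
rows = [(1, 1, 1, 14), (1, 1, 0, 11), (1, 0, 1, 20), (1, 0, 0, 20),
        (0, 1, 1, 5), (0, 1, 0, 5), (0, 0, 1, 11), (0, 0, 0, 14)]

def em_iteration(pH, pA, pB, pC):
    """One EM iteration for the network H -> A, B, C (illustrative sketch).
    pH = P(H=T); pA[h] = P(A=T | H=h) for h in {0 (=F), 1 (=T)}, likewise pB, pC."""
    counts = np.zeros(2)                                  # expected counts for H = F, T
    cA, cB, cC = np.zeros(2), np.zeros(2), np.zeros(2)
    for a, b, c, w in rows:
        # E-step: P(H | A, B, C) via the product formula, normalized over H
        lik = np.array([(pH if h else 1 - pH)
                        * (pA[h] if a else 1 - pA[h])
                        * (pB[h] if b else 1 - pB[h])
                        * (pC[h] if c else 1 - pC[h]) for h in (0, 1)])
        post = lik / lik.sum()
        counts += w * post                                # distribute w over both completions
        cA += w * post * a
        cB += w * post * b
        cC += w * post * c
    # M-step: maximum likelihood re-estimation of the conditional probabilities
    return counts[1] / counts.sum(), cA / counts, cB / counts, cC / counts

# Arbitrary starting values; iterate a few hundred times (the slides report ca. 700).
pH, pA, pB, pC = 0.3, np.array([0.6, 0.4]), np.array([0.8, 0.7]), np.array([0.5, 0.8])
for _ in range(700):
    pH, pA, pB, pC = em_iteration(pH, pA, pB, pC)
print(pH, pA, pB, pC)
```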
