Robotics. Mobile Robotics. Marc Toussaint U Stuttgart


1 Robotics: Mobile Robotics
State estimation, Bayes filter, odometry, particle filter, Kalman filter, SLAM, joint Bayes filter, EKF SLAM, particle SLAM, graph-based SLAM
Marc Toussaint, U Stuttgart

2 DARPA Grand Challenge

3 DARPA Urban Challenge

4 Quadcopter Indoor Localization

5 STAIR: STanford Artificial Intelligence Robot

6 Types of Robot Mobility

7 Types of Robot Mobility

8 Each type of robot mobility corresponds to a system equation
x_{t+1} = x_t + τ f(x_t, u_t)
or, if the dynamics are stochastic,
P(x_{t+1} | u_t, x_t) = N(x_{t+1} | x_t + τ f(x_t, u_t), Σ)
We considered control, path finding, and trajectory optimization. For all of this we assumed the state x_t of the robot (e.g., its posture/position) to be known!
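As a concrete illustration of such a system equation, here is a minimal Euler-integration sketch. The dynamics f, the state layout (x, y, heading), and the control (forward velocity, turn rate) are a hypothetical unicycle-type model chosen for the example, not taken from the lecture:

```python
import numpy as np

def step(x, u, tau=0.1, Sigma=None, rng=None):
    """One Euler step of x_{t+1} = x_t + tau * f(x_t, u_t).

    Hypothetical unicycle model: state x = (px, py, heading),
    control u = (forward velocity, turn rate)."""
    f = np.array([u[0] * np.cos(x[2]), u[0] * np.sin(x[2]), u[1]])
    x_next = x + tau * f
    if Sigma is not None:
        # stochastic dynamics: sample from N(x_{t+1} | x_t + tau f(x_t, u_t), Sigma)
        rng = rng if rng is not None else np.random.default_rng(0)
        x_next = rng.multivariate_normal(x_next, Sigma)
    return x_next

x = step(np.zeros(3), u=(1.0, 0.0))  # drive straight for one step of tau = 0.1
```

With zero noise the step is deterministic; passing a covariance Sigma yields one sample of the stochastic transition P(x_{t+1} | u_t, x_t).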

9 Outline
PART I: A core challenge in mobile robotics is state estimation
  Bayesian filtering & smoothing
  Particles, Kalman
PART II: Another challenge is to build a map while exploring
  SLAM (simultaneous localization and mapping)

10 PART I: State Estimation Problem
Our sensory data does not provide sufficient information to determine our location. Given the local sensor readings y_t, the current state x_t (location, position) is uncertain.
Which hallway? Which door exactly? Which heading direction?

11 State Estimation Problem
What is the probability of being in front of room 154, given we see what is shown in the image?
What is the probability given that we were just in front of room 156?
What is the probability given that we were in front of room 156 and moved 15 meters?

12 Recall Bayes' theorem:
P(X | Y) = P(Y | X) P(X) / P(Y)
posterior = likelihood × prior / (normalization)
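A tiny numeric sketch of this rule, with a made-up two-hallway example (the prior and likelihood values are illustrative assumptions):

```python
# Two hypotheses about where we are (prior), and a "we see a door" observation
# whose likelihood differs between them. Bayes' rule gives the posterior.
prior = {"hall_A": 0.5, "hall_B": 0.5}        # P(X)
likelihood = {"hall_A": 0.8, "hall_B": 0.2}   # P(Y = door | X)

unnorm = {x: likelihood[x] * prior[x] for x in prior}
Z = sum(unnorm.values())                      # P(Y), the normalization
posterior = {x: p / Z for x, p in unnorm.items()}
# posterior["hall_A"] = 0.4 / 0.5 = 0.8
```

The normalization Z = P(Y) is exactly the constant that makes the posterior sum to one, which is why the filter derivations below can carry it along as an anonymous c_t.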

13 How can we apply this to the State Estimation Problem?

14 Using Bayes' rule:
P(location | sensor) = P(sensor | location) P(location) / P(sensor)

15 Bayes Filter
x_t = state (location) at time t
y_t = sensor readings at time t
u_{t-1} = control command (action, steering, velocity) at time t-1
Given the history y_{0:t} and u_{0:t-1}, we want to compute the probability distribution over the state at time t:
p_t(x_t) := P(x_t | y_{0:t}, u_{0:t-1})
Generally:
Filtering: P(x_t | y_{0:t})  (condition on all observations up to the current time t)
Smoothing: P(x_t | y_{0:T})  (condition on past and future observations, T > t)
Prediction: P(x_t | y_{0:s})  (condition on observations only up to some earlier time s < t)

16 Bayes Filter (slides 16-22 build up this derivation step by step)

p_t(x_t) := P(x_t | y_{0:t}, u_{0:t-1})
  = c_t P(y_t | x_t, y_{0:t-1}, u_{0:t-1}) P(x_t | y_{0:t-1}, u_{0:t-1})
    [Bayes' rule P(X | Y, Z) = c P(Y | X, Z) P(X | Z), with some normalization constant c_t]
  = c_t P(y_t | x_t) P(x_t | y_{0:t-1}, u_{0:t-1})
    [conditional independence of the observation from past observations and controls]
  = c_t P(y_t | x_t) ∫_{x_{t-1}} P(x_t, x_{t-1} | y_{0:t-1}, u_{0:t-1}) dx_{t-1}
    [by definition of the marginal]
  = c_t P(y_t | x_t) ∫_{x_{t-1}} P(x_t | x_{t-1}, y_{0:t-1}, u_{0:t-1}) P(x_{t-1} | y_{0:t-1}, u_{0:t-1}) dx_{t-1}
    [by definition of a conditional]
  = c_t P(y_t | x_t) ∫_{x_{t-1}} P(x_t | x_{t-1}, u_{t-1}) P(x_{t-1} | y_{0:t-1}, u_{0:t-1}) dx_{t-1}
    [given x_{t-1}, x_t depends only on the control u_{t-1} (Markov property)]
  = c_t P(y_t | x_t) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1}

A Bayes filter updates the posterior belief p_t(x_t) in each time step using the
observation model P(y_t | x_t) and the transition model P(x_t | u_{t-1}, x_{t-1}).

23 Bayes Filter
p_t(x_t) ∝ P(y_t | x_t) · ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1}
where P(y_t | x_t) is the new information, the integral over the old estimate p_{t-1}(x_{t-1}) is the predictive estimate \hat p_t(x_t).

1. We have a belief p_{t-1}(x_{t-1}) of our previous position.
2. We use the motion model to predict the current position:
   \hat p_t(x_t) ∝ ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1}
3. We integrate this with the current observation to get a better belief:
   p_t(x_t) ∝ P(y_t | x_t) \hat p_t(x_t)
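For a discrete state space the integral becomes a sum and the two steps can be written in a few lines. The following is a minimal histogram-filter sketch; the 3-cell corridor, the transition matrix, and the "door" observation model are toy assumptions:

```python
import numpy as np

def bayes_filter_step(belief, u, y, trans, obs):
    """One predict + update step of the discrete Bayes filter.
    trans[u][i, j] = P(x_t = i | u, x_{t-1} = j); obs[y][i] = P(y | x_t = i)."""
    predicted = trans[u] @ belief        # step 2: predictive estimate \hat p_t
    posterior = obs[y] * predicted       # step 3: multiply in P(y_t | x_t)
    return posterior / posterior.sum()   # normalization (the constant c_t)

# Toy 3-cell corridor: action "right" mostly shifts the belief one cell.
trans = {"right": np.array([[0.1, 0.0, 0.0],
                            [0.9, 0.1, 0.0],
                            [0.0, 0.9, 1.0]])}
obs = {"door": np.array([0.1, 0.1, 0.9])}   # a door is visible mainly from cell 2

belief = np.array([1.0, 0.0, 0.0])           # start certain at cell 0
belief = bayes_filter_step(belief, "right", "door", trans, obs)
```

After one step the mass has moved to cell 1: the motion model dominates here because the prediction assigns zero probability to the door cell, so the observation can only reweight the reachable cells.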

24 Typical transition model P(x_t | u_{t-1}, x_{t-1}) in robotics:
(from "Robust Monte Carlo Localization for Mobile Robots", Sebastian Thrun, Dieter Fox, Wolfram Burgard, Frank Dellaert)

25 Odometry ("Dead Reckoning")
The predictive distributions \hat p_t(x_t) without integrating observations (removing the P(y_t | x_t) part from the Bayes filter).
(from "Robust Monte Carlo Localization for Mobile Robots", Sebastian Thrun, Dieter Fox, Wolfram Burgard, Frank Dellaert)

26 Again, predictive distributions \hat p_t(x_t) without integrating landmark observations

27 The Bayes-filtered distributions p_t(x_t), integrating landmark observations

28 Bayesian Filters
How to represent the belief p_t(x_t): as a Gaussian, or as particles.

29 Recall: Particle Representation of a Distribution
Weighted set of N particles {(x^i, w^i)}_{i=1..N}:
p(x) ≈ q(x) := Σ_{i=1}^N w^i δ(x, x^i)

30 Particle Filter := Bayesian Filtering with Particles
(Bayes filter: p_t(x_t) ∝ P(y_t | x_t) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1})
1. Start with N particles {(x^i_{t-1}, w^i_{t-1})}_{i=1..N}
2. Resample particles to get N weight-1 particles: {\hat x^i_{t-1}}_{i=1..N}
3. Use the motion model to get new predictive particles {x^i_t}_{i=1..N}, each x^i_t ~ P(x_t | u_{t-1}, \hat x^i_{t-1})
4. Use the observation model to assign new weights w^i_t ∝ P(y_t | x^i_t)
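The four steps above can be sketched directly in code. The 1D random-walk motion model and Gaussian observation likelihood in the usage example are illustrative assumptions, not part of the lecture:

```python
import numpy as np

def particle_filter_step(particles, weights, u, y, motion_sample, obs_lik, rng):
    """One iteration of the particle filter: resample, predict, reweight."""
    N = len(particles)
    # step 2: resample proportionally to the current weights
    idx = rng.choice(N, size=N, p=weights / weights.sum())
    resampled = particles[idx]
    # step 3: propagate each particle through the stochastic motion model
    predicted = np.array([motion_sample(x, u, rng) for x in resampled])
    # step 4: reweight with the observation likelihood
    new_w = np.array([obs_lik(y, x) for x in predicted])
    return predicted, new_w / new_w.sum()

# Hypothetical 1D example: motion adds the control plus noise, the sensor
# measures position with Gaussian noise of standard deviation 0.2.
rng = np.random.default_rng(0)
motion = lambda x, u, rng: x + u + rng.normal(0.0, 0.1)
lik = lambda y, x: np.exp(-0.5 * (y - x) ** 2 / 0.2 ** 2)

particles = rng.normal(0.0, 1.0, size=100)
weights = np.ones(100) / 100
particles, weights = particle_filter_step(particles, weights, u=1.0, y=1.0,
                                          motion_sample=motion, obs_lik=lik, rng=rng)
```

After one step the weighted particle cloud concentrates near the measurement y = 1.0, which agrees with the commanded motion.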

31 Particle Filter
aka Monte Carlo Localization in the mobile robotics community, Condensation Algorithm in the vision community.
Efficient resampling is important. Typically "residual resampling": instead of sampling the counts directly as \hat n_i ~ Multi({N w_i}), set \hat n_i = ⌊N w_i⌋ + n_i with n_i ~ Multi({N w_i − ⌊N w_i⌋}).
Liu & Chen (1998): Sequential Monte Carlo Methods for Dynamic Systems.
Douc, Cappé & Moulines: Comparison of Resampling Schemes for Particle Filtering.
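A minimal sketch of residual resampling as described above: the integer part of N w_i is kept deterministically, and only the leftover fractional mass is sampled. The example weight vector is made up for illustration:

```python
import numpy as np

def residual_resample(weights, rng):
    """Residual resampling: keep floor(N * w_i) copies of particle i
    deterministically, then draw the remaining particles from the residuals."""
    N = len(weights)
    counts = np.floor(N * weights).astype(int)        # deterministic part
    residual = N * weights - counts                   # leftover fractional mass
    n_rest = N - counts.sum()
    if n_rest > 0:
        extra = rng.choice(N, size=n_rest, p=residual / residual.sum())
        counts += np.bincount(extra, minlength=N)
    return np.repeat(np.arange(N), counts)            # indices of surviving particles

rng = np.random.default_rng(0)
idx = residual_resample(np.array([0.5, 0.25, 0.125, 0.125]), rng)
# always returns exactly N = 4 indices; particle 0 survives at least twice
```

Compared to plain multinomial resampling, only the residual part is random, which reduces the variance of the resampled particle counts.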

32 Example: Quadcopter Indoor Localization

33 Typical Pitfall in Particle Filtering
Predicted particles {x^i_t}_{i=1..N} may have very low observation likelihood, P(y_t | x^i_t) ≈ 0 ("particles die over time").
Classical solution: generate particles also with a proposal other than the purely forward proposal P(x_t | u_{t-1}, x_{t-1}):
Choose a proposal that depends on the new observation y_t, ideally approximating P(x_t | y_t, u_{t-1}, x_{t-1}).
Or mix particles sampled directly from P(y_t | x_t) and from P(x_t | u_{t-1}, x_{t-1}).
("Robust Monte Carlo Localization for Mobile Robots", Sebastian Thrun, Dieter Fox, Wolfram Burgard, Frank Dellaert)

34 Kalman Filter := Bayesian Filtering with Gaussians
Bayes filter: p_t(x_t) ∝ P(y_t | x_t) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1}
Can be computed analytically for linear-Gaussian observations and transitions:
P(y_t | x_t) = N(y_t | C x_t + c, W)
P(x_t | u_{t-1}, x_{t-1}) = N(x_t | A(u_{t-1}) x_{t-1} + a(u_{t-1}), Q)

Definition: N(x | a, A) = |2πA|^{-1/2} exp{ -1/2 (x − a)^T A^{-1} (x − a) }
Product: N(x | a, A) N(x | b, B) = N(x | B(A+B)^{-1} a + A(A+B)^{-1} b, A(A+B)^{-1} B) · N(a | b, A+B)
"Propagation": ∫_y N(x | a + F y, A) N(y | b, B) dy = N(x | a + F b, A + F B F^T)
Transformation: N(F x + f | a, A) = (1/|F|) N(x | F^{-1}(a − f), F^{-1} A F^{-T})
(more identities: see the "Gaussian identities" notes)

35 Kalman Filter Derivation
p_t(x_t) = N(x_t | s_t, S_t)
P(y_t | x_t) = N(y_t | C x_t + c, W)
P(x_t | u_{t-1}, x_{t-1}) = N(x_t | A x_{t-1} + a, Q)

p_t(x_t) ∝ P(y_t | x_t) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1}
  = N(y_t | C x_t + c, W) ∫_{x_{t-1}} N(x_t | A x_{t-1} + a, Q) N(x_{t-1} | s_{t-1}, S_{t-1}) dx_{t-1}
  = N(y_t | C x_t + c, W) N(x_t | \hat s_t, \hat S_t)   [with \hat s_t := A s_{t-1} + a, \hat S_t := Q + A S_{t-1} A^T]
  = N(C x_t + c | y_t, W) N(x_t | \hat s_t, \hat S_t)
  = N[x_t | C^T W^{-1} (y_t − c), C^T W^{-1} C] N(x_t | \hat s_t, \hat S_t)
  = N(x_t | s_t, S_t) · (terms independent of x_t)

S_t = (C^T W^{-1} C + \hat S_t^{-1})^{-1} = \hat S_t − \hat S_t C^T (W + C \hat S_t C^T)^{-1} C \hat S_t,
  with Kalman gain K := \hat S_t C^T (W + C \hat S_t C^T)^{-1}
s_t = S_t [C^T W^{-1} (y_t − c) + \hat S_t^{-1} \hat s_t] = \hat s_t + K (y_t − C \hat s_t − c)

The second-to-last line uses the general Woodbury identity. The last line uses S_t C^T W^{-1} = K and S_t \hat S_t^{-1} = I − K C.
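The resulting update equations are short enough to sketch directly in the slide's notation. The 1D random-walk model in the usage example is a hypothetical toy, chosen so the numbers are easy to check by hand:

```python
import numpy as np

def kalman_step(s, S, u, y, A, a, Q, C, c, W):
    """One Kalman filter step: prediction (s_hat, S_hat), then correction."""
    s_hat = A @ s + a(u)                    # \hat s_t = A s_{t-1} + a(u_{t-1})
    S_hat = A @ S @ A.T + Q                 # \hat S_t = Q + A S_{t-1} A^T
    K = S_hat @ C.T @ np.linalg.inv(W + C @ S_hat @ C.T)   # Kalman gain
    s_new = s_hat + K @ (y - C @ s_hat - c)                # innovation update
    S_new = S_hat - K @ C @ S_hat                          # S_t = (I - K C) \hat S_t
    return s_new, S_new

# Hypothetical 1D random walk with a direct position measurement.
A = np.eye(1); Q = 0.1 * np.eye(1); C = np.eye(1); c = np.zeros(1); W = 0.1 * np.eye(1)
s, S = np.zeros(1), np.eye(1)
s, S = kalman_step(s, S, u=None, y=np.array([1.0]), A=A, a=lambda u: np.zeros(1),
                   Q=Q, C=C, c=c, W=W)
```

With a confident measurement (W small relative to the predictive variance), the posterior mean moves most of the way toward y and the posterior variance shrinks well below the prior's.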

36 Extended Kalman Filter (EKF) and Unscented Transform
Bayes filter: p_t(x_t) ∝ P(y_t | x_t) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}) dx_{t-1}
Can be computed analytically for linear-Gaussian observations and transitions:
P(y_t | x_t) = N(y_t | C x_t + c, W)
P(x_t | u_{t-1}, x_{t-1}) = N(x_t | A(u_{t-1}) x_{t-1} + a(u_{t-1}), Q)
If P(y_t | x_t) or P(x_t | u_{t-1}, x_{t-1}) are not linear:
P(y_t | x_t) = N(y_t | g(x_t), W)
P(x_t | u_{t-1}, x_{t-1}) = N(x_t | f(x_{t-1}, u_{t-1}), Q)
then approximate f and g as locally linear (Extended Kalman Filter), or sample locally from them and re-approximate the result as a Gaussian (Unscented Transform).

37 Bayes Smoothing
Filtering: P(x_t | y_{0:t})
Smoothing: P(x_t | y_{0:T}), T > t
Prediction: P(x_t | y_{0:s}), s < t

38 Bayes Smoothing
Let P = y_{0:t} (past observations) and F = y_{t+1:T} (future observations).
P(x_t | P, F, u_{0:T}) ∝ P(F | x_t, P, u_{0:T}) P(x_t | P, u_{0:T})
  = P(F | x_t, u_{t:T}) · P(x_t | P, u_{0:t-1})
    [the first factor defines β_t(x_t), the second is the filter p_t(x_t)]
Bayesian smoothing fuses a forward filter p_t(x_t) with a backward "filter" β_t(x_t).

Backward recursion (derivation analogous to the Bayesian filter):
β_t(x_t) := P(y_{t+1:T} | x_t, u_{t:T}) = ∫_{x_{t+1}} β_{t+1}(x_{t+1}) P(y_{t+1} | x_{t+1}) P(x_{t+1} | x_t, u_t) dx_{t+1}

40 PART II: Localization and Mapping
The Bayesian filter requires an observation model P(y_t | x_t).
A map is something that provides the observation model: a map tells us, for each x_t, what the sensor readings y_t might look like.

41 Types of Maps
Grid map, landmark map, laser scan map.
K. Murphy (1999): Bayesian map learning in dynamic environments.
Grisetti, Tipaldi, Stachniss, Burgard, Nardi: Fast and Accurate SLAM with Rao-Blackwellized Particle Filters (Victoria Park data set).
M. Montemerlo, S. Thrun, D. Koller & B. Wegbreit (2003): FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges. IJCAI.

42 Simultaneous Localization and Mapping Problem
Notation:
x_t = state (location) at time t
y_t = sensor readings at time t
u_{t-1} = control command (action, steering, velocity) at time t-1
m = the map
Given the history y_{0:t} and u_{0:t-1}, we want to compute the belief over the pose AND the map m at time t:
p_t(x_t, m) := P(x_t, m | y_{0:t}, u_{0:t-1})
We assume to know:
the transition model P(x_t | u_{t-1}, x_{t-1})
the observation model P(y_t | x_t, m)

43 SLAM: Classical Chicken-or-Egg Problem
If we knew the state trajectory x_{0:t}, we could efficiently compute the belief over the map, P(m | x_{0:t}, y_{0:t}, u_{0:t-1}).
If we knew the map, we could use a Bayes filter to compute the belief over the state, P(x_t | m, y_{0:t}, u_{0:t-1}).

SLAM requires tying state estimation and map building together:
1) Joint inference on x_t and m (Kalman-SLAM)
2) Tie a state hypothesis (= particle) to a map hypothesis (particle SLAM)
3) Frame everything as a graph optimization problem (graph SLAM)

45 Joint Bayesian Filter over x and m
A (formally) straightforward approach is the joint Bayesian filter:
p_t(x_t, m) ∝ P(y_t | x_t, m) ∫_{x_{t-1}} P(x_t | u_{t-1}, x_{t-1}) p_{t-1}(x_{t-1}, m) dx_{t-1}
But: how do we represent a belief over the high-dimensional (x_t, m)?

46 Map Uncertainty
In the case where the map m = (θ_1, .., θ_N) is a set of N landmarks, θ_j ∈ R², use Gaussians to represent the uncertainty of the landmark positions.

47 (Extended) Kalman Filter SLAM
Analogous to localization with a Gaussian for the pose belief p_t(x_t), but now the joint belief p_t(x_t, θ_{1:N}) is a (3 + 2N)-dimensional Gaussian.
Assumes the map m = (θ_1, .., θ_N) is a set of N landmarks, θ_j ∈ R².
Exact update equations (under the Gaussian assumption); conceptually very simple.
Drawbacks:
Scaling (the full covariance matrix is O(N²)).
Sometimes non-robust (uni-modal; the "data association problem").
Lacks the advantages of the particle filter (multiple hypotheses, more robustness to non-linearities).

48 SLAM with Particles
Core idea: each particle carries its own map belief.
Use a conditional representation p_t(x_t, m) = p_t(x_t) p_t(m | x_t). (This notation is flaky; the below is more precise.)
As for the localization problem, use particles to represent the pose belief p_t(x_t). Note: each particle actually has a history x^i_{0:t}, a whole trajectory!
For each particle separately, estimate the map belief p^i_t(m) conditioned on the particle history x^i_{0:t}. The conditional beliefs p^i_t(m) may be factorized over grid points or landmarks of the map.
K. Murphy (1999): Bayesian map learning in dynamic environments.

50 Map Estimation for a Given Particle History
Given x_{0:t} (e.g., the trajectory of a particle), what is the posterior over the map m?
Simplified Bayes filter:
p_t(m) := P(m | x_{0:t}, y_{0:t}) ∝ P(y_t | m, x_t) p_{t-1}(m)
(no transition model: the map is assumed to be constant)
In the case of landmarks (FastSLAM):
m = (θ_1, .., θ_N)
θ_j = position of the j-th landmark, j ∈ {1, .., N}
n_t = which landmark we observe at time t, n_t ∈ {1, .., N}
We can use a separate (Gaussian) Bayes filter for each θ_j: conditioned on x_{0:t}, each θ_j is independent of each θ_k:
P(θ_{1:N} | x_{0:t}, y_{0:t}, n_{0:t}) = Π_j P(θ_j | x_{0:t}, y_{0:t}, n_{0:t})
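Each per-landmark filter is then just a small Gaussian measurement update conditioned on the known pose. The following sketch assumes a hypothetical sensor that measures the landmark's position relative to the robot (so the observation Jacobian is the identity); a real range-bearing sensor would need a linearized model instead:

```python
import numpy as np

def landmark_update(mu, Sigma, x_pose, y_obs, R):
    """Gaussian measurement update for one landmark theta_j, conditioned on
    a known robot pose. Hypothetical sensor: y = theta_j - pose + noise."""
    C = np.eye(2)                      # observation Jacobian (identity here)
    y_pred = mu - x_pose               # predicted relative measurement
    K = Sigma @ C.T @ np.linalg.inv(C @ Sigma @ C.T + R)
    mu_new = mu + K @ (y_obs - y_pred)
    Sigma_new = (np.eye(2) - K @ C) @ Sigma
    return mu_new, Sigma_new

# Landmark prior N([0,0], I), robot at the origin, one observation at [1,1]:
mu, Sigma = landmark_update(np.zeros(2), np.eye(2), np.zeros(2),
                            np.array([1.0, 1.0]), np.eye(2))
```

With equal prior and measurement covariance, the posterior mean lands halfway between prior and observation, and the covariance halves, exactly the behavior the factorized per-landmark filters exploit.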

51 Particle Likelihood in SLAM
Particle likelihood for the localization problem: w^i_t = P(y_t | x^i_t) (this determines the new importance weight w^i_t).
In SLAM the map is uncertain, so each particle is weighted with the expected likelihood:
w^i_t = ∫ P(y_t | x^i_t, m) p_{t-1}(m) dm
In the case of landmarks (FastSLAM):
w^i_t = ∫ P(y_t | x^i_t, θ_{n_t}, n_t) p_{t-1}(θ_{n_t}) dθ_{n_t}
Data association problem (actually we don't know n_t): for each particle separately, choose n^i_t = argmax_{n_t} w^i_t(n_t).

52 Particle-based SLAM Summary
We have a set of N particles {(x^i, w^i)}_{i=1..N} to represent the pose belief p_t(x_t).
For each particle we have a separate map belief p^i_t(m); in the case of landmarks, this factorizes into N separate 2D Gaussians.
Iterate:
1. Resample particles to get N weight-1 particles: {\hat x^i_{t-1}}_{i=1..N}
2. Use the motion model to get new predictive particles {x^i_t}_{i=1..N}
3. Update the map belief p^i_t(m) ∝ P(y_t | m, x_t) p^i_{t-1}(m) for each particle
4. Compute new importance weights w^i_t ∝ ∫ P(y_t | x^i_t, m) p_{t-1}(m) dm using the observation model and the map belief

53 Demo: Visual SLAM
Map building from a freely moving camera.
SLAM has become a big topic in the vision community. Features are typically landmarks θ_{1:N} with SURF/SIFT descriptors.
PTAM (Parallel Tracking and Mapping) parallelizes the tracking and mapping computations (demo videos: PTAM1, PTAM2).
G. Klein, D. Murray: Parallel Tracking and Mapping for Small AR Workspaces.

55 Alternative SLAM Approach: Graph-based
Represent the previous trajectory as a graph:
nodes = estimated positions & observations
edges = transition & step estimation based on scan matching
Loop closing: check if some nodes might coincide, which creates new edges.
Classical optimization: the whole graph defines an optimization problem: find poses that minimize the sum of edge & node errors.
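A minimal sketch of this "sum of edge errors" optimization, on a made-up 1D pose graph: three poses, two odometry edges, and one (inconsistent) loop-closure edge. Because both errors and states are 1D and linear, the whole problem reduces to linear least squares; a real 2D/3D pose graph would need iterated linearization:

```python
import numpy as np

# Tiny hypothetical 1D pose graph: each edge (i, j, z) contributes an error
# (x_j - x_i - z); we minimize the sum of squared edge errors.
# The loop-closure edge (0, 2, 1.8) contradicts the two odometry edges (sum 2.0).
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.8)]

A = np.zeros((len(edges) + 1, 3))
b = np.zeros(len(edges) + 1)
for k, (i, j, z) in enumerate(edges):
    A[k, i], A[k, j], b[k] = -1.0, 1.0, z
A[-1, 0], b[-1] = 1.0, 0.0      # anchor x0 = 0 to remove the gauge freedom

x, *_ = np.linalg.lstsq(A, b, rcond=None)
# the loop closure pulls x2 below 2.0, a compromise between 2.0 and 1.8
```

The least-squares solution distributes the 0.2 inconsistency over all edges instead of trusting any single measurement, which is exactly what graph-based SLAM back-ends do at scale.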

56 Loop Closing Problem
(This doesn't explicitly exist in particle filter methods: if the particles cover the belief, then data association solves the loop closing problem.)

57 Graph-based SLAM
Kretzschmar, Grisetti, Stachniss: Life-long Map Learning for Graph-based SLAM Approaches in Static Environments.

58 SLAM code
Graph-based and grid map methods; visual SLAM, e.g.


More information

Markov Models. CS 188: Artificial Intelligence Fall Example. Mini-Forward Algorithm. Stationary Distributions.

Markov Models. CS 188: Artificial Intelligence Fall Example. Mini-Forward Algorithm. Stationary Distributions. CS 88: Artificial Intelligence Fall 27 Lecture 2: HMMs /6/27 Markov Models A Markov model is a chain-structured BN Each node is identically distributed (stationarity) Value of X at a given time is called

More information

Vlad Estivill-Castro. Robots for People --- A project for intelligent integrated systems

Vlad Estivill-Castro. Robots for People --- A project for intelligent integrated systems 1 Vlad Estivill-Castro Robots for People --- A project for intelligent integrated systems V. Estivill-Castro 2 Probabilistic Map-based Localization (Kalman Filter) Chapter 5 (textbook) Based on textbook

More information

Kalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein

Kalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein Kalman filtering and friends: Inference in time series models Herke van Hoof slides mostly by Michael Rubinstein Problem overview Goal Estimate most probable state at time k using measurement up to time

More information

Why do we care? Measurements. Handling uncertainty over time: predicting, estimating, recognizing, learning. Dealing with time

Why do we care? Measurements. Handling uncertainty over time: predicting, estimating, recognizing, learning. Dealing with time Handling uncertainty over time: predicting, estimating, recognizing, learning Chris Atkeson 2004 Why do we care? Speech recognition makes use of dependence of words and phonemes across time. Knowing where

More information

Recursive Bayes Filtering

Recursive Bayes Filtering Recursive Bayes Filtering CS485 Autonomous Robotics Amarda Shehu Fall 2013 Notes modified from Wolfram Burgard, University of Freiburg Physical Agents are Inherently Uncertain Uncertainty arises from four

More information

CSE 473: Artificial Intelligence

CSE 473: Artificial Intelligence CSE 473: Artificial Intelligence Hidden Markov Models Dieter Fox --- University of Washington [Most slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Probabilities Marc Toussaint University of Stuttgart Winter 2018/19 Motivation: AI systems need to reason about what they know, or not know. Uncertainty may have so many sources:

More information

Machine Learning 4771

Machine Learning 4771 Machine Learning 4771 Instructor: ony Jebara Kalman Filtering Linear Dynamical Systems and Kalman Filtering Structure from Motion Linear Dynamical Systems Audio: x=pitch y=acoustic waveform Vision: x=object

More information

CS491/691: Introduction to Aerial Robotics

CS491/691: Introduction to Aerial Robotics CS491/691: Introduction to Aerial Robotics Topic: State Estimation Dr. Kostas Alexis (CSE) World state (or system state) Belief state: Our belief/estimate of the world state World state: Real state of

More information

2D Image Processing. Bayes filter implementation: Kalman filter

2D Image Processing. Bayes filter implementation: Kalman filter 2D Image Processing Bayes filter implementation: Kalman filter Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

More information

Human Pose Tracking I: Basics. David Fleet University of Toronto

Human Pose Tracking I: Basics. David Fleet University of Toronto Human Pose Tracking I: Basics David Fleet University of Toronto CIFAR Summer School, 2009 Looking at People Challenges: Complex pose / motion People have many degrees of freedom, comprising an articulated

More information

Particle Filtering a brief introductory tutorial. Frank Wood Gatsby, August 2007

Particle Filtering a brief introductory tutorial. Frank Wood Gatsby, August 2007 Particle Filtering a brief introductory tutorial Frank Wood Gatsby, August 2007 Problem: Target Tracking A ballistic projectile has been launched in our direction and may or may not land near enough to

More information

Linear Dynamical Systems

Linear Dynamical Systems Linear Dynamical Systems Sargur N. srihari@cedar.buffalo.edu Machine Learning Course: http://www.cedar.buffalo.edu/~srihari/cse574/index.html Two Models Described by Same Graph Latent variables Observations

More information

Markov Chain Monte Carlo Methods for Stochastic

Markov Chain Monte Carlo Methods for Stochastic Markov Chain Monte Carlo Methods for Stochastic Optimization i John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge U Florida, Nov 2013

More information

AUTOMOTIVE ENVIRONMENT SENSORS

AUTOMOTIVE ENVIRONMENT SENSORS AUTOMOTIVE ENVIRONMENT SENSORS Lecture 5. Localization BME KÖZLEKEDÉSMÉRNÖKI ÉS JÁRMŰMÉRNÖKI KAR 32708-2/2017/INTFIN SZÁMÚ EMMI ÁLTAL TÁMOGATOTT TANANYAG Related concepts Concepts related to vehicles moving

More information

Localization. Howie Choset Adapted from slides by Humphrey Hu, Trevor Decker, and Brad Neuman

Localization. Howie Choset Adapted from slides by Humphrey Hu, Trevor Decker, and Brad Neuman Localization Howie Choset Adapted from slides by Humphrey Hu, Trevor Decker, and Brad Neuman Localization General robotic task Where am I? Techniques generalize to many estimation tasks System parameter

More information

TSRT14: Sensor Fusion Lecture 8

TSRT14: Sensor Fusion Lecture 8 TSRT14: Sensor Fusion Lecture 8 Particle filter theory Marginalized particle filter Gustaf Hendeby gustaf.hendeby@liu.se TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 1 / 25 Le 8: particle filter theory,

More information

Robotics 2 Target Tracking. Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard

Robotics 2 Target Tracking. Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard Robotics 2 Target Tracking Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard Slides by Kai Arras, Gian Diego Tipaldi, v.1.1, Jan 2012 Chapter Contents Target Tracking Overview Applications

More information

E190Q Lecture 10 Autonomous Robot Navigation

E190Q Lecture 10 Autonomous Robot Navigation E190Q Lecture 10 Autonomous Robot Navigation Instructor: Chris Clark Semester: Spring 2015 1 Figures courtesy of Siegwart & Nourbakhsh Kilobots 2 https://www.youtube.com/watch?v=2ialuwgafd0 Control Structures

More information

2D Image Processing. Bayes filter implementation: Kalman filter

2D Image Processing. Bayes filter implementation: Kalman filter 2D Image Processing Bayes filter implementation: Kalman filter Prof. Didier Stricker Dr. Gabriele Bleser Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche

More information

Self Adaptive Particle Filter

Self Adaptive Particle Filter Self Adaptive Particle Filter Alvaro Soto Pontificia Universidad Catolica de Chile Department of Computer Science Vicuna Mackenna 4860 (143), Santiago 22, Chile asoto@ing.puc.cl Abstract The particle filter

More information

Partially Observable Markov Decision Processes (POMDPs)

Partially Observable Markov Decision Processes (POMDPs) Partially Observable Markov Decision Processes (POMDPs) Sachin Patil Guest Lecture: CS287 Advanced Robotics Slides adapted from Pieter Abbeel, Alex Lee Outline Introduction to POMDPs Locally Optimal Solutions

More information

Computer Vision Group Prof. Daniel Cremers. 14. Sampling Methods

Computer Vision Group Prof. Daniel Cremers. 14. Sampling Methods Prof. Daniel Cremers 14. Sampling Methods Sampling Methods Sampling Methods are widely used in Computer Science as an approximation of a deterministic algorithm to represent uncertainty without a parametric

More information

Introduction to Mobile Robotics Probabilistic Robotics

Introduction to Mobile Robotics Probabilistic Robotics Introduction to Mobile Robotics Probabilistic Robotics Wolfram Burgard 1 Probabilistic Robotics Key idea: Explicit representation of uncertainty (using the calculus of probability theory) Perception Action

More information

1 Kalman Filter Introduction

1 Kalman Filter Introduction 1 Kalman Filter Introduction You should first read Chapter 1 of Stochastic models, estimation, and control: Volume 1 by Peter S. Maybec (available here). 1.1 Explanation of Equations (1-3) and (1-4) Equation

More information

Using the Kalman Filter for SLAM AIMS 2015

Using the Kalman Filter for SLAM AIMS 2015 Using the Kalman Filter for SLAM AIMS 2015 Contents Trivial Kinematics Rapid sweep over localisation and mapping (components of SLAM) Basic EKF Feature Based SLAM Feature types and representations Implementation

More information

Rao-Blackwellized Particle Filter for Multiple Target Tracking

Rao-Blackwellized Particle Filter for Multiple Target Tracking Rao-Blackwellized Particle Filter for Multiple Target Tracking Simo Särkkä, Aki Vehtari, Jouko Lampinen Helsinki University of Technology, Finland Abstract In this article we propose a new Rao-Blackwellized

More information

Sensor Fusion: Particle Filter

Sensor Fusion: Particle Filter Sensor Fusion: Particle Filter By: Gordana Stojceska stojcesk@in.tum.de Outline Motivation Applications Fundamentals Tracking People Advantages and disadvantages Summary June 05 JASS '05, St.Petersburg,

More information

Sensor Tasking and Control

Sensor Tasking and Control Sensor Tasking and Control Sensing Networking Leonidas Guibas Stanford University Computation CS428 Sensor systems are about sensing, after all... System State Continuous and Discrete Variables The quantities

More information

Rao-Blackwellized Particle Filtering for 6-DOF Estimation of Attitude and Position via GPS and Inertial Sensors

Rao-Blackwellized Particle Filtering for 6-DOF Estimation of Attitude and Position via GPS and Inertial Sensors Rao-Blackwellized Particle Filtering for 6-DOF Estimation of Attitude and Position via GPS and Inertial Sensors GRASP Laboratory University of Pennsylvania June 6, 06 Outline Motivation Motivation 3 Problem

More information

Bayesian Methods / G.D. Hager S. Leonard

Bayesian Methods / G.D. Hager S. Leonard Bayesian Methods Recall Robot Localization Given Sensor readings z 1, z 2,, z t = z 1:t Known control inputs u 0, u 1, u t = u 0:t Known model t+1 t, u t ) with initial 1 u 0 ) Known map z t t ) Compute

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Dynamic Programming Marc Toussaint University of Stuttgart Winter 2018/19 Motivation: So far we focussed on tree search-like solvers for decision problems. There is a second important

More information

Markov chain Monte Carlo methods for visual tracking

Markov chain Monte Carlo methods for visual tracking Markov chain Monte Carlo methods for visual tracking Ray Luo rluo@cory.eecs.berkeley.edu Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA 94720

More information

Graphical Models and Kernel Methods

Graphical Models and Kernel Methods Graphical Models and Kernel Methods Jerry Zhu Department of Computer Sciences University of Wisconsin Madison, USA MLSS June 17, 2014 1 / 123 Outline Graphical Models Probabilistic Inference Directed vs.

More information

L06. LINEAR KALMAN FILTERS. NA568 Mobile Robotics: Methods & Algorithms

L06. LINEAR KALMAN FILTERS. NA568 Mobile Robotics: Methods & Algorithms L06. LINEAR KALMAN FILTERS NA568 Mobile Robotics: Methods & Algorithms 2 PS2 is out! Landmark-based Localization: EKF, UKF, PF Today s Lecture Minimum Mean Square Error (MMSE) Linear Kalman Filter Gaussian

More information

Expectation propagation for signal detection in flat-fading channels

Expectation propagation for signal detection in flat-fading channels Expectation propagation for signal detection in flat-fading channels Yuan Qi MIT Media Lab Cambridge, MA, 02139 USA yuanqi@media.mit.edu Thomas Minka CMU Statistics Department Pittsburgh, PA 15213 USA

More information

Probability Map Building of Uncertain Dynamic Environments with Indistinguishable Obstacles

Probability Map Building of Uncertain Dynamic Environments with Indistinguishable Obstacles Probability Map Building of Uncertain Dynamic Environments with Indistinguishable Obstacles Myungsoo Jun and Raffaello D Andrea Sibley School of Mechanical and Aerospace Engineering Cornell University

More information

CS 188: Artificial Intelligence Spring Announcements

CS 188: Artificial Intelligence Spring Announcements CS 188: Artificial Intelligence Spring 2011 Lecture 18: HMMs and Particle Filtering 4/4/2011 Pieter Abbeel --- UC Berkeley Many slides over this course adapted from Dan Klein, Stuart Russell, Andrew Moore

More information

ECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering

ECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering ECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering Lecturer: Nikolay Atanasov: natanasov@ucsd.edu Teaching Assistants: Siwei Guo: s9guo@eng.ucsd.edu Anwesan Pal:

More information

(W: 12:05-1:50, 50-N201)

(W: 12:05-1:50, 50-N201) 2015 School of Information Technology and Electrical Engineering at the University of Queensland Schedule Week Date Lecture (W: 12:05-1:50, 50-N201) 1 29-Jul Introduction Representing Position & Orientation

More information

CSEP 573: Artificial Intelligence

CSEP 573: Artificial Intelligence CSEP 573: Artificial Intelligence Hidden Markov Models Luke Zettlemoyer Many slides over the course adapted from either Dan Klein, Stuart Russell, Andrew Moore, Ali Farhadi, or Dan Weld 1 Outline Probabilistic

More information

An Brief Overview of Particle Filtering

An Brief Overview of Particle Filtering 1 An Brief Overview of Particle Filtering Adam M. Johansen a.m.johansen@warwick.ac.uk www2.warwick.ac.uk/fac/sci/statistics/staff/academic/johansen/talks/ May 11th, 2010 Warwick University Centre for Systems

More information

Data Fusion Kalman Filtering Self Localization

Data Fusion Kalman Filtering Self Localization Data Fusion Kalman Filtering Self Localization Armando Jorge Sousa http://www.fe.up.pt/asousa asousa@fe.up.pt Faculty of Engineering, University of Porto, Portugal Department of Electrical and Computer

More information

Autonomous Mobile Robot Design

Autonomous Mobile Robot Design Autonomous Mobile Robot Design Topic: Extended Kalman Filter Dr. Kostas Alexis (CSE) These slides relied on the lectures from C. Stachniss, J. Sturm and the book Probabilistic Robotics from Thurn et al.

More information

Approximate Inference

Approximate Inference Approximate Inference Simulation has a name: sampling Sampling is a hot topic in machine learning, and it s really simple Basic idea: Draw N samples from a sampling distribution S Compute an approximate

More information

Square Root Unscented Particle Filtering for Grid Mapping

Square Root Unscented Particle Filtering for Grid Mapping Square Root Unscented Particle Filtering for Grid Mapping Technical Report 2009/246 Simone Zandara and Ann E. Nicholson Clayton School of Information Technology Monash University Clayton, Victoria Australia.

More information

Particle lter for mobile robot tracking and localisation

Particle lter for mobile robot tracking and localisation Particle lter for mobile robot tracking and localisation Tinne De Laet K.U.Leuven, Dept. Werktuigkunde 19 oktober 2005 Particle lter 1 Overview Goal Introduction Particle lter Simulations Particle lter

More information

Introduction to Mobile Robotics Bayes Filter Kalman Filter

Introduction to Mobile Robotics Bayes Filter Kalman Filter Introduction to Mobile Robotics Bayes Filter Kalman Filter Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Giorgio Grisetti, Kai Arras 1 Bayes Filter Reminder 1. Algorithm Bayes_filter( Bel(x),d ):

More information

Hidden Markov Models (recap BNs)

Hidden Markov Models (recap BNs) Probabilistic reasoning over time - Hidden Markov Models (recap BNs) Applied artificial intelligence (EDA132) Lecture 10 2016-02-17 Elin A. Topp Material based on course book, chapter 15 1 A robot s view

More information

Results: MCMC Dancers, q=10, n=500

Results: MCMC Dancers, q=10, n=500 Motivation Sampling Methods for Bayesian Inference How to track many INTERACTING targets? A Tutorial Frank Dellaert Results: MCMC Dancers, q=10, n=500 1 Probabilistic Topological Maps Results Real-Time

More information

Answers and expectations

Answers and expectations Answers and expectations For a function f(x) and distribution P(x), the expectation of f with respect to P is The expectation is the average of f, when x is drawn from the probability distribution P E

More information

The Particle Filter. PD Dr. Rudolph Triebel Computer Vision Group. Machine Learning for Computer Vision

The Particle Filter. PD Dr. Rudolph Triebel Computer Vision Group. Machine Learning for Computer Vision The Particle Filter Non-parametric implementation of Bayes filter Represents the belief (posterior) random state samples. by a set of This representation is approximate. Can represent distributions that

More information

Cooperative relative robot localization with audible acoustic sensing

Cooperative relative robot localization with audible acoustic sensing Cooperative relative robot localization with audible acoustic sensing Yuanqing Lin, Paul Vernaza, Jihun Ham, and Daniel D. Lee GRASP Laboratory, Department of Electrical and Systems Engineering, University

More information