Recursive Bayes Filtering
1 Recursive Bayes Filtering CS485 Autonomous Robotics Amarda Shehu Fall 2013 Notes modified from Wolfram Burgard, University of Freiburg
2 Physical Agents are Inherently Uncertain. Uncertainty arises from four major factors: the environment is stochastic and unpredictable; the robot is stochastic; sensors are limited and noisy; models are inaccurate.
3 Example: Museum Tour-Guide Robots Rhino, 1997 Minerva, 1998
4 Technical Challenges. Navigation: crowded, unpredictable environment; unmodified environment; invisible hazards; walking speed or faster; high failure costs. Interaction: individuals and crowds; museum visitors' first encounter; ages 2 through 99; visits of less than 15 minutes.
5 Nature of Sensor Data Odometry Data Range Data
7 Probabilistic Techniques for Physical Agents Key idea: Explicit representation of uncertainty using the calculus of probability theory Perception = state estimation Action = utility optimization
8 Advantages of Probabilistic Paradigm Can accommodate inaccurate models Can accommodate imperfect sensors Robust in real-world applications Best known approach to many hard robotics problems
9 Pitfalls Computationally demanding False assumptions Approximate
10 Outline Introduction Probabilistic State Estimation Robot Localization Probabilistic Decision Making Planning Between MDPs and POMDPs Exploration Conclusions
11 Axioms of Probability Theory. Pr(A) denotes the probability that proposition A is true.
1. 0 ≤ Pr(A) ≤ 1
2. Pr(True) = 1, Pr(False) = 0
3. Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)
12 A Closer Look at Axiom 3: Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B). (Venn diagram: the overlap A ∧ B would otherwise be counted twice.)
13 Using the Axioms:
Pr(A ∨ ¬A) = Pr(A) + Pr(¬A) − Pr(A ∧ ¬A)
Pr(True) = Pr(A) + Pr(¬A) − Pr(False)
1 = Pr(A) + Pr(¬A) − 0
Pr(¬A) = 1 − Pr(A)
14 Discrete Random Variables. X denotes a random variable. X can take on a finite number of values in {x1, x2, ..., xn}. P(X = xi), or P(xi), is the probability that the random variable X takes on value xi.
15 Continuous Random Variables. X takes on values in the continuum. p(X = x), or p(x), is a probability density function: Pr(x ∈ [a, b]) = ∫_a^b p(x) dx. (Figure: an example density p(x).)
16 Joint and Conditional Probability. P(X = x and Y = y) = P(x, y). If X and Y are independent, then P(x, y) = P(x) P(y). P(x | y) is the probability of x given y: P(x | y) = P(x, y) / P(y), equivalently P(x, y) = P(x | y) P(y). If X and Y are independent, then P(x | y) = P(x).
17 Law of Total Probability, Marginals.
Discrete case: Σ_x P(x) = 1;  P(x) = Σ_y P(x, y);  P(x) = Σ_y P(x | y) P(y)
Continuous case: ∫ p(x) dx = 1;  p(x) = ∫ p(x, y) dy;  p(x) = ∫ p(x | y) p(y) dy
18 Bayes Formula.
P(x, y) = P(x | y) P(y) = P(y | x) P(x)
⇒ P(x | y) = P(y | x) P(x) / P(y) = likelihood × prior / evidence
19 Normalization.
P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x), with η = 1/P(y) = 1 / Σ_x P(y | x) P(x)
Algorithm: for all x: aux_x = P(y | x) P(x); η = 1 / Σ_x aux_x; for all x: P(x | y) = η aux_x
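The normalization algorithm can be sketched directly in Python. A minimal illustration, using the numbers of the door example that appears a few slides later (P(z | open) = 0.6, P(z | ¬open) = 0.3, uniform prior):

```python
# Normalization trick: compute P(x | y) = eta * P(y | x) * P(x)
# without ever evaluating the evidence P(y) directly.

def normalize_posterior(likelihood, prior):
    """likelihood[x] = P(y | x), prior[x] = P(x); returns P(x | y)."""
    aux = {x: likelihood[x] * prior[x] for x in prior}  # aux_x = P(y|x) P(x)
    eta = 1.0 / sum(aux.values())                       # eta = 1 / P(y)
    return {x: eta * aux[x] for x in aux}

posterior = normalize_posterior(
    likelihood={"open": 0.6, "closed": 0.3},
    prior={"open": 0.5, "closed": 0.5},
)
print(posterior["open"])  # ≈ 0.667 (2/3): the measurement raises P(open)
```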
20 Conditioning. Total probability: P(x | y) = ∫ P(x | y, z) P(z | y) dz. Bayes rule with background knowledge: P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
21 Simple Example of State Estimation. Suppose a robot obtains a measurement z. What is P(open | z)?
22-23 Causal vs. Diagnostic Reasoning. P(open | z) is diagnostic. P(z | open) is causal. Often causal knowledge is easier to obtain (count frequencies!). Bayes rule allows us to use causal knowledge: P(open | z) = P(z | open) P(open) / P(z)
24 Example. P(z | open) = 0.6, P(z | ¬open) = 0.3, P(open) = P(¬open) = 0.5. P(open | z) = ?
25 Example. P(z | open) = 0.6, P(z | ¬open) = 0.3, P(open) = P(¬open) = 0.5.
P(open | z) = P(z | open) P(open) / (P(z | open) P(open) + P(z | ¬open) P(¬open)) = 0.3 / (0.3 + 0.15) = 2/3
z raises the probability that the door is open.
26 Combining Evidence. Suppose our robot obtains another observation z2. How can we integrate this new information? More generally, how can we estimate P(x | z1, ..., zn)?
27-29 Recursive Bayesian Updating.
P(x | z1, ..., zn) = P(zn | x, z1, ..., zn-1) P(x | z1, ..., zn-1) / P(zn | z1, ..., zn-1)
Markov assumption: zn is independent of z1, ..., zn-1 if we know x. Hence
P(x | z1, ..., zn) = P(zn | x) P(x | z1, ..., zn-1) / P(zn | z1, ..., zn-1) = η P(zn | x) P(x | z1, ..., zn-1) = η_{1...n} ∏_{i=1...n} P(zi | x) P(x)
30 Example: Second Measurement. P(z2 | open) = 0.5, P(z2 | ¬open) = 0.6, P(open | z1) = 2/3. P(open | z2, z1) = ?
31 Example: Second Measurement. P(z2 | open) = 0.5, P(z2 | ¬open) = 0.6, P(open | z1) = 2/3.
P(open | z2, z1) = P(z2 | open) P(open | z1) / (P(z2 | open) P(open | z1) + P(z2 | ¬open) P(¬open | z1)) = (1/2 · 2/3) / (1/2 · 2/3 + 3/5 · 1/3) = 5/8
z2 lowers the probability that the door is open.
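The two measurement updates chain naturally. A small sketch of the recursive update, using the slides' numbers:

```python
def bayes_update(prior_open, p_z_open, p_z_not_open):
    """One recursive Bayes update of P(open) given a measurement z."""
    num = p_z_open * prior_open
    den = num + p_z_not_open * (1.0 - prior_open)
    return num / den

p = 0.5                          # uniform prior P(open)
p = bayes_update(p, 0.6, 0.3)    # first measurement z1 -> 2/3
p = bayes_update(p, 0.5, 0.6)    # second measurement z2
print(p)  # ≈ 0.625 (5/8): z2 lowers the probability that the door is open
```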
32 Actions. Often the world is dynamic: actions carried out by the robot, actions carried out by other agents, or simply the passage of time change the world. How can we incorporate such actions?
33 Typical Actions The robot turns its wheels to move The robot uses its manipulator to grasp an object Plants grow over time Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty.
34 Modeling Actions. To incorporate the outcome of an action u into the current belief, we use the conditional pdf P(x | u, x'). This term specifies the probability that executing u changes the state from x' to x.
35 Example: Closing the door
36 State Transitions. P(x | u, x') for u = "close door":
open → closed: 0.9; open → open: 0.1 (if the door is open, the action "close door" succeeds in 90% of all cases)
closed → closed: 1; closed → open: 0
37 Integrating the Outcome of Actions.
Continuous case: P(x | u) = ∫ P(x | u, x') P(x') dx'
Discrete case: P(x | u) = Σ_{x'} P(x | u, x') P(x')
38 Example: The Resulting Belief (using the belief P(open) = 5/8 from the measurement example).
P(closed | u) = Σ_{x'} P(closed | u, x') P(x') = P(closed | u, open) P(open) + P(closed | u, closed) P(closed) = 9/10 · 5/8 + 1 · 3/8 = 15/16
P(open | u) = Σ_{x'} P(open | u, x') P(x') = P(open | u, open) P(open) + P(open | u, closed) P(closed) = 1/10 · 5/8 + 0 · 3/8 = 1/16 = 1 − P(closed | u)
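The prediction step can be checked numerically. A sketch, assuming the prior belief P(open) = 5/8 produced by the two measurement updates and the "close door" transition model from the state-transition slide:

```python
# Prediction (action) step: P(x | u) = sum_{x'} P(x | u, x') P(x').
T = {  # T[x_new][x_old] = P(x_new | u, x_old) for u = "close door"
    "closed": {"open": 0.9, "closed": 1.0},
    "open":   {"open": 0.1, "closed": 0.0},
}
prior = {"open": 5 / 8, "closed": 3 / 8}  # belief before the action

predicted = {x: sum(T[x][xp] * prior[xp] for xp in prior) for x in T}
print(predicted["closed"])  # 0.9375, i.e. 15/16
```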
39 Bayes Filters: Framework.
Given: stream of observations z and action data u: d_t = {u_1, z_2, ..., u_{t-1}, z_t}; sensor model P(z | x); action model P(x | u, x'); prior probability of the system state P(x).
Wanted: estimate of the state X of a dynamical system. The posterior of the state is also called the belief: Bel(x_t) = P(x_t | u_1, z_2, ..., u_{t-1}, z_t)
40 Markov Assumption. (Figure: dynamic Bayes network over states x_{t-1}, x_t, x_{t+1}, observations z_{t-1}, z_t, z_{t+1}, and actions u_{t-1}, u_t.)
p(d_t, d_{t-1}, ..., d_0 | x_t, d_{t+1}, d_{t+2}, ...) = p(d_t, d_{t-1}, ..., d_0 | x_t)
p(d_t, d_{t+1}, ... | x_t, d_1, d_2, ..., d_{t-1}) = p(d_t, d_{t+1}, ... | x_t)
p(x_t | u_{t-1}, x_{t-1}, d_{t-2}, ..., d_0) = p(x_t | u_{t-1}, x_{t-1})
Underlying assumptions: static world; independent noise; perfect model, no approximation errors.
41-47 Bayes Filters (z = observation, u = action, x = state).
Bel(x_t) = P(x_t | u_1, z_2, ..., u_{t-1}, z_t)
(Bayes) = η P(z_t | x_t, u_1, z_2, ..., u_{t-1}) P(x_t | u_1, z_2, ..., u_{t-1})
(Markov) = η P(z_t | x_t) P(x_t | u_1, z_2, ..., u_{t-1})
(Total prob.) = η P(z_t | x_t) ∫ P(x_t | u_1, z_2, ..., u_{t-1}, x_{t-1}) P(x_{t-1} | u_1, z_2, ..., u_{t-1}) dx_{t-1}
(Markov) = η P(z_t | x_t) ∫ P(x_t | u_{t-1}, x_{t-1}) P(x_{t-1} | u_1, z_2, ..., u_{t-1}) dx_{t-1}
= η P(z_t | x_t) ∫ P(x_t | u_{t-1}, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
48 Bayes Filter Algorithm.
Algorithm Bayes_filter(Bel(x), d):
  η = 0
  if d is a perceptual data item z then
    for all x do: Bel'(x) = P(z | x) Bel(x); η = η + Bel'(x)
    for all x do: Bel'(x) = η^{-1} Bel'(x)
  else if d is an action data item u then
    for all x do: Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
  return Bel'(x)
Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_{t-1}, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
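For a finite state space the algorithm above can be sketched directly in Python. A minimal illustration, not the course's implementation; the door example's sensor and transition models (hypothetical dictionaries here) exercise both branches:

```python
def bayes_filter(bel, d, sensor_model=None, action_model=None):
    """One step of the discrete Bayes filter.

    bel: dict state -> probability.
    d: ("z", z) for a perceptual item, ("u", u) for an action item.
    sensor_model(z, x) = P(z | x); action_model(x, u, xp) = P(x | u, x').
    """
    kind, value = d
    if kind == "z":  # measurement update, then normalize by eta
        new_bel = {x: sensor_model(value, x) * bel[x] for x in bel}
        eta = sum(new_bel.values())
        return {x: new_bel[x] / eta for x in new_bel}
    # action (prediction) update: sum over previous states x'
    return {x: sum(action_model(x, value, xp) * bel[xp] for xp in bel)
            for x in bel}

# Door example: one measurement, then the action "close door".
sensor = lambda z, x: {"open": 0.6, "closed": 0.3}[x]
action = lambda x, u, xp: {("closed", "open"): 0.9, ("closed", "closed"): 1.0,
                           ("open", "open"): 0.1, ("open", "closed"): 0.0}[(x, xp)]
bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, ("z", "z1"), sensor_model=sensor)
bel = bayes_filter(bel, ("u", "close"), action_model=action)
print(bel["open"])  # ≈ 1/15 ≈ 0.067
```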
49 Bayes Filters are Familiar!
Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_{t-1}, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
Kalman filters; Particle filters; Hidden Markov models; Dynamic Bayes networks; Partially Observable Markov Decision Processes (POMDPs)
50 Application to Door State Estimation Estimate the opening angle of a door and the state of other dynamic objects using a laser-range finder from a moving mobile robot and based on Bayes filters.
51 Result
52 Lessons Learned Bayes rule allows us to compute probabilities that are hard to assess otherwise. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence. Bayes filters are a probabilistic tool for estimating the state of dynamic systems.
53 Tutorial Outline Introduction Probabilistic State Estimation Localization
54 The Localization Problem. "Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities." [Cox 91]
Given: map of the environment; sequence of sensor measurements. Wanted: estimate of the robot's position.
Problem classes: position tracking; global localization; kidnapped robot problem (recovery)
55 Representations for Bayesian Robot Localization.
Kalman filters (late '80s): Gaussians, approximately linear models; position tracking.
Discrete approaches ('95): Topological representation ('95): uncertainty handling (POMDPs), occasional global localization, recovery. Grid-based, metric representation ('96): global localization, recovery.
Particle filters ('99): sample-based representation; global localization, recovery.
Multi-hypothesis ('00): multiple Kalman filters; global localization, recovery.
56-61 Localization with Bayes Filters.
bel(x_t) = η p(s_t | x_t) ∫ p(x_t | x_{t-1}, a_{t-1}) bel(x_{t-1}) dx_{t-1}
Conditioning on the map m: bel(x_t | m) = η p(s_t | x_t, m) ∫ p(x_t | x_{t-1}, a_{t-1}, m) bel(x_{t-1} | m) dx_{t-1}
(Figures: motion model p(s' | a, s, m); observation model p(o | s, m) for laser data.)
Two-step formulation. Motion: bel⁻(x_t) = ∫ p(x_t | x_{t-1}, a_{t-1}) bel(x_{t-1}) dx_{t-1}. Perception: bel(x_t) = η p(s_t | x_t) bel⁻(x_t). This is optimal under the Markov assumption.
62 What is the Right Representation? Kalman filters Multi-hypothesis tracking Grid-based representations Topological approaches Particle filters
63 Gaussians.
Univariate: p(x) ~ N(µ, σ²): p(x) = (1 / (√(2π) σ)) exp(−(x − µ)² / (2σ²))
Multivariate: p(x) ~ N(µ, Σ): p(x) = (2π)^{-d/2} |Σ|^{-1/2} exp(−½ (x − µ)ᵀ Σ⁻¹ (x − µ))
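As a quick sanity check, the univariate density above can be evaluated directly:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate normal density N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)

print(gaussian_pdf(0.0, 0.0, 1.0))  # peak of the standard normal, ≈ 0.3989
```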
64 Kalman Filters. Estimate the state of processes governed by the following linear stochastic difference equation:
x_{t+1} = A x_t + B u_t + v_t
z_t = C x_t + w_t
The random variables v_t and w_t represent the process and measurement noise and are assumed to be independent, white, and normally distributed.
65 Kalman Filters [Schiele et al. 94], [Weiß et al. 94], [Borenstein 96], [Gutmann et al. 96, 98], [Arras 98]
69 Kalman Filter Algorithm.
Algorithm Kalman_filter(⟨µ, Σ⟩, d):
  If d is a perceptual data item z then
    K = Σ Cᵀ (C Σ Cᵀ + Σ_obs)⁻¹
    µ = µ + K (z − C µ)
    Σ = (I − K C) Σ
  Else if d is an action data item u then
    µ = A µ + B u
    Σ = A Σ Aᵀ + Σ_act
  Return ⟨µ, Σ⟩
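A scalar instance of the Kalman_filter algorithm above (in 1D, A, B, C and the covariances reduce to numbers, so no matrix library is needed). The noise variances and the action/measurement values are illustrative assumptions, not from the slides:

```python
def kalman_filter(mu, sigma2, d, A=1.0, B=1.0, C=1.0,
                  var_act=0.1, var_obs=0.5):
    """Scalar Kalman filter step. d is ("z", z) or ("u", u)."""
    kind, value = d
    if kind == "z":                                  # correction
        K = sigma2 * C / (C * sigma2 * C + var_obs)  # Kalman gain
        mu = mu + K * (value - C * mu)
        sigma2 = (1 - K * C) * sigma2
    else:                                            # prediction
        mu = A * mu + B * value
        sigma2 = A * sigma2 * A + var_act
    return mu, sigma2

# Move one unit, then observe 1.2 (illustrative numbers):
mu, s2 = 0.0, 1.0
mu, s2 = kalman_filter(mu, s2, ("u", 1.0))  # predict: mean 1.0, variance 1.1
mu, s2 = kalman_filter(mu, s2, ("z", 1.2))  # correct toward the measurement
```

Note how the correction shrinks the variance while the prediction grows it, mirroring the slide's claim that actions increase uncertainty and measurements reduce it.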
70 Non-linear Systems. Very strong assumptions: linear state dynamics; observations linear in the state. What can we do if the system is not linear? Linearize it: EKF. Compute the Jacobians of the dynamics and observations at the current state. The extended Kalman filter works surprisingly well even for highly non-linear systems.
71 Kalman Filter-based Systems (1) [Gutmann et al. 96, 98]: Match LRF scans against map Highly successful in RoboCup mid-size league Courtesy of S. Gutmann
72 Kalman Filter-based Systems (2) Courtesy of S. Gutmann
73 Kalman Filter-based Systems (3) [Arras et al. 98]: Laser range-finder and vision High precision (<1cm accuracy) Courtesy of K. Arras
74 Localization Algorithms - Comparison (Kalman filter). Sensors: Gaussian; Posterior: Gaussian; Efficiency (memory): ++; Efficiency (time): ++; Implementation: +; Accuracy: ++; Robustness: −; Global localization: No
75 Multihypothesis Tracking [Cox 92], [Jensfelt, Kristensen 99]
76 Localization With MHT Belief is represented by multiple hypotheses Each hypothesis is tracked by a Kalman filter Additional problems: Data association: Which observation corresponds to which hypothesis? Hypothesis management: When to add / delete hypotheses? Huge body of literature on target tracking, motion correspondence etc. See e.g. [Cox 93]
77 MHT: Implemented System (1) [Jensfelt and Kristensen 99,01]. Hypotheses are extracted from LRF scans. Each hypothesis has a probability of being the correct one: H_i = {x̂_i, Σ_i, P(H_i)}. The hypothesis probability is computed using Bayes rule: P(H_i | s) = P(s | H_i) P(H_i) / P(s). Hypotheses with low probability are deleted. New candidates are extracted from LRF scans: C_j = {z_j, R_j}
78 MHT: Implemented System (2) Courtesy of P. Jensfelt and S. Kristensen
79 MHT: Implemented System (3). Example run. (Figures: map and trajectory; number of hypotheses and P(H_best) vs. time.) Courtesy of P. Jensfelt and S. Kristensen
80 Localization Algorithms - Comparison.
                      Kalman filter   Multi-hypothesis tracking
Sensors               Gaussian        Gaussian
Posterior             Gaussian        Multi-modal
Efficiency (memory)   ++
Efficiency (time)     ++
Implementation        +               o
Accuracy              ++
Robustness            −               +
Global localization   No              Yes
81 Piecewise Constant [Burgard et al. 96,98], [Fox et al. 99], [Konolige et al. 99]
82 Piecewise Constant Representation: bel(x_t = ⟨x, y, θ⟩)
83 Grid-based Localization
84 Tree-based Representations (1) Idea: Represent density using a variant of Octrees
85 Tree-based Representations (2) Efficient in space and time Multi-resolution
86 Localization Algorithms - Comparison.
                      Kalman filter   Multi-hypothesis tracking   Grid-based (fixed/variable)
Sensors               Gaussian        Gaussian                    Non-Gaussian
Posterior             Gaussian        Multi-modal                 Piecewise constant
Efficiency (memory)   ++                                          −/+
Efficiency (time)     ++                                          o/+
Implementation        +               o                           +/o
Accuracy              ++                                          −/++
Robustness            −               +
Global localization   No              Yes                         Yes
87 Localization Algorithms - Comparison.
                      Kalman filter   Multi-hypothesis tracking   Grid-based (fixed/variable)   Topological maps
Sensors               Gaussian        Gaussian                    Non-Gaussian                  Features
Posterior             Gaussian        Multi-modal                 Piecewise constant            Piecewise constant
Efficiency (memory)   ++                                          −/+                           ++
Efficiency (time)     ++                                          o/+                           ++
Implementation        +               o                           +/o                           +/o
Accuracy              ++                                          −/++                          −
Robustness            −               +
Global localization   No              Yes                         Yes                           Yes
88 Particle Filters. Represent the density by random samples. Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter, particle filter. Filtering: [Rubin 88], [Gordon et al. 93], [Kitagawa 96]. Estimation of non-Gaussian, nonlinear processes. Computer vision: [Isard and Blake 96, 98]. Dynamic Bayesian networks: [Kanazawa et al. 95]
89 Monte Carlo Localization (MCL) Represent Density Through Samples
90 Importance Sampling. Weight samples: w = f / g (target f, proposal g)
91 MCL: Global Localization
92 MCL: Sensor Update. Bel(x) ← α p(z | x) Bel(x); each sample is reweighted by w = α p(z | x)
93 MCL: Robot Motion. Bel(x) ← ∫ p(x | u, x') Bel(x') dx'
96 Particle Filter Algorithm.
Algorithm particle_filter(S_{t-1}, u_{t-1}, z_t):
  S_t = ∅, η = 0
  For i = 1...n:  (generate new samples)
    Sample index j(i) from the discrete distribution given by the weights w_{t-1}
    Sample x_t^i from p(x_t | x_{t-1}, u_{t-1}) using x_{t-1}^{j(i)} and u_{t-1}
    w_t^i = p(z_t | x_t^i)  (compute importance weight)
    η = η + w_t^i  (update normalization factor)
    S_t = S_t ∪ {⟨x_t^i, w_t^i⟩}  (insert)
  For i = 1...n:
    w_t^i = w_t^i / η  (normalize weights)
97 Particle Filter Algorithm.
Bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) dx_{t-1}
Draw x^i_{t-1} from Bel(x_{t-1}); draw x^i_t from p(x_t | x^i_{t-1}, u_{t-1}).
Importance factor for x^i_t:
w^i_t = target distribution / proposal distribution = η p(z_t | x_t) p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) / [p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1})] ∝ p(z_t | x_t)
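One particle-filter iteration can be sketched for a toy 1D world. The Gaussian motion and sensor models below are illustrative stand-ins for the course's odometry and proximity models, and `random.choices` performs the weight-proportional index sampling j(i):

```python
import math
import random

def particle_filter_step(particles, weights, u, z,
                         motion_noise=0.1, sensor_noise=0.5):
    """One particle filter update for a 1D toy world (illustrative models)."""
    n = len(particles)
    new_particles, new_weights = [], []
    for _ in range(n):
        x = random.choices(particles, weights=weights)[0]   # sample j(i) by w
        x = x + u + random.gauss(0.0, motion_noise)         # sample p(x_t | x, u)
        w = math.exp(-0.5 * ((z - x) / sensor_noise) ** 2)  # w = p(z | x_t)
        new_particles.append(x)
        new_weights.append(w)
    eta = sum(new_weights)                                  # normalize weights
    return new_particles, [w / eta for w in new_weights]

random.seed(0)
ps = [random.uniform(-5, 5) for _ in range(500)]  # global uncertainty
ws = [1 / 500] * 500
ps, ws = particle_filter_step(ps, ws, u=1.0, z=2.0)
est = sum(p * w for p, w in zip(ps, ws))  # weighted mean lands near z
```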
98 Resampling. Given: set S of weighted samples. Wanted: random sample, where the probability of drawing x_i is given by w_i. Typically done n times with replacement to generate the new sample set S'.
99 Resampling Algorithm.
Algorithm systematic_resampling(S, n):
  S' = ∅, c_1 = w^1
  For i = 2...n:  (generate CDF)
    c_i = c_{i-1} + w^i
  u_1 ~ U[0, n^{-1}], i = 1  (initialize threshold)
  For j = 1...n:  (draw samples)
    u_j = u_1 + n^{-1}(j − 1)  (advance threshold)
    While u_j > c_i: i = i + 1  (skip until next threshold reached)
    S' = S' ∪ {⟨x^i, n^{-1}⟩}  (insert)
  Return S'
Also called stochastic universal sampling.
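The systematic resampling algorithm above translates almost line for line. A minimal sketch; note that a single random offset places all n thresholds:

```python
import random

def systematic_resampling(samples, weights):
    """Systematic (stochastic universal) resampling: returns n samples
    drawn with replacement, P(pick samples[i]) proportional to weights[i]."""
    n = len(samples)
    cdf, c = [], 0.0
    for w in weights:          # generate the cumulative distribution
        c += w
        cdf.append(c)
    u = random.uniform(0.0, 1.0 / n)  # single random offset u_1
    i, resampled = 0, []
    for _ in range(n):
        while u > cdf[i]:      # skip until the next threshold is reached
            i += 1
        resampled.append(samples[i])
        u += 1.0 / n           # advance the threshold
    return resampled

random.seed(1)
out = systematic_resampling(["a", "b", "c"], [0.1, 0.8, 0.1])
print(out)  # the heavy sample "b" is drawn at least twice
```

Compared with drawing each sample independently, this keeps the variance of the resampled set low: a sample with weight w_i appears either ⌊n w_i⌋ or ⌈n w_i⌉ times.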
100 Motion Model p(x_t | a_{t-1}, x_{t-1}). Model odometry error as Gaussian noise on α, β, and δ
101 Motion Model p(x_t | a_{t-1}, x_{t-1}). (Figure: sample set propagated from the start pose.)
102 Model for Proximity Sensors The sensor is reflected either by a known or by an unknown obstacle: Laser sensor Sonar sensor
122 Recovery from Failure Problem: Samples are highly concentrated during tracking True location is not covered by samples if position gets lost Solutions: Add uniformly distributed samples [Fox et al., 99] Draw samples according to observation density [Lenser et al.,00; Thrun et al., 00]
123 MCL: Recovery from Failure [Fox et al. 99]
124 The RoboCup Challenge Dynamic, adversarial environments Limited computational power Multi-robot collaboration Particle filters allow efficient localization [Lenser et al. 00]
125 Using Ceiling Maps for Localization [Dellaert et al. 99]
126 Vision-based Localization. P(s | x): compare the measurement s with the expected measurement h(x)
127 Under a Light Measurement: Resulting density:
128 Next to a Light Measurement: Resulting density:
129 Elsewhere Measurement: Resulting density:
130 MCL: Global Localization Using Vision
131 Vision-based Localization Odometry only: Vision: Laser:
132 Localization for AIBO robots
133 Adaptive Sampling
134 MCL: Adaptive Sampling (Sonar)
135 MCL: Adaptive Sampling (Laser)
136 Performance Comparison Grid-based localization Monte Carlo localization
137 Particle Filters for Robot Localization (Summary). Approximate Bayes estimation/filtering: full posterior estimation; converges in O(1/√(#samples)) [Tanner 93]; robust: multiple hypotheses with degree of belief; efficient in low-dimensional spaces: focuses computation where needed; any-time: by varying the number of samples; easy to implement.
138 Localization Algorithms - Comparison.
                      Kalman filter   Multi-hypothesis tracking   Topological maps     Grid-based (fixed/variable)   Particle filter
Sensors               Gaussian        Gaussian                    Features             Non-Gaussian                  Non-Gaussian
Posterior             Gaussian        Multi-modal                 Piecewise constant   Piecewise constant            Samples
Efficiency (memory)   ++                                          ++                   −/+                           +/++
Efficiency (time)     ++                                          ++                   o/+                           +/++
Implementation        +               o                           +/o                  +/o                           ++
Accuracy              ++                                          −                    −/++
Robustness            −               +                                                                              +/++
Global localization   No              Yes                         Yes                  Yes                           Yes
139 Bayes Filtering: Lessons Learned General algorithm for recursively estimating the state of dynamic systems. Variants: Hidden Markov Models (Extended) Kalman Filters Discrete Filters Particle Filters
More informationPartially Observable Markov Decision Processes (POMDPs) Pieter Abbeel UC Berkeley EECS
Partially Observable Markov Decision Processes (POMDPs) Pieter Abbeel UC Berkeley EECS Many slides adapted from Jur van den Berg Outline POMDPs Separation Principle / Certainty Equivalence Locally Optimal
More informationCSEP 573: Artificial Intelligence
CSEP 573: Artificial Intelligence Hidden Markov Models Luke Zettlemoyer Many slides over the course adapted from either Dan Klein, Stuart Russell, Andrew Moore, Ali Farhadi, or Dan Weld 1 Outline Probabilistic
More informationIntroduction to Mobile Robotics Information Gain-Based Exploration. Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Giorgio Grisetti, Kai Arras
Introduction to Mobile Robotics Information Gain-Based Exploration Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Giorgio Grisetti, Kai Arras 1 Tasks of Mobile Robots mapping SLAM localization integrated
More informationL06. LINEAR KALMAN FILTERS. NA568 Mobile Robotics: Methods & Algorithms
L06. LINEAR KALMAN FILTERS NA568 Mobile Robotics: Methods & Algorithms 2 PS2 is out! Landmark-based Localization: EKF, UKF, PF Today s Lecture Minimum Mean Square Error (MMSE) Linear Kalman Filter Gaussian
More informationRobotics 2 Target Tracking. Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard
Robotics 2 Target Tracking Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard Slides by Kai Arras, Gian Diego Tipaldi, v.1.1, Jan 2012 Chapter Contents Target Tracking Overview Applications
More informationApproximate Inference
Approximate Inference Simulation has a name: sampling Sampling is a hot topic in machine learning, and it s really simple Basic idea: Draw N samples from a sampling distribution S Compute an approximate
More informationProbability Map Building of Uncertain Dynamic Environments with Indistinguishable Obstacles
Probability Map Building of Uncertain Dynamic Environments with Indistinguishable Obstacles Myungsoo Jun and Raffaello D Andrea Sibley School of Mechanical and Aerospace Engineering Cornell University
More informationRobotics 2 Data Association. Giorgio Grisetti, Cyrill Stachniss, Kai Arras, Wolfram Burgard
Robotics 2 Data Association Giorgio Grisetti, Cyrill Stachniss, Kai Arras, Wolfram Burgard Data Association Data association is the process of associating uncertain measurements to known tracks. Problem
More informationLecture 2: From Linear Regression to Kalman Filter and Beyond
Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation
More informationMarkov Models and Hidden Markov Models
Markov Models and Hidden Markov Models Robert Platt Northeastern University Some images and slides are used from: 1. CS188 UC Berkeley 2. RN, AIMA Markov Models We have already seen that an MDP provides
More informationProbabilistic Fundamentals in Robotics. DAUIN Politecnico di Torino July 2010
Probabilistic Fundamentals in Robotics Gaussian Filters Basilio Bona DAUIN Politecnico di Torino July 2010 Course Outline Basic mathematical framework Probabilistic models of mobile robots Mobile robot
More informationLecture 2: From Linear Regression to Kalman Filter and Beyond
Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing
More informationCS 188: Artificial Intelligence
CS 188: Artificial Intelligence Hidden Markov Models Instructor: Anca Dragan --- University of California, Berkeley [These slides were created by Dan Klein, Pieter Abbeel, and Anca. http://ai.berkeley.edu.]
More informationProbabilistic Fundamentals in Robotics. DAUIN Politecnico di Torino July 2010
Probabilistic Fundamentals in Robotics Probabilistic Models of Mobile Robots Robotic mapping Basilio Bona DAUIN Politecnico di Torino July 2010 Course Outline Basic mathematical framework Probabilistic
More informationPROBABILISTIC REASONING OVER TIME
PROBABILISTIC REASONING OVER TIME In which we try to interpret the present, understand the past, and perhaps predict the future, even when very little is crystal clear. Outline Time and uncertainty Inference:
More informationVlad Estivill-Castro. Robots for People --- A project for intelligent integrated systems
1 Vlad Estivill-Castro Robots for People --- A project for intelligent integrated systems V. Estivill-Castro 2 Probabilistic Map-based Localization (Kalman Filter) Chapter 5 (textbook) Based on textbook
More informationAdvanced Computational Methods in Statistics: Lecture 5 Sequential Monte Carlo/Particle Filtering
Advanced Computational Methods in Statistics: Lecture 5 Sequential Monte Carlo/Particle Filtering Axel Gandy Department of Mathematics Imperial College London http://www2.imperial.ac.uk/~agandy London
More informationKalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein
Kalman filtering and friends: Inference in time series models Herke van Hoof slides mostly by Michael Rubinstein Problem overview Goal Estimate most probable state at time k using measurement up to time
More informationVariable Resolution Particle Filter
In Proceedings of the International Joint Conference on Artificial intelligence (IJCAI) August 2003. 1 Variable Resolution Particle Filter Vandi Verma, Sebastian Thrun and Reid Simmons Carnegie Mellon
More informationA Probabilistic Relational Model for Characterizing Situations in Dynamic Multi-Agent Systems
A Probabilistic Relational Model for Characterizing Situations in Dynamic Multi-Agent Systems Daniel Meyer-Delius 1, Christian Plagemann 1, Georg von Wichert 2, Wendelin Feiten 2, Gisbert Lawitzky 2, and
More informationPATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA
PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA Contents in latter part Linear Dynamical Systems What is different from HMM? Kalman filter Its strength and limitation Particle Filter
More informationSequential Monte Carlo Methods for Bayesian Computation
Sequential Monte Carlo Methods for Bayesian Computation A. Doucet Kyoto Sept. 2012 A. Doucet (MLSS Sept. 2012) Sept. 2012 1 / 136 Motivating Example 1: Generic Bayesian Model Let X be a vector parameter
More informationWhy do we care? Measurements. Handling uncertainty over time: predicting, estimating, recognizing, learning. Dealing with time
Handling uncertainty over time: predicting, estimating, recognizing, learning Chris Atkeson 2004 Why do we care? Speech recognition makes use of dependence of words and phonemes across time. Knowing where
More informationHidden Markov Models. Vibhav Gogate The University of Texas at Dallas
Hidden Markov Models Vibhav Gogate The University of Texas at Dallas Intro to AI (CS 4365) Many slides over the course adapted from either Dan Klein, Luke Zettlemoyer, Stuart Russell or Andrew Moore 1
More informationSLAM Techniques and Algorithms. Jack Collier. Canada. Recherche et développement pour la défense Canada. Defence Research and Development Canada
SLAM Techniques and Algorithms Jack Collier Defence Research and Development Canada Recherche et développement pour la défense Canada Canada Goals What will we learn Gain an appreciation for what SLAM
More informationResults: MCMC Dancers, q=10, n=500
Motivation Sampling Methods for Bayesian Inference How to track many INTERACTING targets? A Tutorial Frank Dellaert Results: MCMC Dancers, q=10, n=500 1 Probabilistic Topological Maps Results Real-Time
More informationHuman Pose Tracking I: Basics. David Fleet University of Toronto
Human Pose Tracking I: Basics David Fleet University of Toronto CIFAR Summer School, 2009 Looking at People Challenges: Complex pose / motion People have many degrees of freedom, comprising an articulated
More informationEfficient Monitoring for Planetary Rovers
International Symposium on Artificial Intelligence and Robotics in Space (isairas), May, 2003 Efficient Monitoring for Planetary Rovers Vandi Verma vandi@ri.cmu.edu Geoff Gordon ggordon@cs.cmu.edu Carnegie
More informationOutline. CSE 573: Artificial Intelligence Autumn Agent. Partial Observability. Markov Decision Process (MDP) 10/31/2012
CSE 573: Artificial Intelligence Autumn 2012 Reasoning about Uncertainty & Hidden Markov Models Daniel Weld Many slides adapted from Dan Klein, Stuart Russell, Andrew Moore & Luke Zettlemoyer 1 Outline
More informationProbability: Review. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics
robabilit: Review ieter Abbeel UC Berkele EECS Man slides adapted from Thrun Burgard and Fo robabilistic Robotics Wh probabilit in robotics? Often state of robot and state of its environment are unknown
More informationLecture 6: Bayesian Inference in SDE Models
Lecture 6: Bayesian Inference in SDE Models Bayesian Filtering and Smoothing Point of View Simo Särkkä Aalto University Simo Särkkä (Aalto) Lecture 6: Bayesian Inference in SDEs 1 / 45 Contents 1 SDEs
More informationCIS 390 Fall 2016 Robotics: Planning and Perception Final Review Questions
CIS 390 Fall 2016 Robotics: Planning and Perception Final Review Questions December 14, 2016 Questions Throughout the following questions we will assume that x t is the state vector at time t, z t is the
More informationRL 14: Simplifications of POMDPs
RL 14: Simplifications of POMDPs Michael Herrmann University of Edinburgh, School of Informatics 04/03/2016 POMDPs: Points to remember Belief states are probability distributions over states Even if computationally
More informationSimultaneous Localization and Mapping Techniques
Simultaneous Localization and Mapping Techniques Andrew Hogue Oral Exam: June 20, 2005 Contents 1 Introduction 1 1.1 Why is SLAM necessary?.................................. 2 1.2 A Brief History of SLAM...................................
More informationFrom Bayes to Extended Kalman Filter
From Bayes to Extended Kalman Filter Michal Reinštein Czech Technical University in Prague Faculty of Electrical Engineering, Department of Cybernetics Center for Machine Perception http://cmp.felk.cvut.cz/
More informationCS 343: Artificial Intelligence
CS 343: Artificial Intelligence Hidden Markov Models Prof. Scott Niekum The University of Texas at Austin [These slides based on those of Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley.
More informationSimultaneous Localization and Mapping
Simultaneous Localization and Mapping Miroslav Kulich Intelligent and Mobile Robotics Group Czech Institute of Informatics, Robotics and Cybernetics Czech Technical University in Prague Winter semester
More informationCS 188: Artificial Intelligence Spring Announcements
CS 188: Artificial Intelligence Spring 2011 Lecture 18: HMMs and Particle Filtering 4/4/2011 Pieter Abbeel --- UC Berkeley Many slides over this course adapted from Dan Klein, Stuart Russell, Andrew Moore
More informationMobile Robots Localization
Mobile Robots Localization Institute for Software Technology 1 Today s Agenda Motivation for Localization Odometry Odometry Calibration Error Model 2 Robotics is Easy control behavior perception modelling
More informationMarkov chain Monte Carlo methods for visual tracking
Markov chain Monte Carlo methods for visual tracking Ray Luo rluo@cory.eecs.berkeley.edu Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA 94720
More informationMarkov Models. CS 188: Artificial Intelligence Fall Example. Mini-Forward Algorithm. Stationary Distributions.
CS 88: Artificial Intelligence Fall 27 Lecture 2: HMMs /6/27 Markov Models A Markov model is a chain-structured BN Each node is identically distributed (stationarity) Value of X at a given time is called
More informationIntroduction to Machine Learning
Introduction to Machine Learning Brown University CSCI 1950-F, Spring 2012 Prof. Erik Sudderth Lecture 25: Markov Chain Monte Carlo (MCMC) Course Review and Advanced Topics Many figures courtesy Kevin
More informationWhy do we care? Examples. Bayes Rule. What room am I in? Handling uncertainty over time: predicting, estimating, recognizing, learning
Handling uncertainty over time: predicting, estimating, recognizing, learning Chris Atkeson 004 Why do we care? Speech recognition makes use of dependence of words and phonemes across time. Knowing where
More informationParticle lter for mobile robot tracking and localisation
Particle lter for mobile robot tracking and localisation Tinne De Laet K.U.Leuven, Dept. Werktuigkunde 19 oktober 2005 Particle lter 1 Overview Goal Introduction Particle lter Simulations Particle lter
More informationRao-Blackwellized Particle Filter for Multiple Target Tracking
Rao-Blackwellized Particle Filter for Multiple Target Tracking Simo Särkkä, Aki Vehtari, Jouko Lampinen Helsinki University of Technology, Finland Abstract In this article we propose a new Rao-Blackwellized
More informationIntroduction to Mobile Robotics Probabilistic Sensor Models
Introduction to Mobile Robotics Probabilistic Sensor Models Wolfram Burgard 1 Sensors for Mobile Robots Contact sensors: Bumpers Proprioceptive sensors Accelerometers (spring-mounted masses) Gyroscopes
More informationRobotics. Lecture 4: Probabilistic Robotics. See course website for up to date information.
Robotics Lecture 4: Probabilistic Robotics See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College London Review: Sensors
More informationLecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations
Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Department of Biomedical Engineering and Computational Science Aalto University April 28, 2010 Contents 1 Multiple Model
More informationCSE 473: Ar+ficial Intelligence. Probability Recap. Markov Models - II. Condi+onal probability. Product rule. Chain rule.
CSE 473: Ar+ficial Intelligence Markov Models - II Daniel S. Weld - - - University of Washington [Most slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188
More informationBayesian Methods in Artificial Intelligence
WDS'10 Proceedings of Contributed Papers, Part I, 25 30, 2010. ISBN 978-80-7378-139-2 MATFYZPRESS Bayesian Methods in Artificial Intelligence M. Kukačka Charles University, Faculty of Mathematics and Physics,
More informationA Comparison of the EKF, SPKF, and the Bayes Filter for Landmark-Based Localization
A Comparison of the EKF, SPKF, and the Bayes Filter for Landmark-Based Localization and Timothy D. Barfoot CRV 2 Outline Background Objective Experimental Setup Results Discussion Conclusion 2 Outline
More informationAutonomous Mobile Robot Design
Autonomous Mobile Robot Design Topic: Extended Kalman Filter Dr. Kostas Alexis (CSE) These slides relied on the lectures from C. Stachniss, J. Sturm and the book Probabilistic Robotics from Thurn et al.
More informationCSE 473: Ar+ficial Intelligence. Example. Par+cle Filters for HMMs. An HMM is defined by: Ini+al distribu+on: Transi+ons: Emissions:
CSE 473: Ar+ficial Intelligence Par+cle Filters for HMMs Daniel S. Weld - - - University of Washington [Most slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All
More informationRL 14: POMDPs continued
RL 14: POMDPs continued Michael Herrmann University of Edinburgh, School of Informatics 06/03/2015 POMDPs: Points to remember Belief states are probability distributions over states Even if computationally
More information