Modeling: state estimation, probabilities, Bayes filter, particle filter. CSC752 Autonomous Robotic Systems.
Modeling. CSC752 Autonomous Robotic Systems. Ubbo Visser, Department of Computer Science, University of Miami. February 21, 2017.
Outline
1. Modeling and state estimation
2. Examples
3. State estimation
4. Probabilities
5. Bayes filter
6. Particle filter
Modeling
The model represents the current state of the environment. All sensors of a physical robot are noisy, so the model can never be exact. Robots can therefore only estimate states, for example using probabilistic methods.
State estimation
Determines a state X_t that changes over time, using a sequence of measurements z_t and u_t:
- z_t: measurement
- u_t: state transition measurement
Useful when a state cannot be measured directly and accurately (which, for a physical robot, means every state): filter noise, infer a state from measurements.
Modeling in our soccer agent: ball tracking, opponent (and teammate) localization, self-localization, orientation estimation (upright vector).
Examples
How noisy can measurements be? How can state estimation be robust despite all the errors?
Example 1
RoboCup Small-Size League: x, y positions as measurement z_t. We get only noisy measurements, but we need the actual state (the model). Problem with two robots: perceptions can be wrongly assigned to the other robot.
Example 2
Obstacle avoidance using a laser range finder: there can be several different kinds of errors in the measurements. The general model for a beam-based sensor is a mixture of several distributions. Knowledge about the behavior of a sensor (the sensor model) is very important for robust state estimation.
Source: slides from
Example 3
3D ball tracking with a camera: uncertainty, especially in the distance of the ball to the camera. The state is in world coordinates and should include the velocity; a single observation does not contain much information. Considering only physically possible trajectories reduces the uncertainty. Knowledge about the behavior of the ball and the physics is useful (state transition model).
Example 4
Self-localization in 1D with limited sensors: the door sensor is ambiguous, so even a sequence of measurements z_t is not enough to localize. Another sensor is needed, e.g. one that measures wheel rotations: measurements u_t (odometry motion model).
General state estimation
For one given observation there is high uncertainty and ambiguity. State estimation therefore uses a sequence of measurements: the estimate of X_t is based on all measurements z_0, ..., z_t and u_0, ..., u_t.
General state estimation
Problem: every time step adds more measurements, so the amount of computation keeps increasing.
Markov assumptions
Markov assumption 1: the measurement z_t depends only on the state X_t and a random error.
Markov assumption 2: the state transition measurement u_t depends only on the states X_t and X_{t+1} and a random error.
Markov process
Bayesian network with the measurements u_t and z_t:

  x_{t-1} --u_{t-1}--> x_t --u_t--> x_{t+1}
     |                  |               |
  z_{t-1}              z_t           z_{t+1}

The states x_t are hidden.
Recursive state estimation / filter
The estimate X_t includes all the knowledge from the previous measurements. To compute X_t, only X_{t-1}, z_t and u_t are needed: the belief is updated using only the new measurements, giving constant time per step.
State estimation
A sensor model and a state transition model are needed. The belief X_t is updated using z_t together with the sensor model, and using u_t together with the motion model and knowledge about the dynamics in the environment.
Example state estimation (sequence of figures).
Example 1: Small-Size League
What are the state, z_t, u_t, the sensor model, and the prediction?
- State: position x, y, θ and velocity ẋ, ẏ, θ̇
- z_t: x, y, θ
- u_t: driving command sent to the robot
- Sensor model: Gaussian distribution around the robot, maybe also small probabilities at the other robots
- Prediction using X_{t-1}, u_t and an odometry motion model
Example 3: Ball tracking
What are the state, z_t, u_t, the sensor model, and the prediction?
- State: position x, y, z and velocity ẋ, ẏ, ż
- z_t: image coordinates x, y
- u_t: none
- Sensor model: transformation from state to image, Gaussian distribution in the image
- Prediction: state transition model using physics
Bayes filter
The previous slides have shown the principle of a Bayes filter. Why exactly does this work? Probabilities, Bayes rule, recursive Bayesian estimation. Source for the following slides: Thrun et al., Probabilistic Robotics.
Discrete random variables
X denotes a random variable. X can take on a countable number of values in {x_1, x_2, ..., x_n}. P(X = x_i) is the probability that X takes on the value x_i.
Continuous random variables
X takes on values in the continuum. p(X = x) (or p(x) for short) is a probability density function. Example: Pr(x ∈ [a, b]) = ∫_a^b p(x) dx
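To make the density definition concrete, here is a small numerical sketch (not from the slides; the Gaussian is just an illustrative choice of p(x), and the integral is approximated with the midpoint rule):

```python
import math

def gauss_pdf(x, mu=0.0, sigma=1.0):
    """Probability density of a 1D Gaussian."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def prob_in_interval(a, b, pdf, steps=10000):
    """Approximate Pr(x in [a, b]) = integral_a^b p(x) dx with the midpoint rule."""
    dx = (b - a) / steps
    return sum(pdf(a + (i + 0.5) * dx) for i in range(steps)) * dx

# A density integrates to 1; for a standard Gaussian,
# Pr(x in [-1, 1]) is about 0.683 (the one-sigma interval).
total = prob_in_interval(-8, 8, gauss_pdf)
p_1sigma = prob_in_interval(-1, 1, gauss_pdf)
```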
Joint and conditional probabilities
P(X = x and Y = y) = P(x, y). If X and Y are independent, then P(x, y) = P(x)P(y). P(x | y) is the probability of x given y. If X and Y are independent, then P(x | y) = P(x).
Law of total probability
Discrete case: Σ_x P(x) = 1, P(x) = Σ_y P(x, y), P(x) = Σ_y P(x | y)P(y)
Continuous case: ∫ p(x) dx = 1, p(x) = ∫ p(x, y) dy, p(x) = ∫ p(x | y)p(y) dy
Bayes rule
p(x | y)p(y) = p(x, y) = p(y | x)p(x)
p(x | y) = p(y | x)p(x) / p(y) ∝ p(y | x)p(x), since p(y) is constant with respect to x.
Bayes rule with background knowledge: p(x | y, z) = p(y | x, z)p(x | z) / p(y | z)
Example for a simple measurement
The robot obtains the measurement z. What is P(open | z)?
Diagnostic vs. causal reasoning
P(open | z) is diagnostic; P(z | open) is causal. Often the causal knowledge is much easier to obtain (the sensor models). Bayes rule allows us to use the causal knowledge to get P(open | z):
P(open | z) = P(z | open)P(open) / P(z)
Example
P(z | open) = 0.6, P(z | ¬open) = 0.3, P(open) = P(¬open) = 0.5
P(open | z) = P(z | open)P(open) / P(z)
            = P(z | open)P(open) / (P(z | open)P(open) + P(z | ¬open)P(¬open))
            = 0.6 · 0.5 / (0.6 · 0.5 + 0.3 · 0.5) = 2/3 ≈ 0.67
The measurement z raises the probability that the door is open.
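The arithmetic of this update can be checked in a few lines (a sketch mirroring the numbers on the slide):

```python
# Bayes rule for the door example:
# P(open | z) = P(z | open) P(open) / P(z),
# with P(z) expanded by the law of total probability.
p_z_open = 0.6        # P(z | open)
p_z_not_open = 0.3    # P(z | not open)
p_open = 0.5          # prior: P(open) = P(not open) = 0.5

p_z = p_z_open * p_open + p_z_not_open * (1 - p_open)   # total probability
p_open_given_z = p_z_open * p_open / p_z
# p_open_given_z = 0.3 / 0.45 = 2/3: the measurement raises the belief.
```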
Actions
Actions increase uncertainty. The belief is updated with an action model (e.g. odometry, motion model): P(x | u, x'), where x' is the previous state.
Outcome of actions:
Discrete: P(x | u) = Σ_{x'} P(x | u, x')P(x')
Continuous: p(x | u) = ∫ p(x | u, x')p(x') dx'
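A minimal sketch of the discrete action update (the door-pushing action and its probabilities are illustrative assumptions, not from the slides):

```python
# Discrete action update P(x | u) = sum_{x'} P(x | u, x') P(x'),
# for a robot that pushes a door (numbers are illustrative).
prior = {"open": 0.5, "closed": 0.5}

# Action model P(x | push, x'): pushing an open door leaves it open;
# pushing a closed door opens it with probability 0.8.
action_model = {
    ("open", "open"): 1.0, ("closed", "open"): 0.0,
    ("open", "closed"): 0.8, ("closed", "closed"): 0.2,
}

posterior = {
    x_new: sum(action_model[(x_new, x_old)] * p for x_old, p in prior.items())
    for x_new in ("open", "closed")
}
# posterior["open"] = 1.0 * 0.5 + 0.8 * 0.5 = 0.9
```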
Markov assumptions
The measurement z_t depends only on x_t: p(z_t | x_t, ...) = p(z_t | x_t)
The state x_t depends only on x_{t-1} and u_{t-1}: p(x_t | u_{t-1}, x_{t-1}, ...) = p(x_t | u_{t-1}, x_{t-1})
Bayes filter
Given: measurements z_1, ..., z_t and action data / transition measurements u_1, ..., u_t; the sensor model p(z | x); the action model p(x | u, x'); the prior probability of the state p(x).
Wanted: the belief of the state: Bel(x_t) = p(x_t | z_t, u_{t-1}, ..., u_1, z_1)
Recursive Bayesian estimation
Bel(x_t) = p(x_t | z_t, u_{t-1}, z_{t-1}, ...)
(Bayes)       = p(z_t | x_t, u_{t-1}, z_{t-1}, ...) p(x_t | u_{t-1}, z_{t-1}, ...) / p(z_t | u_{t-1}, z_{t-1}, ...)
(z_t const.)  = η p(z_t | x_t, u_{t-1}, z_{t-1}, ...) p(x_t | u_{t-1}, z_{t-1}, ...)
(Markov)      = η p(z_t | x_t) p(x_t | u_{t-1}, z_{t-1}, ...)
(Total prob.) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}, u_{t-1}, z_{t-1}, ...) p(x_{t-1} | u_{t-1}, z_{t-1}, ...) dx_{t-1}
(Markov)      = η p(z_t | x_t) ∫ p(x_t | u_{t-1}, x_{t-1}) p(x_{t-1} | z_{t-1}, u_{t-2}, ...) dx_{t-1}
              = η p(z_t | x_t) ∫ p(x_t | u_{t-1}, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
Bayes filter implementations
Bel(x_t) = η p(z_t | x_t) ∫ p(x_t | u_{t-1}, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
Some methods based on this equation: grid-based estimator, Kalman filter, particle filter.
Grid-based estimator
The probability density function (the belief) is represented on a discretized state space. This can be a simple grid with a constant step size, or tree-based methods using e.g. octrees for more efficiency.
Grid-based estimator
Can be useful e.g. for localization using a grid-based environment map.
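A sketch of such a grid-based (histogram) Bayes filter for 1D corridor localization, in the spirit of Example 4; the map, the sensor probabilities, and the noise-free cyclic motion model are simplifying assumptions:

```python
# 1D corridor: the robot senses doors (z = 1) or walls (z = 0) and moves right.
world = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 1 = door, 0 = wall
belief = [1.0 / len(world)] * len(world)  # uniform prior over the cells

def correct(belief, z, p_hit=0.6, p_miss=0.2):
    """Sensor update: weight each cell by the likelihood p(z | x), then normalize."""
    b = [p * (p_hit if world[i] == z else p_miss) for i, p in enumerate(belief)]
    eta = sum(b)
    return [p / eta for p in b]

def predict(belief, u):
    """Action update for an exact shift of u cells (cyclic corridor, no motion noise)."""
    n = len(belief)
    return [belief[(i - u) % n] for i in range(n)]

belief = correct(belief, z=1)   # robot sees a door
belief = predict(belief, u=3)   # robot moves three cells to the right
belief = correct(belief, z=1)   # sees a door again: only cells 0 and 3 fit both readings
```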
Kalman filter
The belief is represented by multivariate normal distributions. Very efficient, and optimal for linear Gaussian systems. However, most robotic systems are nonlinear, and the filter is limited to Gaussian distributions. Extensions of the Kalman filter for nonlinearity: Extended Kalman Filter, Unscented Kalman Filter.
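A 1D Kalman filter illustrates the idea in a few lines (a sketch with made-up motion and sensor variances; real systems use the multivariate form):

```python
# 1D Kalman filter: the belief is a Gaussian (mu, var).
def kf_predict(mu, var, u, motion_var):
    """Prediction with a shift-by-u motion model: uncertainty grows."""
    return mu + u, var + motion_var

def kf_correct(mu, var, z, sensor_var):
    """Measurement update: precision-weighted blend of prediction and measurement."""
    k = var / (var + sensor_var)              # Kalman gain
    return mu + k * (z - mu), (1 - k) * var   # uncertainty shrinks

mu, var = 0.0, 1000.0                         # nearly uninformative prior
for u, z in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    mu, var = kf_predict(mu, var, u, motion_var=0.1)
    mu, var = kf_correct(mu, var, z, sensor_var=0.5)
# mu tracks the measured positions and var drops far below the prior variance
```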
Particle filter
The belief is represented by samples (particles): state estimation for non-Gaussian, nonlinear systems. Particles have weights, so a high probability in a given region can be represented either by many particles or by few particles with higher weights.
Importance sampling
Suppose we want to approximate a target density f, but we can only draw samples from a density g. The target density f can then be approximated by attaching the weight w = f(x)/g(x) to each sample x.
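A small sketch of importance sampling (the densities f and g here are illustrative choices, not from the slides):

```python
# Estimate E_f[x] where the target f is N(1, 1), but we can only
# sample from the proposal g = N(0, 2).
import math
import random

random.seed(0)

def pdf(x, mu, sigma):
    """1D Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

samples = [random.gauss(0.0, 2.0) for _ in range(100000)]         # drawn from g
weights = [pdf(x, 1.0, 1.0) / pdf(x, 0.0, 2.0) for x in samples]  # w = f(x)/g(x)

# Self-normalized importance sampling estimate of the target mean.
estimate = sum(w * x for w, x in zip(weights, samples)) / sum(weights)
# estimate is close to the target mean 1.0
```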
Example: Monte Carlo localization
Sensor information (importance sampling):
Bel(x) ← α p(z | x) Bel(x)
w = α p(z | x) Bel(x) / Bel(x) = α p(z | x)
Example: Monte Carlo localization
Robot motion (resampling and prediction):
Bel(x) ← ∫ p(x | u, x') Bel(x') dx'
Particle filter steps
State transition / prediction: sample new particles using p(x_t | u_{t-1}, x_{t-1}). In the context of localization: move particles according to a motion model.
Sensor update: set the particle weights using the likelihood p(z | x).
Resampling: draw new samples from the old particles according to their weights.
Particle filter algorithm
1: procedure ParticleFilter(S_{t-1}, u_{t-1}, z_t)
2:   S_t ← ∅, η ← 0
3:   for i = 1, ..., n do                              ▷ generate new samples
4:     sample index j(i) from the discrete distribution given by the weights w_{t-1}
5:     sample x_t^i from p(x_t | x_{t-1}, u_{t-1}) using x_{t-1}^{j(i)} and u_{t-1}
6:     w_t^i ← p(z_t | x_t^i)                          ▷ compute importance weight
7:     η ← η + w_t^i                                   ▷ update normalization factor
8:     S_t ← S_t ∪ {<x_t^i, w_t^i>}                    ▷ insert
9:   end for
10:  for i = 1, ..., n do
11:    w_t^i ← w_t^i / η                               ▷ normalize weights
12:  end for
13: end procedure
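A compact Python sketch of one filter step for a 1D state (the motion and sensor models are illustrative assumptions; `random.choices` plays the role of sampling the indices j(i)):

```python
import math
import random

random.seed(1)

def motion_sample(x, u, noise=0.1):
    """Sample from p(x_t | x_{t-1}, u_{t-1}): move by u plus Gaussian noise."""
    return x + u + random.gauss(0.0, noise)

def likelihood(z, x, sigma=0.5):
    """Sensor model p(z_t | x_t): Gaussian likelihood around the position."""
    return math.exp(-0.5 * ((z - x) / sigma) ** 2)

def particle_filter_step(particles, u, z):
    """particles: list of (x, w); returns the updated, normalized particle set."""
    xs, ws = zip(*particles)
    picked = random.choices(xs, weights=ws, k=len(xs))   # resample indices j(i)
    predicted = [motion_sample(x, u) for x in picked]    # prediction step
    weights = [likelihood(z, x) for x in predicted]      # importance weights
    eta = sum(weights)                                   # normalization factor
    return [(x, w / eta) for x, w in zip(predicted, weights)]

particles = [(random.uniform(0.0, 10.0), 1.0) for _ in range(2000)]  # uniform prior
particles = particle_filter_step(particles, u=1.0, z=4.0)
mean = sum(x * w for x, w in particles)
# the weighted mean moves toward the measurement z = 4.0
```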
Resampling
Naive resampling draws each new particle by binary search in the cumulative weights: O(n log n), high variance.
Systematic resampling (stochastic universal sampling): linear time complexity, low variance.
Source: Murray, Lawrence M., Anthony Lee, and Pierre E. Jacob. "Parallel resampling in the particle filter." arXiv preprint (2013).
Resampling algorithm
1: procedure SystematicResampling(S, n)
2:   S' ← ∅, c_1 ← w_1
3:   for i = 2, ..., n do                   ▷ generate CDF
4:     c_i ← c_{i-1} + w_i
5:   end for
6:   u_1 ~ U(0, n^{-1}], i ← 1              ▷ initialize threshold
7:   for j = 1, ..., n do                   ▷ draw samples
8:     while u_j > c_i do                   ▷ skip until next threshold reached
9:       i ← i + 1
10:    end while
11:    S' ← S' ∪ {<x^i, n^{-1}>}            ▷ insert
12:    u_{j+1} ← u_j + n^{-1}               ▷ increment threshold
13:  end for
14:  return S'
15: end procedure
Also called: stochastic universal resampling.
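A sketch of the same procedure in Python (the particle values and weights are illustrative; the extra bounds check only guards against floating-point round-off in the CDF):

```python
import random

random.seed(2)

def systematic_resample(particles):
    """particles: list of (x, w) with normalized weights; returns n equal-weight samples."""
    n = len(particles)
    out = []
    u = random.uniform(0.0, 1.0 / n)     # single random offset u_1 in (0, 1/n]
    c = particles[0][1]                  # running CDF value c_i
    i = 0
    for _ in range(n):                   # n evenly spaced thresholds
        while u > c and i + 1 < n:       # skip until the next threshold is reached
            i += 1
            c += particles[i][1]
        out.append((particles[i][0], 1.0 / n))
        u += 1.0 / n                     # increment the threshold
    return out

weighted = [("a", 0.1), ("b", 0.6), ("c", 0.3)]
resampled = systematic_resample(weighted)
# each particle is kept roughly in proportion to its weight, all with weight 1/n
```

One uniform draw positions all n thresholds, which is what gives the method its linear time and low variance compared with drawing n independent uniforms.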
Summary
Particle filters are an implementation of a recursive Bayesian filter. The belief is represented by a set of weighted samples, which can approximate arbitrary probability distributions, so the filter works for non-Gaussian, nonlinear systems and is relatively easy to implement. Depending on the state space, a large number of particles might be needed. Resampling step: new particles are drawn with a probability proportional to the likelihood of the observation.
Problems
Global localization problem (initial position); robot kidnapping problem. Augmented Monte Carlo Localization: inject new particles when the average weight decreases, either new random particles or particles based on the current perception.
Acknowledgement
The slides for this lecture have been prepared by Andreas Seekircher.
More informationProbabilistic Fundamentals in Robotics. DAUIN Politecnico di Torino July 2010
Probabilistic Fundamentals in Robotics Probabilistic Models of Mobile Robots Robotic mapping Basilio Bona DAUIN Politecnico di Torino July 2010 Course Outline Basic mathematical framework Probabilistic
More informationLocalization. Howie Choset Adapted from slides by Humphrey Hu, Trevor Decker, and Brad Neuman
Localization Howie Choset Adapted from slides by Humphrey Hu, Trevor Decker, and Brad Neuman Localization General robotic task Where am I? Techniques generalize to many estimation tasks System parameter
More informationSensor Fusion: Particle Filter
Sensor Fusion: Particle Filter By: Gordana Stojceska stojcesk@in.tum.de Outline Motivation Applications Fundamentals Tracking People Advantages and disadvantages Summary June 05 JASS '05, St.Petersburg,
More informationE190Q Lecture 10 Autonomous Robot Navigation
E190Q Lecture 10 Autonomous Robot Navigation Instructor: Chris Clark Semester: Spring 2015 1 Figures courtesy of Siegwart & Nourbakhsh Kilobots 2 https://www.youtube.com/watch?v=2ialuwgafd0 Control Structures
More information2D Image Processing. Bayes filter implementation: Kalman filter
2D Image Processing Bayes filter implementation: Kalman filter Prof. Didier Stricker Dr. Gabriele Bleser Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche
More informationCSEP 573: Artificial Intelligence
CSEP 573: Artificial Intelligence Hidden Markov Models Luke Zettlemoyer Many slides over the course adapted from either Dan Klein, Stuart Russell, Andrew Moore, Ali Farhadi, or Dan Weld 1 Outline Probabilistic
More informationHidden Markov Models. Vibhav Gogate The University of Texas at Dallas
Hidden Markov Models Vibhav Gogate The University of Texas at Dallas Intro to AI (CS 4365) Many slides over the course adapted from either Dan Klein, Luke Zettlemoyer, Stuart Russell or Andrew Moore 1
More informationPartially Observable Markov Decision Processes (POMDPs)
Partially Observable Markov Decision Processes (POMDPs) Sachin Patil Guest Lecture: CS287 Advanced Robotics Slides adapted from Pieter Abbeel, Alex Lee Outline Introduction to POMDPs Locally Optimal Solutions
More informationControl. CSC752: Autonomous Robotic Systems. Ubbo Visser. March 9, Department of Computer Science University of Miami
Control CSC752: Autonomous Robotic Systems Ubbo Visser Department of Computer Science University of Miami March 9, 2017 Outline 1 Control system 2 Controller Images from http://en.wikipedia.org/wiki/feed-forward
More informationHidden Markov Models. AIMA Chapter 15, Sections 1 5. AIMA Chapter 15, Sections 1 5 1
Hidden Markov Models AIMA Chapter 15, Sections 1 5 AIMA Chapter 15, Sections 1 5 1 Consider a target tracking problem Time and uncertainty X t = set of unobservable state variables at time t e.g., Position
More informationProbabilistic Fundamentals in Robotics. DAUIN Politecnico di Torino July 2010
Probabilistic Fundamentals in Robotics Gaussian Filters Basilio Bona DAUIN Politecnico di Torino July 2010 Course Outline Basic mathematical framework Probabilistic models of mobile robots Mobile robot
More informationAutonomous Navigation for Flying Robots
Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 6.2: Kalman Filter Jürgen Sturm Technische Universität München Motivation Bayes filter is a useful tool for state
More informationWhy do we care? Examples. Bayes Rule. What room am I in? Handling uncertainty over time: predicting, estimating, recognizing, learning
Handling uncertainty over time: predicting, estimating, recognizing, learning Chris Atkeson 004 Why do we care? Speech recognition makes use of dependence of words and phonemes across time. Knowing where
More informationLocal Probabilistic Models: Continuous Variable CPDs
Local Probabilistic Models: Continuous Variable CPDs Sargur srihari@cedar.buffalo.edu 1 Topics 1. Simple discretizing loses continuity 2. Continuous Variable CPDs 3. Linear Gaussian Model Example of car
More informationIntroduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak
Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak 1 Introduction. Random variables During the course we are interested in reasoning about considered phenomenon. In other words,
More informationResults: MCMC Dancers, q=10, n=500
Motivation Sampling Methods for Bayesian Inference How to track many INTERACTING targets? A Tutorial Frank Dellaert Results: MCMC Dancers, q=10, n=500 1 Probabilistic Topological Maps Results Real-Time
More informationIntroduction to Machine Learning
Introduction to Machine Learning Brown University CSCI 1950-F, Spring 2012 Prof. Erik Sudderth Lecture 25: Markov Chain Monte Carlo (MCMC) Course Review and Advanced Topics Many figures courtesy Kevin
More informationPROBABILISTIC REASONING OVER TIME
PROBABILISTIC REASONING OVER TIME In which we try to interpret the present, understand the past, and perhaps predict the future, even when very little is crystal clear. Outline Time and uncertainty Inference:
More informationMarkov Models and Hidden Markov Models
Markov Models and Hidden Markov Models Robert Platt Northeastern University Some images and slides are used from: 1. CS188 UC Berkeley 2. RN, AIMA Markov Models We have already seen that an MDP provides
More informationSimultaneous Localization and Mapping (SLAM) Corso di Robotica Prof. Davide Brugali Università degli Studi di Bergamo
Simultaneous Localization and Mapping (SLAM) Corso di Robotica Prof. Davide Brugali Università degli Studi di Bergamo Introduction SLAM asks the following question: Is it possible for an autonomous vehicle
More informationAutonomous Mobile Robot Design
Autonomous Mobile Robot Design Topic: Extended Kalman Filter Dr. Kostas Alexis (CSE) These slides relied on the lectures from C. Stachniss, J. Sturm and the book Probabilistic Robotics from Thurn et al.
More informationCS 188: Artificial Intelligence Spring Announcements
CS 188: Artificial Intelligence Spring 2011 Lecture 18: HMMs and Particle Filtering 4/4/2011 Pieter Abbeel --- UC Berkeley Many slides over this course adapted from Dan Klein, Stuart Russell, Andrew Moore
More informationBayesian Methods in Positioning Applications
Bayesian Methods in Positioning Applications Vedran Dizdarević v.dizdarevic@tugraz.at Graz University of Technology, Austria 24. May 2006 Bayesian Methods in Positioning Applications p.1/21 Outline Problem
More informationSampling Based Filters
Statistical Techniques in Robotics (16-831, F12) Lecture#03 (Monday September 5) Sampling Based Filters Lecturer: Drew Bagnell Scribes: M. Taylor, M. Shomin 1 1 Beam-based Sensor Model Many common sensors
More informationPATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA
PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 13: SEQUENTIAL DATA Contents in latter part Linear Dynamical Systems What is different from HMM? Kalman filter Its strength and limitation Particle Filter
More informationCIS 390 Fall 2016 Robotics: Planning and Perception Final Review Questions
CIS 390 Fall 2016 Robotics: Planning and Perception Final Review Questions December 14, 2016 Questions Throughout the following questions we will assume that x t is the state vector at time t, z t is the
More informationAnnouncements. CS 188: Artificial Intelligence Fall VPI Example. VPI Properties. Reasoning over Time. Markov Models. Lecture 19: HMMs 11/4/2008
CS 88: Artificial Intelligence Fall 28 Lecture 9: HMMs /4/28 Announcements Midterm solutions up, submit regrade requests within a week Midterm course evaluation up on web, please fill out! Dan Klein UC
More informationHidden Markov Models. Hal Daumé III. Computer Science University of Maryland CS 421: Introduction to Artificial Intelligence 19 Apr 2012
Hidden Markov Models Hal Daumé III Computer Science University of Maryland me@hal3.name CS 421: Introduction to Artificial Intelligence 19 Apr 2012 Many slides courtesy of Dan Klein, Stuart Russell, or
More informationBayesian Inference and MCMC
Bayesian Inference and MCMC Aryan Arbabi Partly based on MCMC slides from CSC412 Fall 2018 1 / 18 Bayesian Inference - Motivation Consider we have a data set D = {x 1,..., x n }. E.g each x i can be the
More informationVlad Estivill-Castro. Robots for People --- A project for intelligent integrated systems
1 Vlad Estivill-Castro Robots for People --- A project for intelligent integrated systems V. Estivill-Castro 2 Probabilistic Map-based Localization (Kalman Filter) Chapter 5 (textbook) Based on textbook
More informationSimultaneous Localization and Mapping Techniques
Simultaneous Localization and Mapping Techniques Andrew Hogue Oral Exam: June 20, 2005 Contents 1 Introduction 1 1.1 Why is SLAM necessary?.................................. 2 1.2 A Brief History of SLAM...................................
More informationDynamic models 1 Kalman filters, linearization,
Koller & Friedman: Chapter 16 Jordan: Chapters 13, 15 Uri Lerner s Thesis: Chapters 3,9 Dynamic models 1 Kalman filters, linearization, Switching KFs, Assumed density filters Probabilistic Graphical Models
More informationArtificial Intelligence
Artificial Intelligence Roman Barták Department of Theoretical Computer Science and Mathematical Logic Summary of last lecture We know how to do probabilistic reasoning over time transition model P(X t
More informationDynamic System Identification using HDMR-Bayesian Technique
Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in
More informationECE295, Data Assimila0on and Inverse Problems, Spring 2015
ECE295, Data Assimila0on and Inverse Problems, Spring 2015 1 April, Intro; Linear discrete Inverse problems (Aster Ch 1 and 2) Slides 8 April, SVD (Aster ch 2 and 3) Slides 15 April, RegularizaFon (ch
More informationBayesian Methods / G.D. Hager S. Leonard
Bayesian Methods Recall Robot Localization Given Sensor readings z 1, z 2,, z t = z 1:t Known control inputs u 0, u 1, u t = u 0:t Known model t+1 t, u t ) with initial 1 u 0 ) Known map z t t ) Compute
More informationProbabilistic Robotics
University of Rome La Sapienza Master in Artificial Intelligence and Robotics Probabilistic Robotics Prof. Giorgio Grisetti Course web site: http://www.dis.uniroma1.it/~grisetti/teaching/probabilistic_ro
More informationApproximate Inference
Approximate Inference Simulation has a name: sampling Sampling is a hot topic in machine learning, and it s really simple Basic idea: Draw N samples from a sampling distribution S Compute an approximate
More informationAdvanced Computational Methods in Statistics: Lecture 5 Sequential Monte Carlo/Particle Filtering
Advanced Computational Methods in Statistics: Lecture 5 Sequential Monte Carlo/Particle Filtering Axel Gandy Department of Mathematics Imperial College London http://www2.imperial.ac.uk/~agandy London
More informationProbabilistic Graphical Models
Probabilistic Graphical Models Brown University CSCI 2950-P, Spring 2013 Prof. Erik Sudderth Lecture 12: Gaussian Belief Propagation, State Space Models and Kalman Filters Guest Kalman Filter Lecture by
More informationFrom Bayes to Extended Kalman Filter
From Bayes to Extended Kalman Filter Michal Reinštein Czech Technical University in Prague Faculty of Electrical Engineering, Department of Cybernetics Center for Machine Perception http://cmp.felk.cvut.cz/
More informationCS 188: Artificial Intelligence
CS 188: Artificial Intelligence Hidden Markov Models Instructor: Anca Dragan --- University of California, Berkeley [These slides were created by Dan Klein, Pieter Abbeel, and Anca. http://ai.berkeley.edu.]
More informationIntroduction to Mobile Robotics SLAM: Simultaneous Localization and Mapping
Introduction to Mobile Robotics SLAM: Simultaneous Localization and Mapping Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz What is SLAM? Estimate the pose of a robot and the map of the environment
More informationWinter 2019 Math 106 Topics in Applied Mathematics. Lecture 8: Importance Sampling
Winter 2019 Math 106 Topics in Applied Mathematics Data-driven Uncertainty Quantification Yoonsang Lee (yoonsang.lee@dartmouth.edu) Lecture 8: Importance Sampling 8.1 Importance Sampling Importance sampling
More informationChapter 9: Bayesian Methods. CS 336/436 Gregory D. Hager
Chapter 9: Bayesian Methods CS 336/436 Gregory D. Hager A Simple Eample Why Not Just Use a Kalman Filter? Recall Robot Localization Given Sensor readings y 1, y 2, y = y 1: Known control inputs u 0, u
More informationBayesian Calibration of Simulators with Structured Discretization Uncertainty
Bayesian Calibration of Simulators with Structured Discretization Uncertainty Oksana A. Chkrebtii Department of Statistics, The Ohio State University Joint work with Matthew T. Pratola (Statistics, The
More informationExpectation Propagation in Dynamical Systems
Expectation Propagation in Dynamical Systems Marc Peter Deisenroth Joint Work with Shakir Mohamed (UBC) August 10, 2012 Marc Deisenroth (TU Darmstadt) EP in Dynamical Systems 1 Motivation Figure : Complex
More informationPart 1: Expectation Propagation
Chalmers Machine Learning Summer School Approximate message passing and biomedicine Part 1: Expectation Propagation Tom Heskes Machine Learning Group, Institute for Computing and Information Sciences Radboud
More informationRecall from last time: Conditional probabilities. Lecture 2: Belief (Bayesian) networks. Bayes ball. Example (continued) Example: Inference problem
Recall from last time: Conditional probabilities Our probabilistic models will compute and manipulate conditional probabilities. Given two random variables X, Y, we denote by Lecture 2: Belief (Bayesian)
More informationContent.
Content Fundamentals of Bayesian Techniques (E. Sucar) Bayesian Filters (O. Aycard) Definition & interests Implementations Hidden Markov models Discrete Bayesian Filters or Markov localization Kalman filters
More informationCSE 473: Artificial Intelligence Spring 2014
CSE 473: Artificial Intelligence Spring 2014 Hidden Markov Models Hanna Hajishirzi Many slides adapted from Dan Weld, Pieter Abbeel, Dan Klein, Stuart Russell, Andrew Moore & Luke Zettlemoyer 1 Outline
More informationParallel Particle Filter in Julia
Parallel Particle Filter in Julia Gustavo Goretkin December 12, 2011 1 / 27 First a disclaimer The project in a sentence. workings 2 / 27 First a disclaimer First a disclaimer The project in a sentence.
More informationRao-Blackwellized Particle Filtering for 6-DOF Estimation of Attitude and Position via GPS and Inertial Sensors
Rao-Blackwellized Particle Filtering for 6-DOF Estimation of Attitude and Position via GPS and Inertial Sensors GRASP Laboratory University of Pennsylvania June 6, 06 Outline Motivation Motivation 3 Problem
More informationMiscellaneous. Regarding reading materials. Again, ask questions (if you have) and ask them earlier
Miscellaneous Regarding reading materials Reading materials will be provided as needed If no assigned reading, it means I think the material from class is sufficient Should be enough for you to do your
More informationComputer Intensive Methods in Mathematical Statistics
Computer Intensive Methods in Mathematical Statistics Department of mathematics johawes@kth.se Lecture 16 Advanced topics in computational statistics 18 May 2017 Computer Intensive Methods (1) Plan of
More informationLecture: Gaussian Process Regression. STAT 6474 Instructor: Hongxiao Zhu
Lecture: Gaussian Process Regression STAT 6474 Instructor: Hongxiao Zhu Motivation Reference: Marc Deisenroth s tutorial on Robot Learning. 2 Fast Learning for Autonomous Robots with Gaussian Processes
More information