Kalman Filters, Dynamic Bayesian Networks


1 Course 16:198:520: Introduction To Artificial Intelligence. Lecture 12: Kalman Filters, Dynamic Bayesian Networks. Abdeslam Boularias. Monday, November 16.

2 Example: Tracking the trajectory of a rocket
Objective: identify exactly where the rocket is at the current moment.
[Figure: altitude versus time, showing the true trajectory and the noisy observed trajectory.]

3 Example: Tracking the trajectory of a rocket
Objective: identify exactly where the rocket is at the current moment.
Observation model: the rocket sends information about its current altitude obtained from sensors, such as Inertial Measurement Units (IMUs) and satellites. These measurements are not precise enough.
Transition model: if we know the velocities and accelerations of the rocket, we can predict its current altitude. These predictions are also not precise enough (wind, imperfect models).
Problem: use both the observation model and the transition model to estimate the position of the rocket.

4 Example: Tracking the trajectory of a rocket
Bayesian-network view of the problem:
[Figure: a chain of hidden states $X_0, X_1, \ldots, X_k, \ldots, X_t$, each state $X_k$ emitting an evidence node $E_k$.]
State $X_t$ is the current position of the rocket. Its true value is unknown.
Evidence $E_t$ is the estimated current position of the rocket according to the sensors.
Given all the past and current observations $e_{1:t} = [e_1, e_2, \ldots, e_t]$, we want to compute a distribution $P(X_t \mid e_{1:t})$ over the current state.

5 Filtering (state estimation)
We have seen in the previous lecture how $P(X_t \mid e_{1:t})$ can be calculated recursively by means of the forward procedure:
$$P(X_t \mid e_{1:t}) = \frac{P(e_t \mid X_t) \sum_{x_{t-1}} P(X_t \mid x_{t-1})\, P(x_{t-1} \mid e_{1:t-1})}{\sum_{x_t} P(e_t \mid x_t) \sum_{x_{t-1}} P(x_t \mid x_{t-1})\, P(x_{t-1} \mid e_{1:t-1})}$$
Problem: in the rocket tracking problem, the state $X_t$ corresponds to the altitude, which is a continuous variable. There are infinitely many possible values of $X_t$.

6 Filtering (state estimation)
We have seen in the previous lecture how $P(X_t \mid e_{1:t})$ can be calculated recursively by means of the forward procedure:
$$P(X_t \mid e_{1:t}) = \frac{P(e_t \mid X_t) \sum_{x_{t-1}} P(X_t \mid x_{t-1})\, P(x_{t-1} \mid e_{1:t-1})}{\sum_{x_t} P(e_t \mid x_t) \sum_{x_{t-1}} P(x_t \mid x_{t-1})\, P(x_{t-1} \mid e_{1:t-1})}$$
Problem: in the rocket tracking problem, the state $X_t$ corresponds to the altitude, which is a continuous variable. There are infinitely many possible values of $X_t$.
Solution: we can derive exactly the same result by replacing the sums with integrals.

7 Filtering (state estimation)
We have seen in the previous lecture how $P(X_t \mid e_{1:t})$ can be calculated recursively by means of the forward procedure:
$$P(X_t \mid e_{1:t}) = \frac{P(e_t \mid X_t) \sum_{x_{t-1}} P(X_t \mid x_{t-1})\, P(x_{t-1} \mid e_{1:t-1})}{\sum_{x_t} P(e_t \mid x_t) \sum_{x_{t-1}} P(x_t \mid x_{t-1})\, P(x_{t-1} \mid e_{1:t-1})}$$
Problem: how can we calculate $\int_{x_{t-1}} P(X_t \mid x_{t-1})\, P(x_{t-1} \mid e_{1:t-1})\, dx_{t-1}$?

8 Filtering (state estimation)
We have seen in the previous lecture how $P(X_t \mid e_{1:t})$ can be calculated recursively by means of the forward procedure:
$$P(X_t \mid e_{1:t}) = \frac{P(e_t \mid X_t) \sum_{x_{t-1}} P(X_t \mid x_{t-1})\, P(x_{t-1} \mid e_{1:t-1})}{\sum_{x_t} P(e_t \mid x_t) \sum_{x_{t-1}} P(x_t \mid x_{t-1})\, P(x_{t-1} \mid e_{1:t-1})}$$
Problem: how can we calculate $\int_{x_{t-1}} P(X_t \mid x_{t-1})\, P(x_{t-1} \mid e_{1:t-1})\, dx_{t-1}$?
Solution: assume that $P(X_t \mid X_{t-1})$ and $P(E_t \mid X_t)$ are linear Gaussian.
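For a concrete reference point, here is a minimal sketch (in Python with NumPy; the function and variable names are illustrative) of one step of the discrete forward procedure above. The Kalman filter replaces the sum over states with an integral against a Gaussian density.

```python
import numpy as np

def forward_step(prev_belief, T, obs_lik):
    """One step of the discrete forward procedure:
    P(X_t | e_1:t) is proportional to
    P(e_t | X_t) * sum_{x_{t-1}} P(X_t | x_{t-1}) P(x_{t-1} | e_1:t-1).

    prev_belief: P(x_{t-1} | e_{1:t-1}) as a vector of length n
    T:           transition matrix, T[i, j] = P(X_t = j | X_{t-1} = i)
    obs_lik:     evidence likelihoods P(e_t | X_t = j) as a vector of length n
    """
    predicted = prev_belief @ T               # sum over x_{t-1}
    unnormalized = obs_lik * predicted        # multiply by the evidence likelihood
    return unnormalized / unnormalized.sum()  # the denominator is the normalizer

# Umbrella-style example with two hidden states and illustrative numbers:
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])
obs_lik = np.array([0.9, 0.2])
print(forward_step(np.array([0.5, 0.5]), T, obs_lik))
```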

9 Why do we use Gaussian distributions?
Gaussian distributions are self-conjugate: if the prior $P(X)$ is Gaussian and the evidence likelihood $P(Y \mid X)$ is Gaussian, then the posterior
$$P(X \mid Y) = \frac{P(Y \mid X)\, P(X)}{P(Y)} = \frac{P(Y \mid X)\, P(X)}{\int_x P(Y \mid x)\, P(x)\, dx}$$
is also Gaussian.
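In the scalar case with known variances, self-conjugacy gives a closed-form posterior. A minimal sketch (illustrative names, assuming the evidence model $Y \mid X \sim N(X, \sigma_y^2)$):

```python
def gaussian_posterior(mu0, var0, y, var_y):
    """Posterior over X given the prior X ~ N(mu0, var0) and an observation
    y of Y, where Y | X ~ N(X, var_y). By self-conjugacy the posterior is
    again Gaussian; its mean is a precision-weighted average."""
    var_post = 1.0 / (1.0 / var0 + 1.0 / var_y)    # posterior variance
    mu_post = var_post * (mu0 / var0 + y / var_y)  # precision-weighted mean
    return mu_post, var_post

print(gaussian_posterior(0.0, 4.0, 2.0, 1.0))  # posterior mean pulled toward y = 2
```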

10 Kalman Filter
Developed independently by P. Swerling (1958) and R. E. Kalman (1960), with early antecedents due to T. N. Thiele. Famously used for trajectory estimation in the Apollo program. Widely used for tracking moving objects, stock market data, etc.
If the current distribution $P(x_{t-1} \mid e_{1:t-1})$ is Gaussian and the transition model $P(X_t \mid x_{t-1})$ is linear Gaussian, then the one-step predicted distribution given by
$$P(X_t \mid e_{1:t-1}) = \int_{x_{t-1}} P(X_t, x_{t-1} \mid e_{1:t-1})\, dx_{t-1} = \int_{x_{t-1}} P(X_t \mid x_{t-1})\, P(x_{t-1} \mid e_{1:t-1})\, dx_{t-1}$$
is also Gaussian.

11 Kalman Filter
If the current distribution $P(x_{t-1} \mid e_{1:t-1})$ is Gaussian and the transition model $P(X_t \mid x_{t-1})$ is linear Gaussian, then the one-step predicted distribution
$$P(X_t \mid e_{1:t-1}) = \int_{x_{t-1}} P(X_t \mid x_{t-1})\, P(x_{t-1} \mid e_{1:t-1})\, dx_{t-1}$$
is also Gaussian.
If the prediction $P(X_t \mid e_{1:t-1})$ is Gaussian and the sensor model $P(e_t \mid X_t)$ is linear Gaussian, then, after conditioning on the new evidence, the updated distribution
$$P(X_t \mid e_{1:t}) = \frac{P(e_t \mid X_t)\, P(X_t \mid e_{1:t-1})}{\int_{x_t} P(e_t \mid x_t)\, P(x_t \mid e_{1:t-1})\, dx_t}$$
is also Gaussian.

12 Kalman Filter
A multivariate Gaussian distribution is uniquely identified by a mean vector $\mu$ and a covariance matrix $\Sigma$.
Reminder: let $X = (X_1, X_2, \ldots, X_n)$ be a multivariate Gaussian with mean $\mu$ and covariance matrix $\Sigma$:
$$\mu = \big(\text{mean of } X_1,\ \text{mean of } X_2,\ \ldots,\ \text{mean of } X_n\big)$$
$$\Sigma = \begin{pmatrix} \mathrm{Cov}(X_1, X_1) & \mathrm{Cov}(X_1, X_2) & \cdots & \mathrm{Cov}(X_1, X_n) \\ \mathrm{Cov}(X_2, X_1) & \mathrm{Cov}(X_2, X_2) & \cdots & \mathrm{Cov}(X_2, X_n) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{Cov}(X_n, X_1) & \mathrm{Cov}(X_n, X_2) & \cdots & \mathrm{Cov}(X_n, X_n) \end{pmatrix}$$
Remember,
$$\mathrm{Cov}(X_i, X_j) = \int_{x_i} \int_{x_j} (x_i - \mathrm{mean}(X_i))(x_j - \mathrm{mean}(X_j))\, p(x_i, x_j)\, dx_i\, dx_j.$$
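As a quick sanity check of these definitions, $\mu$ and $\Sigma$ can be estimated from samples. A sketch with NumPy (all numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean=[0.0, 1.0],
                                  cov=[[2.0, 0.5],
                                       [0.5, 1.0]],
                                  size=100_000)   # each row is a draw of (X_1, X_2)
mu_hat = samples.mean(axis=0)                     # estimate of the mean vector
Sigma_hat = np.cov(samples, rowvar=False)         # estimate of the covariance matrix
print(mu_hat)
print(Sigma_hat)                                  # both close to the true mu and Sigma
```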

13 Kalman Filter
The FORWARD operator for Kalman filtering takes a Gaussian forward message $f_{1:t-1}$, specified by a mean $\mu_{t-1}$ and covariance matrix $\Sigma_{t-1}$, and produces a new Gaussian forward message $f_{1:t}$, specified by a mean $\mu_t$ and covariance matrix $\Sigma_t$:
$$f_{1:t-1} = (\mu_{t-1}, \Sigma_{t-1}) \ \xrightarrow{\ \text{evidence } e_t\ }\ f_{1:t} = (\mu_t, \Sigma_t)$$
$f_{1:t} = (\mu_t, \Sigma_t)$ defines the state distribution at time $t$: $P(X_t \mid e_{1:t}) = N(\mu_t, \Sigma_t)$.

14 Linear Gaussian distribution
Let $X \in \mathbb{R}^k$, $Y \in \mathbb{R}^l$, and let $F$ be a $k \times l$ matrix. $P(X \mid Y)$ is linear-Gaussian if
$$P(X \mid Y) = N(FY, \Sigma) = \frac{1}{\sqrt{(2\pi)^k\, |\Sigma|}} \exp\left(-\frac{1}{2}(X - FY)^T \Sigma^{-1} (X - FY)\right),$$
where $\Sigma$ is a covariance matrix and $|\Sigma|$ is the determinant of $\Sigma$.
Note: $P$ here is a density function. Probabilities are obtained by integrating $P$ over a given region. A density function is typically denoted by $f$; we use a different notation for clarity.
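A minimal sketch of evaluating this density, assuming SciPy is available (`linear_gaussian_pdf` is an illustrative name, not a standard API):

```python
import numpy as np
from scipy.stats import multivariate_normal

def linear_gaussian_pdf(x, y, F, Sigma):
    """Density of X at x under the linear-Gaussian model P(X | Y = y) = N(F y, Sigma)."""
    return multivariate_normal.pdf(x, mean=F @ y, cov=Sigma)

F = np.array([[1.0, 0.5],
              [0.0, 1.0]])
Sigma = 0.1 * np.eye(2)
print(linear_gaussian_pdf(np.array([1.8, 2.1]), np.array([0.8, 2.1]), F, Sigma))
```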

15 Example: Tracking a Rocket's Altitude
Let's consider a simple example where the state is a single variable $X_t \in \mathbb{R}$ that corresponds to the true altitude of a rocket. Assume that the velocity of the rocket is constant and equal to $v$.
Can we say that $X_{t+1} = X_t + v$? What about the wind? What about air friction? What about small accelerations/decelerations that we didn't account for?
Transition (dynamics) model:
$$P(X_{t+1} \mid X_t) = N(X_t + v, \sigma_x^2) = \alpha \exp\left(-\frac{1}{2}\, \frac{(X_{t+1} - (X_t + v))^2}{\sigma_x^2}\right).$$

16 Rocket Altitude Example
Prior distribution: we may or may not know the exact initial altitude $X_0$ of the rocket. For example, if we are tracking an enemy rocket, we do not know its exact initial altitude when it is first spotted in the sky. In general,
$$P(X_0) = N(\mu_0, \sigma_0^2) = \alpha \exp\left(-\frac{1}{2}\, \frac{(X_0 - \mu_0)^2}{\sigma_0^2}\right).$$

17 Rocket Altitude Example
Observation (sensor) model: we do not have access to the true altitude $X_t$. What we observe is a variable $Z_t \in \mathbb{R}$ that corresponds to a measurement of the rocket's altitude.
Can we say that $Z_t = X_t$? No, the observation $Z_t$ is a noisy estimate of $X_t$:
$$P(Z_t \mid X_t) = N(X_t, \sigma_z^2) = \alpha \exp\left(-\frac{1}{2}\, \frac{(Z_t - X_t)^2}{\sigma_z^2}\right).$$
Note: in dynamical systems, we often use $Z_t$ instead of $E_t$ to denote an observation (evidence).
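With the prior, transition, and sensor models in place, one full predict-update cycle for this scalar model can be sketched as follows; the gain form used here is algebraically equivalent to the $\mu_1$, $\sigma_1^2$ formulas derived on the next slides (all names and numbers are illustrative):

```python
def kalman_1d(mu, var, z, v, var_x, var_z):
    """One predict + update cycle for the scalar rocket-altitude model.
    mu, var:      current belief P(X_t | z_{1:t}) = N(mu, var)
    z:            new altitude measurement Z_{t+1}
    v:            assumed constant velocity
    var_x, var_z: transition and sensor noise variances
    """
    # Predict: constant-velocity motion plus transition noise
    mu_pred, var_pred = mu + v, var + var_x
    # Update: condition on z (Gaussian conjugacy)
    k = var_pred / (var_pred + var_z)   # scalar Kalman gain
    mu_new = mu_pred + k * (z - mu_pred)
    var_new = (1.0 - k) * var_pred
    return mu_new, var_new

mu1, var1 = kalman_1d(10.0, 4.0, z=12.3, v=2.0, var_x=1.0, var_z=0.5)
```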

18 Kalman Filter Steps
[Figure: the Kalman filter predict/update cycle, from Wikipedia.]

19 Rocket Altitude Example
Now, given the prior $P(X_0)$, how can we compute $P(X_1)$? We need to take into account the following two facts:
1. The rocket moved from $X_0$ to $X_1$ with velocity $v$.
2. The measured value of $X_1$ is $Z_1$.

20 Rocket Altitude Example
Now, given the prior $P(X_0)$, the one-step predicted distribution is
$$\begin{aligned} P(X_1) &= \int P(X_1 \mid x_0)\, P(x_0)\, dx_0 \\ &= \alpha \int \exp\left(-\frac{1}{2}\, \frac{(X_1 - (x_0 + v))^2}{\sigma_x^2}\right) \exp\left(-\frac{1}{2}\, \frac{(x_0 - \mu_0)^2}{\sigma_0^2}\right) dx_0 \\ &= \alpha \int \exp\left(-\frac{1}{2}\, \frac{\sigma_0^2 (X_1 - (x_0 + v))^2 + \sigma_x^2 (x_0 - \mu_0)^2}{\sigma_x^2\, \sigma_0^2}\right) dx_0 \\ &= \alpha \exp\left(-\frac{1}{2}\, \frac{(X_1 - (\mu_0 + v))^2}{\sigma_x^2 + \sigma_0^2}\right) \underbrace{\int \exp\left(-\frac{1}{2}\, \frac{(x_0 - a)^2}{b^2}\right) dx_0}_{\text{constant}} \\ &= \alpha' \exp\left(-\frac{1}{2}\, \frac{(X_1 - (\mu_0 + v))^2}{\sigma_x^2 + \sigma_0^2}\right), \end{aligned}$$
where the last two lines follow from completing the square in $x_0$.
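The result of this derivation, $P(X_1) = N(\mu_0 + v, \sigma_x^2 + \sigma_0^2)$, can be checked numerically by comparing the integral on the first line against the closed form. A sketch assuming SciPy, with arbitrary values:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu0, s0, v, sx = 1.0, 2.0, 0.5, 1.5   # illustrative prior mean/std, velocity, noise std
x1 = 3.0                              # point at which to evaluate the density

# Left-hand side: integrate P(x1 | x0) P(x0) over x0 numerically
lhs, _ = quad(lambda x0: norm.pdf(x1, x0 + v, sx) * norm.pdf(x0, mu0, s0),
              -np.inf, np.inf)
# Right-hand side: the closed-form prediction N(mu0 + v, s0^2 + sx^2)
rhs = norm.pdf(x1, mu0 + v, np.sqrt(s0**2 + sx**2))
print(lhs, rhs)  # the two values agree
```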

21 Rocket Altitude Example
To complete the update step, we need to condition on the observation at the first time step, namely $Z_1$:
$$\begin{aligned} P(X_1 \mid Z_1) &= \alpha P(Z_1 \mid X_1)\, P(X_1) \\ &= \alpha \exp\left(-\frac{1}{2}\, \frac{(Z_1 - X_1)^2}{\sigma_z^2}\right) \exp\left(-\frac{1}{2}\, \frac{(X_1 - (\mu_0 + v))^2}{\sigma_x^2 + \sigma_0^2}\right) \\ &= \alpha \exp\left(-\frac{1}{2}\, \frac{(X_1 - \mu_1)^2}{\sigma_1^2}\right) = N(\mu_1, \sigma_1^2), \end{aligned}$$
with
$$\mu_1 = \frac{(\sigma_0^2 + \sigma_x^2)\, Z_1 + \sigma_z^2 (\mu_0 + v)}{\sigma_0^2 + \sigma_x^2 + \sigma_z^2}, \qquad \sigma_1^2 = \frac{(\sigma_0^2 + \sigma_x^2)\, \sigma_z^2}{\sigma_0^2 + \sigma_x^2 + \sigma_z^2}.$$

22 Rocket Altitude Example
$P(X_1 \mid Z_1) = N(\mu_1, \sigma_1^2)$, with
$$\mu_1 = \frac{(\sigma_0^2 + \sigma_x^2)\, Z_1 + \sigma_z^2 (\mu_0 + v)}{\sigma_0^2 + \sigma_x^2 + \sigma_z^2}, \qquad \sigma_1^2 = \frac{(\sigma_0^2 + \sigma_x^2)\, \sigma_z^2}{\sigma_0^2 + \sigma_x^2 + \sigma_z^2}.$$
Notice:
- $\lim_{\sigma_0, \sigma_x \to 0} \mu_1 = \mu_0 + v$
- $\lim_{\sigma_z \to \infty} \mu_1 = \mu_0 + v$
- $\lim_{\sigma_x \to \infty} \mu_1 = Z_1$
- $\lim_{\sigma_z \to 0} \mu_1 = Z_1$
In words: when the transition model is fully trusted (or the sensor is useless), the filter follows the prediction $\mu_0 + v$; when the sensor is fully trusted (or the transition model is useless), it follows the observation $Z_1$.
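These limits can be checked numerically with the `kalman_1d` sketch given after slide 17 (here $\mu_0 = 10$, $v = 2$, so the prediction is $\mu_0 + v = 12$, and the observation is $Z_1 = 99$):

```python
# sigma_z -> infinity: ignore the sensor, follow the prediction (prints ~12)
print(kalman_1d(10.0, 1.0, z=99.0, v=2.0, var_x=1.0, var_z=1e12)[0])
# sigma_x -> infinity: ignore the transition model, follow the observation (~99)
print(kalman_1d(10.0, 1.0, z=99.0, v=2.0, var_x=1e12, var_z=1.0)[0])
# sigma_z -> 0: a perfect sensor, follow the observation (~99)
print(kalman_1d(10.0, 1.0, z=99.0, v=2.0, var_x=1.0, var_z=1e-12)[0])
```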

23 Rocket Altitude Example
[Figure: stages in the Kalman filter update cycle, plotting the prior $P(x_0)$, the prediction $P(x_1)$, and the posterior $P(x_1 \mid z_1 = 2.5)$ against the position $x$.]
Notice how the variance of $P(X_1)$ is higher than the prior variance of $P(X_0)$; this variance is brought back down by the new evidence $Z_1$.

24 General Case
Transition (dynamics) model: $P(X_{t+1} \mid X_t) = N(F X_t, \Sigma_x)$, where $F$ is a matrix and $X_t, X_{t+1}$ are vectors.
Observation (sensor) model: $P(Z_t \mid X_t) = N(H X_t, \Sigma_z)$, where $H$ is a matrix and $X_t, Z_t$ are vectors.
Prior distribution: $P(X_0) = N(\mu_0, \Sigma_0)$.

25 General Case
Update equations for the mean and covariance:
$$\mu_{t+1} = F \mu_t + K_{t+1} (Z_{t+1} - H F \mu_t)$$
$$\Sigma_{t+1} = (I - K_{t+1} H)(F \Sigma_t F^T + \Sigma_x),$$
where
$$K_{t+1} = (F \Sigma_t F^T + \Sigma_x) H^T \left( H (F \Sigma_t F^T + \Sigma_x) H^T + \Sigma_z \right)^{-1}$$
is the Kalman gain.
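These equations transcribe directly into NumPy. A minimal sketch (matrix names follow the slide; a production implementation would use a linear solver rather than an explicit inverse for numerical robustness):

```python
import numpy as np

def kalman_step(mu, Sigma, z, F, H, Sigma_x, Sigma_z):
    """One Kalman filter cycle implementing the update equations above."""
    # Predicted mean and covariance: F mu_t and F Sigma_t F^T + Sigma_x
    mu_pred = F @ mu
    P = F @ Sigma @ F.T + Sigma_x
    # Kalman gain K_{t+1}
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Sigma_z)
    # Updated mean and covariance
    mu_new = mu_pred + K @ (z - H @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ P
    return mu_new, Sigma_new

# Constant-velocity model in 1-D: state (position, velocity), position observed
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
mu, Sigma = kalman_step(np.array([0.0, 1.0]), np.eye(2),
                        z=np.array([1.2]), F=F, H=H,
                        Sigma_x=0.01 * np.eye(2), Sigma_z=np.array([[0.25]]))
```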

26 Example
[Figure: (a) Results of Kalman filtering for an object moving on the X-Y plane, showing the true trajectory (left to right), a series of noisy observations, and the trajectory estimated by Kalman filtering; variance in the position estimate is indicated by the ovals. (b) Results of Kalman smoothing for the same observation sequence.]

27 Limitations of Kalman Filters: Multi-modal state distributions
To predict the next position of the bird, we know that the bird will fly either to the left or to the right of the tree (obstacle). Using a Kalman filter, the predicted state distribution is forced to be a single Gaussian.

28 Limitations of Kalman Filters: Multi-modal state distributions
A bimodal distribution (a mixture of two Gaussians) is more appropriate.

29 Limitations of Kalman Filters: Multi-modal state distributions
Solution: the switching Kalman filter.
[Figure: two time slices of a switching Kalman filter, with a discrete switching variable $S_t$, continuous state $X_t$, and observation $Z_t$.]

30 Dynamic Bayesian Networks (DBN)
The temporal models that we have seen so far (Markov chains, Hidden Markov Models, Kalman filters) are special cases of Dynamic Bayesian Networks (DBNs).
[Figure: a chain of states $X_0, X_1, \ldots, X_k, \ldots, X_t$ with evidence nodes $E_1, \ldots, E_k, \ldots, E_t$.]
A Dynamic Bayesian Network is a Bayesian Network where nodes correspond to variables at different times.

31 Rocket Altitude Example
[Figure: Dynamic Bayesian Network for a linear dynamical system with position $X_t$, velocity $\dot{X}_t$, and position measurement $Z_t$, shown over two time slices $t$ and $t+1$.]

32 Example of a Dynamic Bayesian Network
A Dynamic Bayesian Network is a Bayesian Network where nodes correspond to variables at different times.
This technique was introduced for fall monitoring, walking, and fall diagnosis. Given evidence from sensor observations, the model outputs beliefs about the current walking status and makes predictions regarding future falls. This system yields a good performance for sparsely received observations in time and can be used for forecasting in such situations.
DBNs with uniform structure: this group contains networks that have an identical structure for every time slice and identical temporal conditional dependencies (arcs between time slices). This means that the conditional dependencies and states in one time slice are represented as a static BN. A DBN is built over these BNs by multiplying them for each time slice and by adding arcs between states (nodes in the BN) from two consecutive time slices if they are temporally correlated. This is depicted in the figure.
[Figure: extending a BN toward a DBN; a static BN is replicated for time slices t and t+1 and connected by temporal dependencies. From V. Mihajlovic and M. Petkovic, "Dynamic Bayesian Networks: A State of the Art".]

33 Example of a Dynamic Bayesian Network
Example: tracking the position of a mobile robot.
Hidden variables: $X_t$, the true position at time $t$; $\dot{X}_t$, the true velocity at time $t$; $Battery_t$, the true battery charge at time $t$.
Observations (evidence): $Z_t$, the GPS-provided position at time $t$; $BMeter_t$, the measured battery charge at time $t$.
[Figure: two time slices of the Dynamic Bayesian Network, with nodes $Battery_0, X_0, \dot{X}_0$ in the first slice and $Battery_1, X_1, \dot{X}_1, BMeter_1, Z_1$ in the second.]

34 Exact Inference in Dynamic Bayesian Networks
If we unroll a Dynamic Bayesian Network for $T$ time-steps (i.e., we graphically represent all the variables from time 1 until $T$), we obtain a large ordinary Bayesian Network. Therefore, any inference algorithm for Bayesian Networks can also be used for Dynamic Bayesian Networks. For example, we can use the variable elimination algorithm.
A recursive forward procedure can be used to implement the variable elimination algorithm, since we know that a variable at time $t$ cannot be the parent (cause) of anything at time $k < t$. Therefore, we do not have to unroll the Dynamic Bayesian Network.

35 Approximate Inference in Dynamic Bayesian Networks
We can unroll the Dynamic Bayesian Network for $T$ time-steps and use likelihood weighting. Remember: in likelihood weighting, we generate samples, each sample is an assignment of values to all variables, and we weight each sample by its likelihood.
Problem: the size of each sample would be (number of variables in one time-slice) $\times\, T$.
Solution: similarly to the recursive forward procedure, generate samples at each time-step and erase the previous ones.

36 Approximate Inference in Dynamic Bayesian Networks: Particle Filtering
Particles: a particle is a sample, i.e., an assignment of values to all the variables in a particular time-step.

37 Approximate Inference in Dynamic Bayesian Networks: Particle Filtering
Particle Filtering: first, a population of $N$ initial-state samples is created by sampling from the prior distribution $P(X_0)$. Then the update cycle is repeated for each time step:
1. Each sample is propagated forward by sampling the next state value $x_{t+1}$ given the current value $x_t$ for the sample, based on the transition model $P(X_{t+1} \mid x_t)$.

38 Approximate Inference in Dynamic Bayesian Networks: Particle Filtering
Particle Filtering: first, a population of $N$ initial-state samples is created by sampling from the prior distribution $P(X_0)$. Then the update cycle is repeated for each time step:
1. Each sample is propagated forward by sampling the next state value $x_{t+1}$ given the current value $x_t$ for the sample, based on the transition model $P(X_{t+1} \mid x_t)$.
2. Each sample is weighted by the likelihood it assigns to the new evidence, $P(e_{t+1} \mid x_{t+1})$.

39 Approximate Inference in Dynamic Bayesian Networks: Particle Filtering
Particle Filtering: first, a population of $N$ initial-state samples is created by sampling from the prior distribution $P(X_0)$. Then the update cycle is repeated for each time step:
1. Each sample is propagated forward by sampling the next state value $x_{t+1}$ given the current value $x_t$ for the sample, based on the transition model $P(X_{t+1} \mid x_t)$.
2. Each sample is weighted by the likelihood it assigns to the new evidence, $P(e_{t+1} \mid x_{t+1})$.
3. The population is resampled to generate a new population of $N$ samples. Each new sample is selected from the current population; the probability that a particular sample is selected is proportional to its weight. The new samples are unweighted.

40 Particle Filtering
[Figure: the particle filtering update cycle for the umbrella DBN with $N = 10$, with states $Rain_t \in \{true, false\}$, through the (a) propagate, (b) weight, and (c) resample phases.]
The figure shows the sample populations of each state. Each sample is weighted by its likelihood for the observation, as indicated by the size of the circles.
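A compact sketch of this propagate/weight/resample cycle in NumPy, with the 1-D rocket model plugged in as the transition and sensor models (all names and numbers are illustrative):

```python
import numpy as np

def particle_filter_step(particles, z, transition_sample, likelihood, rng):
    """One particle filtering cycle: (a) propagate, (b) weight, (c) resample."""
    particles = transition_sample(particles, rng)   # (a) sample from P(X_{t+1} | x_t)
    weights = likelihood(z, particles)              # (b) weight by P(e_{t+1} | x_{t+1})
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]                           # (c) resampled, unweighted population

rng = np.random.default_rng(0)
v, sigma_x, sigma_z = 2.0, 1.0, 0.5
trans = lambda x, rng: x + v + rng.normal(0.0, sigma_x, size=x.shape)
lik = lambda z, x: np.exp(-0.5 * ((z - x) / sigma_z) ** 2)   # unnormalized Gaussian
particles = rng.normal(10.0, 3.0, size=1000)                 # N samples from P(X_0)
particles = particle_filter_step(particles, z=12.1,
                                 transition_sample=trans, likelihood=lik, rng=rng)
```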
