Lecture 4: Extended Kalman Filter and Statistically Linearized Filter

1 Lecture 4: Extended Kalman Filter and Statistically Linearized Filter
Department of Biomedical Engineering and Computational Science
Aalto University
February 17, 2011

2 Contents
1. Overview of EKF
2. Linear Approximations of Non-Linear Transforms
3. Extended Kalman Filter
4. Properties of Extended Kalman Filter
5. Statistically Linearized Filter
6. Properties of Statistically Linearized Filter
7. Summary

3 EKF Filtering Model
The basic EKF filtering model is of the form:
$$x_k = f(x_{k-1}) + q_{k-1}$$
$$y_k = h(x_k) + r_k,$$
where
- $x_k \in \mathbb{R}^n$ is the state
- $y_k \in \mathbb{R}^m$ is the measurement
- $q_{k-1} \sim N(0, Q_{k-1})$ is the Gaussian process noise
- $r_k \sim N(0, R_k)$ is the Gaussian measurement noise
- $f(\cdot)$ is the dynamic model function
- $h(\cdot)$ is the measurement model function.

4 Bayesian Optimal Filtering Equations
The EKF model is clearly a special case of probabilistic state space models with
$$p(x_k \mid x_{k-1}) = N(x_k \mid f(x_{k-1}), Q_{k-1})$$
$$p(y_k \mid x_k) = N(y_k \mid h(x_k), R_k)$$
Recall the formal optimal filtering solution:
$$p(x_k \mid y_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{1:k-1})\, dx_{k-1}$$
$$p(x_k \mid y_{1:k}) = \frac{1}{Z_k}\, p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})$$
There is no closed-form solution for non-linear $f$ and $h$.

5 The Idea of Extended Kalman Filter
In the EKF, the non-linear functions are linearized as follows:
$$f(x) \approx f(m) + F_x(m)\,(x - m)$$
$$h(x) \approx h(m) + H_x(m)\,(x - m)$$
where $x \sim N(m, P)$, and $F_x$, $H_x$ are the Jacobian matrices of $f$ and $h$, respectively.
Only the first terms in the linearization contribute to the approximate means of the functions $f$ and $h$. The second term has zero mean and defines the approximate covariances of the functions.
Let's take a closer look at transformations of this kind.

6 Linear Approximations of Non-Linear Transforms [1/4]
Consider the transformation of $x$ into $y$:
$$x \sim N(m, P), \qquad y = g(x).$$
The probability density of $y$ is now non-Gaussian:
$$p(y) = |J(y)|\, N(g^{-1}(y) \mid m, P).$$
Taylor series expansion of $g$ at the mean $m$:
$$g(x) = g(m + \delta x) = g(m) + G_x(m)\,\delta x + \sum_i \tfrac{1}{2}\, \delta x^T G^{(i)}_{xx}(m)\, \delta x\, e_i + \ldots$$
where $\delta x = x - m$.

7 Linear Approximations of Non-Linear Transforms [2/4]
First order, that is, linear approximation:
$$g(x) \approx g(m) + G_x(m)\,\delta x$$
Taking expectations on both sides gives an approximation of the mean:
$$E[g(x)] \approx g(m)$$
For the covariance we get the approximation:
$$\mathrm{Cov}[g(x)] = E\big[(g(x) - E[g(x)])\,(g(x) - E[g(x)])^T\big] \approx E\big[(g(x) - g(m))\,(g(x) - g(m))^T\big] \approx G_x(m)\, P\, G_x^T(m)$$
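
To make the quality of this first-order approximation concrete, here is a minimal NumPy sketch (our own example, not from the slides) that compares the linearized mean and covariance of $g(x) = \sin(x)$ against a Monte Carlo estimate for a scalar Gaussian input:

```python
# Minimal sketch (assumed example): first-order approximation of the mean and
# covariance of g(x) = sin(x) for x ~ N(m, P), compared to Monte Carlo.
import numpy as np

m, P = 0.5, 0.3
g, G = np.sin, np.cos           # g and its Jacobian (here just the derivative)

mean_lin = g(m)                 # E[g(x)] ~ g(m)
cov_lin = G(m) * P * G(m)       # Cov[g(x)] ~ G_x(m) P G_x(m)^T

x = np.random.default_rng(0).normal(m, np.sqrt(P), 200_000)
print(mean_lin, np.mean(g(x)))  # linearized mean misses the exp(-P/2) shrinkage
print(cov_lin, np.var(g(x)))
```

The mismatch in the mean is exactly the effect that statistical linearization (later in this lecture) corrects: for this $g$, the true mean is $\sin(m)\exp(-P/2)$.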

8 Linear Approximations of Non-Linear Transforms [3/4]
In the EKF we will need the joint covariance of $x$ and $g(x) + q$, where $q \sim N(0, Q)$.
Consider the pair of transformations
$$x \sim N(m, P), \quad q \sim N(0, Q), \qquad y_1 = x, \quad y_2 = g(x) + q.$$
Keeping $q$ fixed and applying the linear approximation gives
$$E\begin{bmatrix} x \\ g(x) + q \end{bmatrix} \approx \begin{pmatrix} m \\ g(m) + q \end{pmatrix}$$
$$\mathrm{Cov}\begin{bmatrix} x \\ g(x) + q \end{bmatrix} \approx \begin{pmatrix} P & P\, G_x^T(m) \\ G_x(m)\, P & G_x(m)\, P\, G_x^T(m) \end{pmatrix}$$

9 Linear Approximations of Non-Linear Transforms [4/4]
Taking the expectation w.r.t. the random variable $q$ just adds $Q$ to the lower right block of the covariance, and thus we get:

Linear Approximation of Non-Linear Transform
The linear Gaussian approximation to the joint distribution of $x$ and $y = g(x) + q$, where $x \sim N(m, P)$ and $q \sim N(0, Q)$, is
$$\begin{pmatrix} x \\ y \end{pmatrix} \sim N\left( \begin{pmatrix} m \\ \mu_L \end{pmatrix}, \begin{pmatrix} P & C_L \\ C_L^T & S_L \end{pmatrix} \right),$$
where
$$\mu_L = g(m), \qquad S_L = G_x(m)\, P\, G_x^T(m) + Q, \qquad C_L = P\, G_x^T(m).$$
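
As a small illustration, the boxed result translates directly into code; the following sketch (function and argument names are our own) computes $\mu_L$, $S_L$ and $C_L$ for a given $g$ and its Jacobian:

```python
# Sketch of the linear Gaussian approximation of the joint distribution of
# x ~ N(m, P) and y = g(x) + q, q ~ N(0, Q). Naming is our own.
import numpy as np

def linear_approx_joint(g, G_x, m, P, Q):
    """Return (mu_L, S_L, C_L) of the boxed approximation."""
    Gm = G_x(m)                 # Jacobian evaluated at the mean
    mu_L = g(m)                 # mean of y
    S_L = Gm @ P @ Gm.T + Q     # covariance of y
    C_L = P @ Gm.T              # cross-covariance of x and y
    return mu_L, S_L, C_L
```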

10 Derivation of EKF [1/4]
Assume that the filtering distribution of the previous step is Gaussian:
$$p(x_{k-1} \mid y_{1:k-1}) \approx N(x_{k-1} \mid m_{k-1}, P_{k-1})$$
The joint distribution of $x_{k-1}$ and $x_k = f(x_{k-1}) + q_{k-1}$ is non-Gaussian, but can be approximated linearly as
$$p(x_{k-1}, x_k \mid y_{1:k-1}) \approx N\left( \begin{bmatrix} x_{k-1} \\ x_k \end{bmatrix} \,\Big|\, m', P' \right),$$
where
$$m' = \begin{pmatrix} m_{k-1} \\ f(m_{k-1}) \end{pmatrix}, \qquad P' = \begin{pmatrix} P_{k-1} & P_{k-1}\, F_x^T(m_{k-1}) \\ F_x(m_{k-1})\, P_{k-1} & F_x(m_{k-1})\, P_{k-1}\, F_x^T(m_{k-1}) + Q_{k-1} \end{pmatrix}.$$

11 Derivation of EKF [2/4]
Recall that if $x$ and $y$ have the joint Gaussian probability density
$$\begin{pmatrix} x \\ y \end{pmatrix} \sim N\left( \begin{pmatrix} a \\ b \end{pmatrix}, \begin{pmatrix} A & C \\ C^T & B \end{pmatrix} \right),$$
then $y \sim N(b, B)$.
Thus, the approximate predicted distribution of $x_k$ given $y_{1:k-1}$ is Gaussian with moments
$$m_k^- = f(m_{k-1})$$
$$P_k^- = F_x(m_{k-1})\, P_{k-1}\, F_x^T(m_{k-1}) + Q_{k-1}$$

12 Derivation of EKF [3/4]
The joint distribution of $x_k$ and $y_k = h(x_k) + r_k$ is also non-Gaussian, but by linear approximation we get
$$p(x_k, y_k \mid y_{1:k-1}) \approx N\left( \begin{bmatrix} x_k \\ y_k \end{bmatrix} \,\Big|\, m'', P'' \right),$$
where
$$m'' = \begin{pmatrix} m_k^- \\ h(m_k^-) \end{pmatrix}, \qquad P'' = \begin{pmatrix} P_k^- & P_k^-\, H_x^T(m_k^-) \\ H_x(m_k^-)\, P_k^- & H_x(m_k^-)\, P_k^-\, H_x^T(m_k^-) + R_k \end{pmatrix}.$$

13 Derivation of EKF [4/4]
Recall that if
$$\begin{pmatrix} x \\ y \end{pmatrix} \sim N\left( \begin{pmatrix} a \\ b \end{pmatrix}, \begin{pmatrix} A & C \\ C^T & B \end{pmatrix} \right),$$
then
$$x \mid y \sim N(a + C\, B^{-1}(y - b),\; A - C\, B^{-1} C^T).$$
Thus we get
$$p(x_k \mid y_k, y_{1:k-1}) \approx N(x_k \mid m_k, P_k),$$
where
$$m_k = m_k^- + P_k^-\, H_x^T (H_x\, P_k^-\, H_x^T + R_k)^{-1} [y_k - h(m_k^-)]$$
$$P_k = P_k^- - P_k^-\, H_x^T (H_x\, P_k^-\, H_x^T + R_k)^{-1} H_x\, P_k^-$$
(the Jacobian $H_x$ is evaluated at $m_k^-$).

14 EKF Equations

Extended Kalman filter
Prediction:
$$m_k^- = f(m_{k-1})$$
$$P_k^- = F_x(m_{k-1})\, P_{k-1}\, F_x^T(m_{k-1}) + Q_{k-1}.$$
Update:
$$v_k = y_k - h(m_k^-)$$
$$S_k = H_x(m_k^-)\, P_k^-\, H_x^T(m_k^-) + R_k$$
$$K_k = P_k^-\, H_x^T(m_k^-)\, S_k^{-1}$$
$$m_k = m_k^- + K_k\, v_k$$
$$P_k = P_k^- - K_k\, S_k\, K_k^T.$$
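
The equations above map one-to-one onto code. A minimal NumPy sketch (our own function names; `F_x` and `H_x` are callables returning the Jacobians):

```python
# Minimal EKF prediction and update steps, directly from the equations above.
import numpy as np

def ekf_predict(m, P, f, F_x, Q):
    F = F_x(m)
    m_pred = f(m)                        # m_k^- = f(m_{k-1})
    P_pred = F @ P @ F.T + Q             # P_k^- = F P F^T + Q
    return m_pred, P_pred

def ekf_update(m_pred, P_pred, y, h, H_x, R):
    H = H_x(m_pred)
    v = y - h(m_pred)                    # innovation v_k
    S = H @ P_pred @ H.T + R             # innovation covariance S_k
    K = P_pred @ H.T @ np.linalg.inv(S)  # gain K_k
    m = m_pred + K @ v
    P = P_pred - K @ S @ K.T
    return m, P
```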

15 EKF Example [1/2]
Pendulum with mass $m = 1$, pole length $L = 1$ and random force $w(t)$:
$$\frac{d^2\theta}{dt^2} = -g \sin(\theta) + w(t).$$
In state space form:
$$\frac{d}{dt} \begin{pmatrix} \theta \\ d\theta/dt \end{pmatrix} = \begin{pmatrix} d\theta/dt \\ -g \sin(\theta) \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} w(t)$$
Assume that we measure the $x$-position:
$$y_k = \sin(\theta(t_k)) + r_k.$$

16 EKF Example [2/2]
If we define the state as $x = (\theta, d\theta/dt)$, by Euler integration with time step $\Delta t$ we get
$$\begin{pmatrix} x_k^1 \\ x_k^2 \end{pmatrix} = \underbrace{\begin{pmatrix} x_{k-1}^1 + x_{k-1}^2\, \Delta t \\ x_{k-1}^2 - g \sin(x_{k-1}^1)\, \Delta t \end{pmatrix}}_{f(x_{k-1})} + q_{k-1}$$
$$y_k = \underbrace{\sin(x_k^1)}_{h(x_k)} + r_k.$$
The required Jacobian matrices are:
$$F_x(x) = \begin{pmatrix} 1 & \Delta t \\ -g \cos(x^1)\, \Delta t & 1 \end{pmatrix}, \qquad H_x(x) = \begin{pmatrix} \cos(x^1) & 0 \end{pmatrix}$$
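
Plugging this pendulum model into the `ekf_predict`/`ekf_update` sketch above gives a complete filter step; here $g = 9.81$, $\Delta t = 0.01$, the noise levels and the measurement value are assumed for illustration:

```python
# The pendulum model of this example, in the form used by the EKF sketch
# above (g = 9.81, dt = 0.01 and the noise levels are assumed values).
import numpy as np

g_const, dt = 9.81, 0.01

f   = lambda x: np.array([x[0] + x[1] * dt,
                          x[1] - g_const * np.sin(x[0]) * dt])
h   = lambda x: np.array([np.sin(x[0])])
F_x = lambda x: np.array([[1.0, dt],
                          [-g_const * np.cos(x[0]) * dt, 1.0]])
H_x = lambda x: np.array([[np.cos(x[0]), 0.0]])

m, P = np.array([0.1, 0.0]), 0.1 * np.eye(2)
Q, R = 0.01 * np.eye(2), np.array([[0.05]])

m_pred, P_pred = ekf_predict(m, P, f, F_x, Q)        # prediction step
m, P = ekf_update(m_pred, P_pred, np.array([0.12]),  # update with a
                  h, H_x, R)                         # hypothetical measurement
```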

17 Advantages of EKF
- Almost the same as the basic Kalman filter, easy to use.
- Intuitive, engineering way of constructing the approximations.
- Works very well in practical estimation problems.
- Computationally efficient.
- Theoretical stability results are available.

18 Limitations of EKF
- Does not work well with considerable non-linearities.
- Only Gaussian noise processes are allowed.
- The measurement model and dynamic model functions need to be differentiable.
- Computation and programming of the Jacobian matrices can be quite error prone.

19 The Idea of Statistically Linearized Filter
In the SLF, the non-linear functions are statistically linearized as follows:
$$f(x) \approx b_f + A_f\,(x - m)$$
$$h(x) \approx b_h + A_h\,(x - m)$$
where $x \sim N(m, P)$.
The parameters $b_f, A_f$ and $b_h, A_h$ are chosen to minimize the mean squared errors of the form
$$\mathrm{MSE}_f(b_f, A_f) = E[\| f(x) - b_f - A_f\, \delta x \|^2]$$
$$\mathrm{MSE}_h(b_h, A_h) = E[\| h(x) - b_h - A_h\, \delta x \|^2]$$
where $\delta x = x - m$.
These are the describing functions of the non-linearities with Gaussian input.

20 Statistical Linearization of Non-Linear Transforms [1/4]
Again, consider the transformations
$$x \sim N(m, P), \qquad y = g(x).$$
Form a linear approximation to the transformation:
$$g(x) \approx b + A\, \delta x,$$
where $\delta x = x - m$.
Instead of using the Taylor series approximation, we minimize the mean squared error:
$$\mathrm{MSE}(b, A) = E[(g(x) - b - A\, \delta x)^T (g(x) - b - A\, \delta x)]$$

21 Statistical Linearization of Non-Linear Transforms [2/4]
Expanding the MSE expression gives:
$$\mathrm{MSE}(b, A) = E\big[\, g^T(x)\, g(x) - 2\, g^T(x)\, b - 2\, g^T(x)\, A\, \delta x + b^T b \,\big] + \mathrm{tr}\{A\, P\, A^T\},$$
where we used $E[2\, b^T A\, \delta x] = 0$ (since $E[\delta x] = 0$) and $E[\delta x^T A^T A\, \delta x] = \mathrm{tr}\{A\, P\, A^T\}$.
The derivatives are:
$$\frac{\partial\, \mathrm{MSE}(b, A)}{\partial b} = -2\, E[g(x)] + 2\, b$$
$$\frac{\partial\, \mathrm{MSE}(b, A)}{\partial A} = -2\, E[g(x)\, \delta x^T] + 2\, A\, P$$

22 Statistical Linearization of Non-Linear Transforms [3/4]
Setting the derivatives with respect to $b$ and $A$ to zero gives
$$b = E[g(x)], \qquad A = E[g(x)\, \delta x^T]\, P^{-1}.$$
The resulting moment approximations are
$$E[g(x)] \approx b = E[g(x)]$$
$$\mathrm{Cov}[g(x)] \approx A\, P\, A^T = E[g(x)\, \delta x^T]\, P^{-1}\, E[g(x)\, \delta x^T]^T.$$
The mean is thus exact, but the covariance is only an approximation.
The expectations have to be calculated in closed form!
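
The formulas $b = E[g(x)]$ and $A = E[g(x)\,\delta x^T]\, P^{-1}$ can be sanity-checked by Monte Carlo whenever closed forms are known. A sketch (our own example) for $g(x) = \sin(x)$:

```python
# Monte Carlo check (our own example) of b = E[g(x)] and
# A = E[g(x) dx^T] P^{-1} for g(x) = sin(x), x ~ N(m, P).
import numpy as np

rng = np.random.default_rng(1)
m, P = np.array([0.5]), np.array([[0.3]])
x = rng.multivariate_normal(m, P, size=500_000)
gx, dx = np.sin(x), x - m

b = gx.mean(axis=0)                                            # E[g(x)]
A = (gx[:, :, None] * dx[:, None, :]).mean(axis=0) @ np.linalg.inv(P)

# Closed forms for Gaussian input (used again in the pendulum example below):
# E[sin(x)] = sin(m) exp(-P/2),  A = cos(m) exp(-P/2).
print(b, np.sin(m) * np.exp(-P[0, 0] / 2))
print(A, np.cos(m) * np.exp(-P[0, 0] / 2))
```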

23 Statistical Linearization of Non-Linear Transforms [4/4]

Statistical linearization
The statistically linearized Gaussian approximation to the joint distribution of $x$ and $y = g(x) + q$, where $x \sim N(m, P)$ and $q \sim N(0, Q)$, is given as
$$\begin{pmatrix} x \\ y \end{pmatrix} \sim N\left( \begin{pmatrix} m \\ \mu_S \end{pmatrix}, \begin{pmatrix} P & C_S \\ C_S^T & S_S \end{pmatrix} \right),$$
where
$$\mu_S = E[g(x)], \qquad S_S = E[g(x)\, \delta x^T]\, P^{-1}\, E[g(x)\, \delta x^T]^T + Q, \qquad C_S = E[g(x)\, \delta x^T]^T.$$

24 Statistically Linearized Filter [1/3]
The statistically linearized filter (SLF) can be derived in the same manner as the EKF, but statistical linearization is used instead of the Taylor series based linearization.
It requires closed form computation of the following expectations for arbitrary $x \sim N(m, P)$:
$$E[f(x)], \quad E[f(x)\, \delta x^T], \quad E[h(x)], \quad E[h(x)\, \delta x^T],$$
where $\delta x = x - m$.

25 Statistically Linearized Filter [2/3]

Statistically linearized filter
Prediction (expectations w.r.t. $x_{k-1} \sim N(m_{k-1}, P_{k-1})$):
$$m_k^- = E[f(x_{k-1})]$$
$$P_k^- = E[f(x_{k-1})\, \delta x_{k-1}^T]\, P_{k-1}^{-1}\, E[f(x_{k-1})\, \delta x_{k-1}^T]^T + Q_{k-1},$$
Update (expectations w.r.t. $x_k \sim N(m_k^-, P_k^-)$):
$$v_k = y_k - E[h(x_k)]$$
$$S_k = E[h(x_k)\, \delta x_k^T]\, (P_k^-)^{-1}\, E[h(x_k)\, \delta x_k^T]^T + R_k$$
$$K_k = E[h(x_k)\, \delta x_k^T]^T\, S_k^{-1}$$
$$m_k = m_k^- + K_k\, v_k$$
$$P_k = P_k^- - K_k\, S_k\, K_k^T.$$
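
In code, the SLF has the same skeleton as the EKF sketch earlier, with the Jacobian-based terms replaced by the closed-form expectations. A minimal sketch (our naming; `Ef`, `Efdx`, `Eh`, `Ehdx` are user-supplied functions of $(m, P)$):

```python
# Minimal SLF prediction and update steps from the equations above.
import numpy as np

def slf_predict(m, P, Ef, Efdx, Q):
    Cf = Efdx(m, P)                            # E[f(x) dx^T]
    m_pred = Ef(m, P)
    P_pred = Cf @ np.linalg.inv(P) @ Cf.T + Q
    return m_pred, P_pred

def slf_update(m_pred, P_pred, y, Eh, Ehdx, R):
    Ch = Ehdx(m_pred, P_pred)                  # E[h(x) dx^T]
    v = y - Eh(m_pred, P_pred)
    S = Ch @ np.linalg.inv(P_pred) @ Ch.T + R
    K = Ch.T @ np.linalg.inv(S)                # K_k = E[h dx^T]^T S^{-1}
    m = m_pred + K @ v
    P = P_pred - K @ S @ K.T
    return m, P
```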

26 Statistically Linearized Filter [3/3]
If the function $g(x)$ is differentiable, we have
$$E[g(x)\,(x - m)^T] = E[G_x(x)]\, P,$$
where $G_x(x)$ is the Jacobian of $g(x)$ and $x \sim N(m, P)$.
In practice, we can use the following property for computation of the expectation of the Jacobian:
$$\mu(m) = E[g(x)] \quad \Longrightarrow \quad \frac{\partial \mu(m)}{\partial m} = E[G_x(x)].$$
The resulting filter resembles the EKF very closely.
This is related to replacing the Taylor series with a Fourier-Hermite series in the approximation.
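
A quick numerical check (our own, not from the slides) of the identity $\partial \mu(m)/\partial m = E[G_x(x)]$ for the scalar case $g(x) = \sin(x)$, where $\mu(m) = \sin(m)\exp(-P/2)$:

```python
# Numerical check of d mu / d m = E[G_x(x)] for g(x) = sin(x), x ~ N(m, P).
import numpy as np

m, P = 0.5, 0.3
x = np.random.default_rng(2).normal(m, np.sqrt(P), 500_000)

dmu_dm = np.cos(m) * np.exp(-P / 2)   # derivative of mu(m) = sin(m) exp(-P/2)
E_Gx = np.mean(np.cos(x))             # Monte Carlo estimate of E[G_x(x)]
print(dmu_dm, E_Gx)                   # the two should agree to a few decimals
```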

27 Statistically Linearized Filter: Example [1/2]
Recall the discretized pendulum model
$$\begin{pmatrix} x_k^1 \\ x_k^2 \end{pmatrix} = \underbrace{\begin{pmatrix} x_{k-1}^1 + x_{k-1}^2\, \Delta t \\ x_{k-1}^2 - g \sin(x_{k-1}^1)\, \Delta t \end{pmatrix}}_{f(x_{k-1})} + q_{k-1}$$
$$y_k = \underbrace{\sin(x_k^1)}_{h(x_k)} + r_k.$$
If $x \sim N(m, P)$, by brute-force calculation we get
$$E[f(x)] = \begin{pmatrix} m_1 + m_2\, \Delta t \\ m_2 - g \sin(m_1) \exp(-P_{11}/2)\, \Delta t \end{pmatrix}$$
$$E[h(x)] = \sin(m_1) \exp(-P_{11}/2)$$

28 Statistically Linearized Filter: Example [2/2]
The required cross-correlation for the prediction step is
$$E[f(x)\,(x - m)^T] = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix},$$
where
$$c_{11} = P_{11} + \Delta t\, P_{12}$$
$$c_{12} = P_{12} + \Delta t\, P_{22}$$
$$c_{21} = P_{12} - g\, \Delta t \cos(m_1)\, P_{11} \exp(-P_{11}/2)$$
$$c_{22} = P_{22} - g\, \Delta t \cos(m_1)\, P_{12} \exp(-P_{11}/2)$$
The required term for the update step is
$$E[h(x)\,(x - m)^T] = \begin{pmatrix} \cos(m_1)\, P_{11} \exp(-P_{11}/2) & \cos(m_1)\, P_{12} \exp(-P_{11}/2) \end{pmatrix}$$
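
These closed forms plug directly into the `slf_predict`/`slf_update` sketch above; a transcription (again with assumed $g = 9.81$, $\Delta t = 0.01$):

```python
# Closed-form pendulum expectations from these slides, in the form expected by
# the slf_predict/slf_update sketch above (g = 9.81, dt = 0.01 assumed).
import numpy as np

g_const, dt = 9.81, 0.01

Ef = lambda m, P: np.array([
    m[0] + m[1] * dt,
    m[1] - g_const * np.sin(m[0]) * np.exp(-P[0, 0] / 2) * dt])

Efdx = lambda m, P: np.array([
    [P[0, 0] + dt * P[0, 1],
     P[0, 1] + dt * P[1, 1]],
    [P[0, 1] - g_const * dt * np.cos(m[0]) * P[0, 0] * np.exp(-P[0, 0] / 2),
     P[1, 1] - g_const * dt * np.cos(m[0]) * P[0, 1] * np.exp(-P[0, 0] / 2)]])

Eh = lambda m, P: np.array([np.sin(m[0]) * np.exp(-P[0, 0] / 2)])

Ehdx = lambda m, P: np.array([[np.cos(m[0]) * P[0, 0] * np.exp(-P[0, 0] / 2),
                               np.cos(m[0]) * P[0, 1] * np.exp(-P[0, 0] / 2)]])
```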

29 Advantages of SLF
- Global approximation: the linearization is based on a range of function values.
- Often more accurate and more robust than the EKF.
- No differentiability or continuity requirements for the measurement and dynamic models.
- Jacobian matrices do not need to be computed.
- Often computationally efficient.

30 Limitations of SLF
- Works only with Gaussian noise terms.
- The expected values of the non-linear functions have to be computed in closed form.
- Computation of the expected values is hard and error prone.
- If the expected values cannot be computed in closed form, there is not much we can do.

31 Summary
EKF and SLF can be applied to filtering models of the form
$$x_k = f(x_{k-1}) + q_{k-1}$$
$$y_k = h(x_k) + r_k.$$
The EKF is based on Taylor series expansions of the non-linear functions $f$ and $h$:
- Advantages: simple, intuitive, computationally efficient.
- Disadvantages: local approximation, differentiability requirements, only for Gaussian noises.
The SLF is based on statistical linearization of the non-linearities:
- Advantages: global approximation, no differentiability requirements, computationally efficient.
- Disadvantages: closed form computation of expectations, only for Gaussian noises.
