Miscellaneous: Regarding Reading Materials
Reading materials will be provided as needed. If there is no assigned reading, it means I think the material from class is sufficient and should be enough for you to do your homework. But this is a 4xx course, so it is not about directly applying formulas; careful reasoning is required, and it may require you to dig out your old textbooks (e.g., calculus). Again, ask questions (if you have any), and ask them early.

CS 460/560 Introduction to Computational Robotics, Fall 2017, Rutgers University
Lecture 06: Kalman Filter Intro
Instructor: Jingjin Yu

Outline
- Uncertainty
- Models of dynamical systems
- Bayesian filtering: the concept
- An illustrative example
- Applications of Kalman filters
- Derivation of the Kalman filter
- A 1D example

Uncertainty
An everyday experience: where exactly are we? E.g., in a classroom, we do not know exactly; our position is estimated. (We could get philosophical about this.)
A more accurate model: a dynamical system with state x (a function of time). Positions (i.e., values of x) are estimates, so we associate each location x with some probability. This gives us a probability distribution P(x). For a robot, P(x) changes as the robot moves around. The Kalman filter (and other Bayesian filters) tracks P(x) as x changes over time.

Modeling Dynamical Systems
A dynamical system (e.g., a car) is often modeled as $\dot{x} = f(x, u)$, where:
- $x$ is the state of the system; e.g., for a car, $x = (x_1, x_2, \theta)$.
- $\dot{x} = dx/dt$ is the time derivative of the state, i.e., the velocity of the system; for the car, $\dot{x} = (\dot{x}_1, \dot{x}_2, \dot{\theta})$.
- $u$ is the control input; e.g., for a real car, $u = (\theta, v)$, where $\theta$ is the front-wheel bearing and $v$ is the forward speed (for a two-wheel drive, assuming no slipping). More generally, $u$ may be a speed, an acceleration, and so on.
- $f$ is the system evolution function: it specifies how $x$ and $u$ determine $\dot{x}$.
In discrete settings, the model is often written as $x_t = f(x_{t-1}, u_{t-1})$, which may be viewed as integrating the continuous model: $x_t = x_{t-1} + \int_{t-1}^{t} \dot{x}\,dt$. It is also commonly written as $x_k = f(x_{k-1}, u_{k-1})$.

Modeling Dynamical Systems, Continued
Examples:
- A car going at fixed (unit) speed along the $x_1$-axis: $\dot{x}_1 = 1$. In this case, $f(x, u) = 1$ is a constant.
- An accelerating car along the $x_1$-axis with acceleration $a$: $u = a$ and $\dot{x}_1 = at$, so $f(x, u) = at$ does not depend on $x$.
- A car going clockwise along the unit circle around the origin at unit speed: $\dot{x} = (\dot{x}_1, \dot{x}_2, \dot{\theta}) = (x_2, -x_1, -1)$, with initial condition $x_1 = 1$, $x_2 = 0$. The car keeps circling the unit circle at unit speed, so it takes $2\pi$ time to go around once (a worked simulation follows below).
Linear and non-linear systems:
- Linear systems: $f$ is a linear function, e.g., $\dot{x} = Ax + Bu$.
- Non-linear systems: $f$ is non-linear.
What to grasp from the last two slides? Dynamical systems may be modeled as described above; in particular, given $x_{k-1}$, $u_{k-1}$, and $f(x, u)$, we can predict $x_k$.
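To make the discrete view $x_k = x_{k-1} + \int \dot{x}\,dt$ concrete, here is a minimal Python sketch (my own illustration, not from the slides) that forward-Euler integrates the clockwise unit-circle model; the step size dt is an arbitrary choice.

```python
import numpy as np

def f(x):
    """Continuous-time dynamics of the clockwise unit-circle car:
    state x = (x1, x2, theta), with xdot = (x2, -x1, -1)."""
    return np.array([x[1], -x[0], -1.0])

dt = 0.001                      # integration step size (assumed)
x = np.array([1.0, 0.0, 0.0])   # initial condition: x1 = 1, x2 = 0

# Forward-Euler discretization: x_k = x_{k-1} + f(x_{k-1}) * dt
for _ in range(int(2 * np.pi / dt)):
    x = x + f(x) * dt

print(x[:2])   # after ~2*pi time units the car is back near (1, 0)
```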

Kalman Filter as a Bayesian Filter
The Kalman filter is a type of Bayesian filter over a hidden Markov model (diagram: hidden states $x_{k-1} \to x_k$ driven by controls $u_{k-1}$, with observations $z_{k-1}$ and $z_k$).
The $x_i$'s are the hidden (actual) system states; they cannot be known exactly. We can only observe $x_i$ using sensors, obtaining $z_i$.
The (discrete) process is modeled as a two-step iterative one (a small simulation sketch follows below):
- Noisy state change: $x_k = f(x_{k-1}, u_{k-1}) + \omega_{k-1}$
- Noisy measurement after the state change: $z_k = h(x_k) + \nu_k$
Here $\omega$ and $\nu$ are white noises; more details are coming up.
The data that we get are $u_0, z_1, u_1, z_2, u_2, z_3, \ldots$, and we want to provide $\hat{x}_k$ as an accurate estimate of $x_k$. This problem yields Kalman filters, particle filters, and so on.
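As a concrete (made-up) illustration of this two-step generative process, the sketch below simulates a scalar system with f(x, u) = x + u and h(x) = x; the control sequence and the noise variances q and r are assumed values, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
q, r = 0.01, 0.04            # process / measurement noise variances (assumed)

x = 0.0                      # hidden true state x_0
us, zs = [], []              # the data actually available to an estimator
for k in range(50):
    u = 0.1                                   # known control input u_{k-1}
    x = x + u + rng.normal(0, np.sqrt(q))     # noisy state change
    z = x + rng.normal(0, np.sqrt(r))         # noisy measurement of the new state
    us.append(u)
    zs.append(z)

# A filter only ever sees (us, zs); the true trajectory of x stays hidden.
```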

An Example
A hypothetical measurement of a variable $x$: mean 0.5, variance 0.01, over 200 sequential measurements.
(Figures: left, the raw measurements; right, the filtered result. Note that only the mean is shown in the second figure, not the variance.)
Important: the Kalman filter is not simple averaging! It has (limited) predictive power.

Applications
Numerous applications, for example:
- GPS: minimizes error in tracking position in altitude, latitude, and longitude, reducing the error from roughly 30 meters to less than 5 meters.
- Aircraft autopilot: the internal inertial guidance system accumulates errors over time; these are minimized with a 6D Kalman filter over yaw, pitch, roll, altitude, latitude, and longitude.
- Many, many other similar applications: missile guidance, radar, economic signals (e.g., stock time series), weather forecasting.

Kalman Filter in More Detail
The Kalman filter is a minimum mean square error (MMSE) estimator of the state $x \in \mathbb{R}^n$ of a discrete-time controlled process with a linear system equation and a linear observer under white noise.
Linear stochastic system: $x_k = A x_{k-1} + B u_{k-1} + \omega_{k-1}$, with $\omega_{k-1} \sim N(0, Q)$.  (1)
Linear observer (sensor): $z_k = H x_k + \nu_k$, with $\nu_k \sim N(0, R)$.  (2)
The noise terms $\omega$ and $\nu$ are unknown but mutually independent; their covariances $Q$ and $R$ are not known exactly either and must be estimated offline (see the tuning slide later), after which the filter treats them as given.
The Kalman filter provides an estimate of the true $x_k$ by minimizing the estimation error based on (1) and (2), and it performs this minimization in the MMSE sense.
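As one concrete instance of (1) and (2) (my own example, not from the slides), consider a 1D constant-velocity vehicle with commanded acceleration as input and a position-only sensor; all numeric values below are assumptions for illustration.

```python
import numpy as np

dt = 0.1                               # sampling period (assumed)

# State x = [position, velocity]; input u = commanded acceleration.
A = np.array([[1.0, dt],
              [0.0, 1.0]])             # state transition matrix
B = np.array([[0.5 * dt**2],
              [dt]])                   # how the acceleration input enters the state
H = np.array([[1.0, 0.0]])             # we only measure position

Q = 0.01 * np.eye(2)                   # process noise covariance (assumed)
R = np.array([[0.25]])                 # measurement noise covariance (assumed)

# One step of the model: x_k = A @ x_{k-1} + B @ u_{k-1} + w,  z_k = H @ x_k + v
```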

Input and Output of a Kalman Filter
A Kalman filter provides an estimate of $x_k$ under uncertainty; this is an iterative process.
In each iteration, the input is $\hat{x}_{k-1}$, $P_{k-1}$, $u_{k-1}$, $z_k$, $A$, $B$, $H$:
- $\hat{x}_{k-1}, P_{k-1}$: the estimated system state and covariance at time $k-1$; $\hat{x}_0$ and $P_0$ are guessed.
- $u_{k-1}$: the system input at time $k-1$, e.g., how hard the gas pedal is pressed. Here $f(x_{k-1}, u_{k-1}) = A x_{k-1} + B u_{k-1}$, with $A$ and $B$ known.
- $z_k$: the observation of $x_k$ (with $z_k = H x_k + \nu_k$), where $H$ is known.
The output is $\hat{x}_k, P_k$: the estimated system state and covariance at time $k$.
(Illustration: the chain of estimates $\hat{x}_{k-1} \to \hat{x}_k$, driven by the control $u_{k-1}$ and corrected by the measurement $z_k$.)
The Kalman filter computes a distribution; the estimate is not a single value! For 1D, it is two values: a mean plus a variance. For dimension $n$, it is an $n$-vector and an $n \times n$ covariance matrix. The same applies to the other variables.

Deriving the MMSE (I)
The goal is to find the true state $x_k$, but recall this is not possible because $x_k$ is a hidden state.
A trick: $x, P, A, B, u, z$ are in general high dimensional, but they can be treated as 1D quantities when following the derivation.
In each iteration, a Kalman filter performs two updates.
Time update: $\hat{x}_k^- = A \hat{x}_{k-1} + B u_{k-1}$.
- The error of this step is $e_k^- \equiv x_k - \hat{x}_k^-$ (the a priori error); its covariance is $P_k^- = E[e_k^- (e_k^-)^T]$.
Measurement update: $\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)$.
- The term $z_k - H \hat{x}_k^-$ is called the measurement innovation; it gives us the new information in $z_k$ that is not already in $\hat{x}_k^-$.
- The error of this step is $e_k \equiv x_k - \hat{x}_k$ (the a posteriori error); its covariance is $P_k = E[e_k e_k^T]$.
The Kalman filter seeks the best $K_k$ to minimize $E[\|e_k\|^2]$; this is the same as minimizing the trace of $P_k$. Note that
$e_k \equiv x_k - \hat{x}_k = x_k - \hat{x}_k^- - K_k (z_k - H \hat{x}_k^-)$.

Deriving the MMSE (II)
$e_k = x_k - \hat{x}_k = x_k - \hat{x}_k^- - K_k (z_k - H \hat{x}_k^-)$
$\;\;\; = x_k - \hat{x}_k^- - K_k (z_k - H x_k + H x_k - H \hat{x}_k^-)$
$\;\;\; = x_k - \hat{x}_k^- - K_k H (x_k - \hat{x}_k^-) - K_k (z_k - H x_k)$
$\;\;\; = (I - K_k H)(x_k - \hat{x}_k^-) - K_k (z_k - H x_k)$
$\;\;\; = (I - K_k H)\, e_k^- - K_k \nu_k$
Therefore
$P_k = E[e_k e_k^T] = (I - K_k H)\, E[e_k^- (e_k^-)^T]\, (I - K_k H)^T + E[K_k \nu_k \nu_k^T K_k^T]$
$\;\;\; = (I - K_k H) P_k^- (I - K_k H)^T + K_k R K_k^T$.
To minimize the trace of $P_k$, set $\partial\, \mathrm{tr}(P_k) / \partial K_k = 0$. This yields the optimal Kalman gain $K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$.
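For completeness, here is the intermediate matrix-calculus step that the slide compresses (standard identities; my own expansion, not from the slides):

```latex
% Expand P_k and differentiate its trace with respect to K_k:
%   P_k = P_k^- - K_k H P_k^- - P_k^- H^T K_k^T + K_k (H P_k^- H^T + R) K_k^T
% Using d tr(K A)/dK = A^T and d tr(K A K^T)/dK = 2 K A (A symmetric):
\[
\frac{\partial\,\mathrm{tr}(P_k)}{\partial K_k}
  = -2\,(H P_k^-)^T + 2\,K_k \left(H P_k^- H^T + R\right) = 0
  \;\Longrightarrow\;
  K_k = P_k^- H^T \left(H P_k^- H^T + R\right)^{-1}.
\]
```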

Interpreting the Kalman Gain
Recall that the Kalman gain $K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$ is used for computing
$\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)$.
As $R \to 0$, the measurement becomes more accurate, and $K_k \to H^{-1}$; then $\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-) \to H^{-1} z_k$.
As $P_k^- \to 0$, the state propagation becomes more accurate, and $K_k \to 0$; then $\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-) \to \hat{x}_k^-$.
(A small numerical sanity check of these two limits follows below.)
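Here is the promised numerical sanity check of the two limiting cases, in a scalar setting (the numbers are arbitrary illustrations):

```python
def kalman_gain(P_prior, H, R):
    """Scalar Kalman gain K = P H / (H P H + R)."""
    return P_prior * H / (H * P_prior * H + R)

H = 2.0
print(kalman_gain(1.0, H, 1e-9))   # R -> 0:  K approaches 1/H = 0.5 (trust the sensor)
print(kalman_gain(1e-9, H, 1.0))   # P -> 0:  K approaches 0 (trust the prediction)
```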

Deriving the Iterative Update Algorithm
Recall that each step consists of two updates:
- Time update: $\hat{x}_k^- = A \hat{x}_{k-1} + B u_{k-1}$, with a priori error $e_k^- \equiv x_k - \hat{x}_k^-$ and covariance $P_k^- = E[e_k^- (e_k^-)^T]$.
- Measurement update: $\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)$, with a posteriori error $e_k \equiv x_k - \hat{x}_k$ and covariance $P_k = E[e_k e_k^T]$.
For the time update, $\hat{x}_k^- = A \hat{x}_{k-1} + B u_{k-1}$, so
$e_k^- \equiv x_k - \hat{x}_k^- = A x_{k-1} + B u_{k-1} + \omega_{k-1} - \hat{x}_k^- = A (x_{k-1} - \hat{x}_{k-1}) + \omega_{k-1} = A e_{k-1} + \omega_{k-1}$,
and therefore $P_k^- = E[e_k^- (e_k^-)^T] = A\, E[e_{k-1} e_{k-1}^T]\, A^T + Q = A P_{k-1} A^T + Q$.
For the measurement update, $\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)$, and we have already computed
$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$,
$P_k = (I - K_k H) P_k^- (I - K_k H)^T + K_k R K_k^T = (I - K_k H) P_k^-$.
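Putting the four equations together, here is a minimal NumPy sketch of one full Kalman filter iteration (my own illustration of the update equations above; it is not code from the course):

```python
import numpy as np

def kalman_step(x_est, P, u, z, A, B, H, Q, R):
    """One Kalman filter iteration: time update, then measurement update."""
    # Time update (prediction)
    x_prior = A @ x_est + B @ u                  # x_k^- = A x_{k-1} + B u_{k-1}
    P_prior = A @ P @ A.T + Q                    # P_k^- = A P_{k-1} A^T + Q

    # Measurement update (correction)
    S = H @ P_prior @ H.T + R                    # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)         # K_k = P_k^- H^T S^{-1}
    x_est = x_prior + K @ (z - H @ x_prior)      # x_k = x_k^- + K_k (z_k - H x_k^-)
    P = (np.eye(len(x_est)) - K @ H) @ P_prior   # P_k = (I - K_k H) P_k^-
    return x_est, P
```

In use, one would call kalman_step once per time step, feeding in the latest control u_{k-1} and measurement z_k and carrying (x_est, P) forward.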

Tuning Parameters and Running the Filter
We now have the iterative update algorithm.
Time update:
$\hat{x}_k^- = A \hat{x}_{k-1} + B u_{k-1}$
$P_k^- = A P_{k-1} A^T + Q$
Measurement update:
$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$
$\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)$
$P_k = (I - K_k H) P_k^-$
To run the algorithm: first estimate $Q$ and $R$ offline, starting from some estimate and then tuning; this is known as system identification. Then start the filter with some initial $\hat{x}_0$ and $P_0$. Usually $P_k$ and $K_k$ will converge quickly.

Example: Estimating a Random Constant
Suppose we are measuring a random constant, e.g., the temperature of a light bulb.
System: $x_k = x_{k-1} + \omega_{k-1}$, with $x_k, \omega_k \in \mathbb{R}$ (i.e., $A = 1$ and there is no control input).
Observation: $z_k = x_k + \nu_k$, with $z_k, \nu_k \in \mathbb{R}$ (i.e., $H = 1$).
Filter update equations:
Time update:
$\hat{x}_k^- = \hat{x}_{k-1}$
$P_k^- = P_{k-1} + Q$
Measurement update:
$K_k = P_k^- (P_k^- + R)^{-1}$
$\hat{x}_k = \hat{x}_k^- + K_k (z_k - \hat{x}_k^-)$
$P_k = (1 - K_k) P_k^-$
Running the example (a runnable sketch follows below):
- If both the real (not our estimated) $Q$ and $R$ are large, the situation is hopeless.
- If $P_0$ is small, the filter trusts $\hat{x}_0$ a lot; if $\hat{x}_0$ is bad, it then takes a long time to converge.
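A runnable version of this scalar example (my own sketch; the true constant, Q, R, P_0, and the number of measurements are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

true_x = 0.5                  # the unknown constant being measured (assumed)
Q, R = 1e-5, 0.01             # process / measurement noise variances (assumed)
zs = true_x + rng.normal(0.0, np.sqrt(R), size=200)   # 200 noisy measurements

x_est, P = 0.0, 1.0           # initial guesses for x_0 and P_0
for z in zs:
    # Time update: the constant does not move; only the uncertainty grows
    x_prior, P_prior = x_est, P + Q
    # Measurement update
    K = P_prior / (P_prior + R)
    x_est = x_prior + K * (z - x_prior)
    P = (1.0 - K) * P_prior

print(x_est, P)               # x_est should end up close to 0.5, with small P
```

With a large P_0 the filter quickly forgets a bad initial guess; shrinking P_0 toward zero makes it trust x_0 more, matching the convergence remark above.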