The Kalman Filter (part 1). HCI/ComS 575X: Computational Perception.


The Kalman Filter (part 1)
HCI/ComS 575X: Computational Perception
Instructor: Alexander Stoytchev
http://www.cs.iastate.edu/~alex/classes/2007_spring_575x/
March 5, 2007
Iowa State University, SPRING 2007
Copyright 2007, Alexander Stoytchev

Rudolf Emil Kalman

Definition
A Kalman filter is simply an optimal recursive data processing algorithm. Under some assumptions the Kalman filter is optimal with respect to virtually any criterion that makes sense. [http://www.cs.unc.edu/~welch/kalman/kalmanbiblio.html]

Definition
The Kalman filter incorporates all information that can be provided to it. It processes all available measurements, regardless of their precision, to estimate the current value of the variables of interest.

Why do we need a filter?
- No mathematical model of a real system is perfect
- Real-world disturbances
- Imperfect sensors

Applications
- Radar tracking
- Lunar landing
- Missile tracking
- Sailing
- Robot navigation

More applications
- Other tracking
- Head tracking
- Face & hand tracking

Suggested Reading
- Brown, R. G. and Hwang, P. Y. C. (1992). Introduction to Random Signals and Applied Kalman Filtering. Ch. 5: The Discrete Kalman Filter.
- Maybeck, Peter S. (1979). Chapter 1 in Stochastic Models, Estimation, and Control. Mathematics in Science and Engineering Series, Academic Press.
- Gelb, A., Kasper, J., Nash, R., Price, C., and Sutherland, A. (1974). Applied Optimal Estimation. MIT Press.

A Simple Recursive Example
Problem statement: given the measurement sequence z_1, z_2, ..., z_n, find the mean.

First Approach
1. Make the first measurement z_1. Store z_1 and estimate the mean as µ_1 = z_1.
2. Make the second measurement z_2. Store z_2 along with z_1 and estimate the mean as µ_2 = (z_1 + z_2)/2.
3. Make the third measurement z_3. Store z_3 along with z_1 and z_2 and estimate the mean as µ_3 = (z_1 + z_2 + z_3)/3.
...
n. Make the n-th measurement z_n. Store z_n along with z_1, z_2, ..., z_n-1 and estimate the mean as µ_n = (z_1 + z_2 + ... + z_n)/n.

Second Approach
1. Make the first measurement z_1. Compute the mean estimate as µ_1 = z_1. Store µ_1 and discard z_1.
2. Make the second measurement z_2. Compute the estimate of the mean as a weighted sum of the previous estimate µ_1 and the current measurement z_2: µ_2 = (1/2)µ_1 + (1/2)z_2. Store µ_2 and discard z_2 and µ_1.

Second Approach (cont'd)
3. Make the third measurement z_3. Compute the estimate of the mean as a weighted sum of the previous estimate µ_2 and the current measurement z_3: µ_3 = (2/3)µ_2 + (1/3)z_3. Store µ_3 and discard z_3 and µ_2.
...
n. Make the n-th measurement z_n. Compute the estimate of the mean as a weighted sum of the previous estimate µ_n-1 and the current measurement z_n: µ_n = ((n-1)/n)µ_n-1 + (1/n)z_n. Store µ_n and discard z_n and µ_n-1.

Comparison Analysis
The second procedure (the recursive method) gives the same result as the first procedure (the batch method). It uses the result from the previous step to help obtain an estimate at the current step. The difference is that the recursive method does not need to keep the sequence in memory.

Second Approach (rewriting the general formula)
µ_n = ((n-1)/n)µ_n-1 + (1/n)z_n
µ_n = µ_n-1 + (1/n)(z_n - µ_n-1)
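To make the equivalence concrete, here is a minimal Python sketch (an illustration, not code from the slides) that computes the mean both ways and checks that they agree:

```python
import random

def batch_mean(measurements):
    """Batch method: keep every measurement and average them all at once."""
    return sum(measurements) / len(measurements)

def recursive_mean(measurements):
    """Recursive method: keep only the previous estimate.
    Implements mu_n = mu_{n-1} + (1/n) * (z_n - mu_{n-1})."""
    mu = 0.0
    for n, z in enumerate(measurements, start=1):
        mu = mu + (z - mu) / n  # old estimate + gain factor * innovation
    return mu

z = [random.gauss(10.0, 2.0) for _ in range(1000)]
assert abs(batch_mean(z) - recursive_mean(z)) < 1e-6
print(batch_mean(z), recursive_mean(z))
```

The recursive version stores a single number no matter how long the sequence gets, which is exactly the property the Kalman filter exploits.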

Second Approach (rewriting the general formula)
µ_n = µ_n-1 + (1/n)(z_n - µ_n-1)
That is: new estimate = old estimate + gain factor × (difference between the new reading and the old estimate).

The Gaussian Function
The Gaussian pdf with mean µ and variance σ² is
p(x) = (1 / (σ √(2π))) e^(-(x - µ)² / (2σ²))

Gaussian Properties
If X_1 ~ N(µ_1, σ_1²) and X_2 ~ N(µ_2, σ_2²) are independent, then their sum X_1 + X_2 is also Gaussian.
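As a quick sanity check of this property (a sketch, not from the slides), the following Python snippet evaluates the Gaussian pdf and verifies by sampling that the mean and variance of the sum behave as the next slide states:

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density with mean mu and standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

print(gaussian_pdf(0.0, 0.0, 1.0))  # peak of the standard normal, ~0.3989

# Sum of independent Gaussians: means add, variances add.
mu1, s1, mu2, s2 = 1.0, 2.0, 3.0, 0.5
samples = [random.gauss(mu1, s1) + random.gauss(mu2, s2) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, "vs", mu1 + mu2)       # ~4.0
print(var, "vs", s1**2 + s2**2)    # ~4.25
```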

pdf for the sum: X_1 + X_2 ~ N(µ_1 + µ_2, σ_1² + σ_2²).

Properties: Summation and Subtraction
For independent Gaussians both the sum and the difference are Gaussian, and the variances add in both cases: X_1 ± X_2 ~ N(µ_1 ± µ_2, σ_1² + σ_2²).

A simple example using diagrams
Conditional density of position based on the measured value of z_1: a Gaussian centered at the measured position, whose spread reflects the measurement uncertainty.

Conditional density of position based on measurement of z_2 alone: a second Gaussian, centered at measured position 2, with its own uncertainty σ_z2.

Conditional density of position based on data z_1 and z_2: combining the two measurements yields a single Gaussian around the position estimate, with smaller uncertainty than either measurement alone.

Propagation of the conditional density: as the system moves along its movement vector, the density is carried forward to the expected position just prior to taking measurement 3, spreading out along the way.

Updating the conditional density after the third measurement: the predicted density, with position uncertainty σ_x(t_3), is combined with measured position 3 (z_3) to produce the new position estimate x(t_3).

Questions?

How should we combine the two measurements? Now let's do the same thing, but this time we'll use math. The two measurement densities have standard deviations σ_z1 and σ_z2.

Calculating the new mean
Combining the two measurement densities gives a weighted average in which each measurement is weighted by the variance of the other:
µ = [σ_z2² / (σ_z1² + σ_z2²)] z_1 + [σ_z1² / (σ_z1² + σ_z2²)] z_2
Why is this not z_1? Because z_2 also carries information; the combined estimate is pulled toward whichever measurement is more certain.

Calculating the new variance
The new mean is a weighted sum µ = a z_1 + b z_2 with scaling factors a = σ_z2² / (σ_z1² + σ_z2²) and b = σ_z1² / (σ_z1² + σ_z2²), so the variance of the combined estimate follows from the Gaussian properties above.

Calculating the new variance (cont'd)
Applying the summation property naively, with the scaling factors a and b left unsquared, gives σ² = a σ_z1² + b σ_z2² = 2 σ_z1² σ_z2² / (σ_z1² + σ_z2²). Why is this result different from the one given in the paper? Remember the Gaussian properties?

Remember the Gaussian properties?
If X_1 ~ N(µ_1, σ_1²) and X_2 ~ N(µ_2, σ_2²) are independent, then a X_1 + b X_2 ~ N(a µ_1 + b µ_2, a² σ_1² + b² σ_2²). The scaling factors must be squared! In the variance it is a², not a.

Therefore the new variance is
σ² = a² σ_z1² + b² σ_z2² = σ_z1² σ_z2² / (σ_z1² + σ_z2²), or equivalently 1/σ² = 1/σ_z1² + 1/σ_z2².
Try to derive this on your own.

Another Way to Express the New Position
µ = z_1 + K (z_2 - z_1), where K = σ_z1² / (σ_z1² + σ_z2²) is the gain factor.
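A small Python sketch (an illustration with made-up numbers, not code from the slides) that fuses two scalar measurements and checks that the weighted-average, inverse-variance, and gain forms all agree:

```python
def fuse(z1, var1, z2, var2):
    """Fuse two independent Gaussian measurements of the same quantity.
    Returns the combined mean and variance."""
    a = var2 / (var1 + var2)            # weight on z1
    b = var1 / (var1 + var2)            # weight on z2
    mu = a * z1 + b * z2                # weighted average
    var = a**2 * var1 + b**2 * var2     # scaling factors squared!
    return mu, var

z1, var1 = 10.0, 4.0   # noisier measurement
z2, var2 = 12.0, 1.0   # more precise measurement
mu, var = fuse(z1, var1, z2, var2)

K = var1 / (var1 + var2)                               # gain factor
assert abs(mu - (z1 + K * (z2 - z1))) < 1e-12          # gain form of the mean
assert abs(var - var1 * var2 / (var1 + var2)) < 1e-12  # product-over-sum form
assert abs(1 / var - (1 / var1 + 1 / var2)) < 1e-12    # inverse variances add
print(mu, var)  # 11.6, 0.8: pulled toward z2, tighter than either measurement
```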

Another Way to Express the New Position (cont'd)
The equation for the variance can also be rewritten as σ² = (1 - K) σ_z1².

Adding Movement
Between measurements the estimate must be propagated through the motion: the movement shifts the mean, and the motion uncertainty is added to the variance, so the density flattens out until the next measurement arrives.

Properties of K
If the measurement noise is large, K is small (K → 0), so a noisy measurement changes the estimate only slightly.
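Putting the pieces together, here is a minimal one-dimensional predict/update loop in Python (a sketch with assumed noise values and motion model, not code from the slides):

```python
import random

def predict(mu, var, u, motion_var):
    """Adding movement: the motion u shifts the mean, motion noise grows the variance."""
    return mu + u, var + motion_var

def update(mu, var, z, meas_var):
    """Fold in a new measurement z using the gain form of the fusion equations."""
    K = var / (var + meas_var)   # gain: near 1 when the measurement is trusted
    mu = mu + K * (z - mu)       # old estimate + gain * innovation
    var = (1 - K) * var          # variance shrinks after every update
    return mu, var

# Track a point moving 1.0 unit per step, starting from a very uncertain guess.
true_x, mu, var = 0.0, 0.0, 100.0
motion_var, meas_var = 0.1, 2.0
for _ in range(20):
    true_x += 1.0
    mu, var = predict(mu, var, 1.0, motion_var)
    z = random.gauss(true_x, meas_var ** 0.5)   # noisy sensor reading
    mu, var = update(mu, var, z, meas_var)
print(mu, true_x, var)  # estimate near the true position, small steady-state variance
```

This scalar loop contains the same predict/update structure as the full Kalman filter.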

THE END