Tracking with Kalman Filter


Tracking with Kalman Filter
Scott T. Acton
Virginia Image and Video Analysis (VIVA), Charles L. Brown Department of Electrical and Computer Engineering, Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22904

Tracking
The tracking problem can usually be broken down into two subproblems:
(1) Acquisition/Detection: finding the object of interest (the target) for the first time
(2) Tracking/Prediction: guessing where it's going to be in the next frame

Acquisition/Detection
- Centroid Trackers -- find the centroid in the region of interest
- Edge Trackers -- track the leading edge of a target
- Outline Trackers -- track the outline of the target (using edge detection)
- Template Trackers -- track a target template
- Deformable Template Trackers -- change the template as you go
- Adaptive Template Trackers -- let the image sequence re-define the template
- Snake Trackers -- track the boundary of an object by matching the corresponding snake between two frames

Prediction/Estimation
The goal of the prediction portion is to estimate the next position of the target, given the previously acquired image sequence.
Once we have an estimate of the next target position, we can look at a small subimage for the target -- and avoid searching the whole image again.
This subimage is called the track gate -- the size of the track gate is dictated by the accuracy of the tracker.
The solution to the prediction problem borrows heavily from estimation theory -- we'll discuss the standard solution.
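As a concrete illustration of the track-gate idea, here is a minimal sketch of extracting a gate subimage around a predicted position; the gate half-size, frame size, and function name are assumptions for the example, not part of the original notes.

```python
# Minimal sketch: extract the track gate G, a subimage of the frame centered
# at the predicted target position (i_hat, j_hat), clipped to the image borders.
import numpy as np

def track_gate(frame, i_hat, j_hat, half_size=16):
    rows, cols = frame.shape[:2]
    r0 = max(int(round(i_hat)) - half_size, 0)
    r1 = min(int(round(i_hat)) + half_size + 1, rows)
    c0 = max(int(round(j_hat)) - half_size, 0)
    c1 = min(int(round(j_hat)) + half_size + 1, cols)
    return frame[r0:r1, c0:c1], (r0, c0)   # offset maps gate coordinates back to the frame

# Example: a roughly 33x33 gate around a predicted position in a 480x640 frame
frame = np.zeros((480, 640))
G, offset = track_gate(frame, i_hat=122.7, j_hat=310.2)
```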

Kalman Filter
The Kalman filter is the optimal filter (in the least mean squared error sense) for track prediction.
The Kalman filter is used heavily by control theorists -- it's used everywhere from the Space Shuttle to the Patriot missile system to the NY stock exchange.
If we assume a constant velocity model for our target, the Kalman filter reduces to the alpha-beta filter -- we'll see alpha and beta soon.

Kalman Filter
The Kalman filter is a combination of a predictor and a filter:
- The predictor estimates the location of the target at time k given k-1 observations.
- When observation k arrives, the estimate is improved using an optimal filter and then used to predict the target position at time k+1; the filtered estimate is the best estimate of the true location of the target given k observations at time k.
Both the predictor and the filter are linear systems.
The Kalman filter will not be derived here; the equations will be set up and explained for a tracking application.

Glossary of Terms
- $X_k$ -- an image sequence
- $k$ -- time; $k = 0$ denotes the first acquisition
- $i, j$ -- row and column positions in the image
- $\hat{i}_{k|k-1}$ -- row of the predicted target at time $k$ given the first $k-1$ observations
- $\hat{j}_{k|k-1}$ -- column of the predicted target at time $k$ given the first $k-1$ observations
- $G$ -- the track gate, a subimage of $X_k$
- $i_k$ -- row at time $k$
- $j_k$ -- column at time $k$
- $\hat{v}^i_{k+1|k}$ -- velocity estimate for time $k+1$ in the $i$ direction
- $\hat{v}^j_{k+1|k}$ -- same for the $j$ direction
- $v^i_k$ -- velocity at time $k$ in the $i$ direction
- $v^j_k$ -- same for the $j$ direction
- $u^i_k$ -- velocity drift noise process in the $i$ direction
- $\delta t$ -- change in time between frames
- $i^0_k$ -- observed value (sometimes called $z_k$ in other places; same goes for $j$) -- we get this observation from the template matcher, centroid finder, matched filter, etc.

Motion Model: Constant Velocity
We will use a constant velocity model:
$$ i_{k+1} = i_k + \delta t \, v^i_k $$
This is a linear system. We're saying that the first derivative (of position) is fairly constant, and the second derivative is almost zero. We will model a small acceleration (called the "drift") as a white noise process. So, as long as our temporal sampling rate is sufficient, this should be a good model for motion. Here, acceleration is viewed as a noise process, and we just have to choose a reasonable variance.
Aside: we can have a constant acceleration model with the Kalman filter -- this is the alpha-beta-gamma filter. This model has acceleration terms in addition to position and velocity (for each direction, $i$ and $j$).
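To make the motion model concrete, here is a small sketch simulating a few frames of the constant-velocity model with white-noise velocity drift; the frame rate, drift variance, and starting state are assumed example values.

```python
# Sketch of the constant-velocity motion model with white-noise velocity drift.
import numpy as np

rng = np.random.default_rng(0)
delta_t = 1.0 / 30.0      # assumed time between frames (30 fps)
sigma_u = 2.0             # assumed std. dev. of the velocity drift noise u_k

i, v_i = 100.0, 15.0      # row position and row velocity at time k (example values)
for k in range(5):
    i = i + delta_t * v_i                  # i_{k+1} = i_k + delta_t * v^i_k  (linear system)
    v_i = v_i + rng.normal(0.0, sigma_u)   # small acceleration modeled as white noise drift
    print(f"k={k+1}: i={i:.2f}, v_i={v_i:.2f}")
```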

Prediction of Position
Predicted position:
(*) $\hat{i}_{k+1|k} = \hat{i}_{k|k} + \delta t \, \hat{v}^i_{k+1|k}$
Filtered estimate of position:
(**) $\hat{i}_{k|k} = \hat{i}_{k|k-1} + \alpha_k \left( i^0_k - \hat{i}_{k|k-1} \right) = (1 - \alpha_k)\, \hat{i}_{k|k-1} + \alpha_k \, i^0_k$
The gain $\alpha_k$ determines the balance between the previous track history and the new observation:
- If $\alpha_k$ is large (near 1), we believe the observations are very reliable (this is essentially ignoring the track history).
- If it's small (near 0), we believe that there is a lot of measurement noise (this is essentially ignoring the observation).

Prediction of Velocity
Predicted velocity:
(***) $\hat{v}^i_{k+1|k} = \hat{v}^i_{k|k-1} + \beta_k \left( i^0_k - \hat{i}_{k|k-1} \right) / \delta t$
The Kalman gain $\beta_k$ controls how we let the new observation affect the predicted velocity:
- If it's near 0, it means that we think the observations are unreliable and that the actual velocity is REALLY a constant -- in this case, the observation does not affect the predicted velocity.
- If it's near 1, then the observations are reliable (we think!). Here, we allow the velocity to drift (acceleration).
All of the above equations repeat for the $j$ direction.
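A minimal sketch of one alpha-beta update in the $i$ (row) direction, following equations (*), (**), and (***); the gains and the numbers in the example call are assumed values.

```python
# One alpha-beta step: filter the position (**), predict the velocity (***),
# then predict the next position (*).
def alpha_beta_step(i_pred, v_pred, i_obs, alpha_k, beta_k, delta_t):
    """i_pred = i_hat_{k|k-1}, v_pred = v_hat_{k|k-1}, i_obs = observed row i^0_k.
    Returns (i_hat_{k|k}, v_hat_{k+1|k}, i_hat_{k+1|k})."""
    residual = i_obs - i_pred
    i_filt = i_pred + alpha_k * residual             # (**) filtered position
    v_next = v_pred + beta_k * residual / delta_t    # (***) predicted velocity
    i_next = i_filt + delta_t * v_next               # (*) predicted position
    return i_filt, v_next, i_next

# Example with made-up numbers: the observation pulls the estimate toward itself
print(alpha_beta_step(i_pred=120.0, v_pred=3.0, i_obs=124.0,
                      alpha_k=0.6, beta_k=0.3, delta_t=1.0))
```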

Computing Kalman Gains
For the alpha-beta filter (the constant velocity model), we can pre-compute the gains -- yes, before we implement the tracker (this assumes stationarity).
Also, the gains converge quickly to constants -- so we don't have to compute an infinite number of the gain values.
Then, we can just plug the observations into the three prediction and filtering equations and "Kalmanize"!
The gains depend upon the noise variances and the state vector error covariance matrix (see the Appendix for details on how to compute the gains).

Executing the Kalman Filter
(1) Use the starting state conditions to get alpha and beta for several k's (until convergence) -- pre-store these values (the first k = 1) (see Appendix, equations (****)).
(2) Acquire the target using the whole image to get initial coordinates.
(3) Use (**) to get the filtered position, then (***) to get the predicted velocity, then (*) to get the predicted position (same goes for the corresponding j terms).
(4) Acquire the target within the track gate centered at the predicted position.
(5) Go to (3).
Processes for i and j run independently and concurrently.
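Here is a sketch of steps (1)-(5) as a tracking loop; the detector callable, the pre-computed gain lists, and the initial velocity of zero are assumptions for illustration rather than part of the original notes.

```python
# Skeleton of the tracking loop in steps (1)-(5). `detect` stands in for the
# acquisition stage (template matcher, centroid finder, ...) and `alphas`,
# `betas` are the pre-computed, converging gain schedules from step (1).
def run_tracker(frames, detect, alphas, betas, delta_t):
    # (2) acquire the target over the whole first image
    i_pred, j_pred = detect(frames[0], gate=None)
    v_i = v_j = 0.0                          # initial velocity set to a constant
    for k, frame in enumerate(frames[1:], start=1):
        g = min(k - 1, len(alphas) - 1)      # gains converge, so reuse the last value
        # (4) acquire the target inside the track gate around the prediction
        i_obs, j_obs = detect(frame, gate=(i_pred, j_pred))
        # (3) filtered position (**), predicted velocity (***), predicted position (*)
        i_filt = i_pred + alphas[g] * (i_obs - i_pred)
        v_i   += betas[g] * (i_obs - i_pred) / delta_t
        j_filt = j_pred + alphas[g] * (j_obs - j_pred)   # j runs independently
        v_j   += betas[g] * (j_obs - j_pred) / delta_t
        i_pred = i_filt + delta_t * v_i
        j_pred = j_filt + delta_t * v_j
        yield (i_filt, j_filt), (i_pred, j_pred)         # (5) repeat for the next frame
```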

More
If there are no observations at time k, the track is coasted (a minimal sketch follows after this slide):
- we use observed position = predicted position
- the next predicted position is then simply the last predicted position plus the velocity multiplied by the frame time

Problems with Kalman
Some constants used in computing the gains are difficult to obtain.
- Actual errors may not comply statistically with the Kalman model -- the "divergence phenomenon."
- The real world may not obey the Kalman assumptions:
  (1) observations are signal plus white noise
  (2) the signal can be modeled by a linear system driven by white noise
  (3) all parameters of the two noise processes and the linear system are known precisely
How do these terms relate to our tracker?
- the signal: the position of the target
- noise: we assume white noise drift in the target velocity and measurement noise in the target location
- the linear system: the constant velocity model
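A minimal sketch of coasting a track when no detection is available at time k, under the same constant-velocity assumptions:

```python
# Coasting: substitute the prediction for the missing observation, so the
# residual is zero and the next prediction just extrapolates with the velocity.
def coast(i_pred, v_i, delta_t):
    i_filt = i_pred                    # observed position := predicted position
    i_next = i_pred + delta_t * v_i    # next prediction = last prediction + v * dt
    return i_filt, i_next
```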

Appendix: Kalman Gains
Let the state vector $X$ be defined for our system as:
$$ X_k = \begin{bmatrix} i_k \\ v^i_k \end{bmatrix} $$
The state vector of the predictor is
$$ \hat{X}_{k|k-1} = \begin{bmatrix} \hat{i}_{k|k-1} \\ \hat{v}^i_{k|k-1} \end{bmatrix} $$
The state vector for the filter, $\hat{X}_{k|k}$, is constructed similarly. (These are the state vectors for the $i$ direction; the same goes for the $j$ direction.)
We'd like to measure the error in the predictions and in the filtered results, so that we can minimize error.
The error in the predicted state vector is $X_k - \hat{X}_{k|k-1}$ and the error for the filter state vector is $X_k - \hat{X}_{k|k}$.
The above errors are stochastic vectors -- hence, they have covariance matrices.
The predicted state vector error covariance matrix is:
$$ P_{k|k-1} = E\left[ \left( X_k - \hat{X}_{k|k-1} \right)\left( X_k - \hat{X}_{k|k-1} \right)^T \right] $$
The filtered state vector error covariance matrix is:
$$ P_{k|k} = E\left[ \left( X_k - \hat{X}_{k|k} \right)\left( X_k - \hat{X}_{k|k} \right)^T \right] $$
Function of the Kalman filter: choose $\alpha_k$ and $\beta_k$ to minimize $P_{k|k}$.
Why? You want the minimum mean squared error estimate.

For the noise processes, we have:
- $\sigma_n^2$ -- the measurement noise variance
- $\sigma_u^2$ -- the velocity drift noise variance
The solution that minimizes $P_{k|k}$ (we won't prove it here) is:
$$ \alpha_k = \frac{P^{11}_{k|k-1}}{P^{11}_{k|k-1} + \sigma_n^2} \qquad \text{and} \qquad \beta_k = \frac{\delta t \, P^{21}_{k|k-1}}{P^{11}_{k|k-1} + \sigma_n^2} \qquad (****) $$
So, we have equations for $\alpha_k$ and $\beta_k$, but $P_{k|k-1}$ is changing with each iteration.
For our constant velocity alpha-beta model, $P_{k|k-1}$ can be computed recursively as follows:
$$ P^{11}_{k+1|k} = P^{11}_{k|k-1} + 2P^{12}_{k|k-1} + P^{22}_{k|k-1} - \frac{\left( P^{11}_{k|k-1} + P^{12}_{k|k-1} \right)^2}{P^{11}_{k|k-1} + \sigma_n^2} $$
$$ P^{12}_{k+1|k} = P^{12}_{k|k-1} + P^{22}_{k|k-1} - \frac{P^{12}_{k|k-1} \left( P^{11}_{k|k-1} + P^{12}_{k|k-1} \right)}{P^{11}_{k|k-1} + \sigma_n^2} $$
$$ P^{21}_{k+1|k} = P^{12}_{k+1|k} $$
$$ P^{22}_{k+1|k} = P^{22}_{k|k-1} + \sigma_u^2 - \frac{P^{12}_{k|k-1} P^{21}_{k|k-1}}{P^{11}_{k|k-1} + \sigma_n^2} $$
We just need initial conditions for $P_{k|k-1}$ (that is, $P_{1|0}$).
To do this correctly, we need to define two additional variances:
- $\sigma_i^2$ -- variance in the initial row position (there's a corresponding term for the column position $j$)
- $\sigma_v^2$ -- variance in the initial velocity in the $i$ direction
If we assume that the initial position is a uniformly distributed random variable over the N possible rows, then computing $\sigma_i^2$ is easy -- just consult the probability textbooks.
The computation of $\sigma_v^2$ can be derived in the same way -- determine the minimum and maximum possible velocities, then assume that velocity is uniformly distributed over the possible velocities.
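The gain schedule can be tabulated before tracking begins by iterating (****) together with the covariance recursion above. The sketch below assumes one frame as the time unit ($\delta t = 1$) and takes the noise variances and the initial $P_{1|0}$ as given inputs.

```python
# Sketch: pre-compute the converging alpha_k, beta_k gain schedules by iterating
# the P_{k+1|k} recursion above (delta_t taken as 1 frame).
def alpha_beta_gains(p, sigma_n2, sigma_u2, n_steps=50):
    """p = [[P11, P12], [P21, P22]] for P_{1|0}; returns (alphas, betas)."""
    alphas, betas = [], []
    for _ in range(n_steps):
        p11, p12, p21, p22 = p[0][0], p[0][1], p[1][0], p[1][1]
        d = p11 + sigma_n2
        alphas.append(p11 / d)                              # alpha_k  (****)
        betas.append(p21 / d)                               # beta_k, with delta_t = 1
        n11 = p11 + 2 * p12 + p22 - (p11 + p12) ** 2 / d    # P11_{k+1|k}
        n12 = p12 + p22 - p12 * (p11 + p12) / d             # P12_{k+1|k}
        n22 = p22 + sigma_u2 - p12 * p21 / d                # P22_{k+1|k}
        p = [[n11, n12], [n12, n22]]                        # P21 = P12 (symmetric)
    return alphas, betas

# Example with assumed variances and initial covariance: the gains settle quickly
alphas, betas = alpha_beta_gains([[100.0, 0.0], [0.0, 25.0]], sigma_n2=4.0, sigma_u2=1.0)
print(alphas[-1], betas[-1])
```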

Now we can compute the filtered state vector error covariance at time 0:
$$ P_{0|0} = E\left[ \left( X_0 - \hat{X}_{0|0} \right)\left( X_0 - \hat{X}_{0|0} \right)^T \right] = \begin{bmatrix} \sigma_i^2 & 0 \\ 0 & \sigma_v^2 \end{bmatrix} $$
Then $P_{1|0}$ can be computed from:
$$ P_{1|0} = A_0 P_{0|0} A_0^T + Q_0 $$
Note that the above indicates matrix multiplication. For our tracker:
$$ A_0 = \begin{bmatrix} 1 & \delta t \\ 0 & 1 \end{bmatrix} \quad \text{(the state transition matrix)} $$
and
$$ Q_0 = \begin{bmatrix} \sigma_n^2 & 0 \\ 0 & \sigma_u^2 \end{bmatrix} \quad \text{(covariance of the noise processes)} $$
Other Track Initiation information:
(1) Set $\hat{i}_{0|0}$ to the first acquired position.
(2) The first velocity estimate is indeterminate (or can be set to a constant -- preferably).
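A short sketch of this track-initiation step; the number of rows N, the velocity range, and the noise variances are assumed example values, with the uniform-distribution variances filled in from standard formulas.

```python
# Sketch of track initiation: build P_{0|0} from uniform priors on the initial
# position and velocity, then compute P_{1|0} = A0 P_{0|0} A0^T + Q0.
import numpy as np

N = 480                                  # assumed number of possible rows
sigma_i2 = (N ** 2 - 1) / 12.0           # variance of a discrete uniform over N rows
v_min, v_max = -20.0, 20.0               # assumed min/max velocity (pixels/frame)
sigma_v2 = (v_max - v_min) ** 2 / 12.0   # variance of a uniform over [v_min, v_max]

sigma_n2 = 4.0                           # assumed measurement-noise variance
sigma_u2 = 1.0                           # assumed velocity-drift variance
delta_t = 1.0                            # one frame as the time unit

P00 = np.diag([sigma_i2, sigma_v2])          # filtered covariance at time 0
A0 = np.array([[1.0, delta_t], [0.0, 1.0]])  # state transition matrix
Q0 = np.diag([sigma_n2, sigma_u2])           # noise covariance, as in the notes
P10 = A0 @ P00 @ A0.T + Q0                   # initial predicted covariance P_{1|0}
print(P10)
```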