
Proceedings of International Joint Conference on Neural Networks, San Jose, California, USA, July 31 - August 5, 2011

Extended Kalman Filter Using a Kernel Recursive Least Squares Observer

Pingping Zhu, Badong Chen, and José C. Príncipe

Abstract: In this paper, a novel methodology is proposed to solve the state estimation problem by combining the extended Kalman filter (EKF) with a kernel recursive least squares (KRLS) algorithm (EKF-KRLS). The EKF algorithm estimates hidden states in the input space, while the KRLS algorithm estimates the measurement model. The algorithm works well without knowing the linear or nonlinear measurement model. We apply this algorithm to vehicle tracking and compare its performance with the traditional Kalman filter, EKF and KRLS algorithms. The results demonstrate that the EKF-KRLS algorithm outperforms these existing algorithms; especially when nonlinear measurement functions are applied, its advantage is very obvious.

I. INTRODUCTION

THE Kalman filter (KF), rooted in the state-space formulation of linear dynamical systems, provides a recursive solution to the linear optimal filtering problem, first given by Kalman [1]. It applies to stationary as well as nonstationary linear dynamical environments. However, the application of the KF to nonlinear systems can be difficult. Several extended algorithms have been proposed to address this problem, such as the extended Kalman filter (EKF) [3]-[6] and the unscented Kalman filter (UKF) [6]-[9]. All of these algorithms require exact knowledge of the dynamics in order to perform filtering. Specifically, for the Kalman filter one has to know the transition matrices and the measurement matrices, and the process noise and observation noise are both assumed to be zero-mean Gaussian. For the EKF and UKF algorithms, one also has to know the transition functions and measurement functions. However, for many applications, these matrices, functions or noise assumptions cannot be obtained easily or correctly.

In this paper, we still assume that the transition matrix is known, but the measurement matrix or function is assumed unknown; we learn it directly from the real data. We construct the linear state model in the input space as in the Kalman filter, while constructing the measurement model in a reproducing kernel Hilbert space (RKHS), a space of functions. We learn the measurement function in the RKHS from the estimated hidden states using the kernel recursive least squares (KRLS) algorithm. Then we use the current estimated measurement function and the transition matrix to estimate the hidden states for the next step.

The organization of the paper is as follows. In Section II, the traditional Kalman filter is reviewed briefly. The KRLS algorithm is described in Section III. In Section IV the EKF-KRLS algorithm is proposed, based on the Kalman filter and KRLS algorithms. In Section V, two vehicle tracking experiments are studied to compare this algorithm with the other existing algorithms. Finally, Section VI gives the discussion and summarizes the conclusion and future lines of research.

(Pingping Zhu, Badong Chen and José C. Príncipe are with the Department of Electrical and Computer Engineering, The University of Florida, Gainesville, USA (email: ppzhu, chenbd, principe@cnel.ufl.edu). This work was supported by NSF award ECCS 0856441.)

II. REVIEW OF KALMAN FILTERING

The concept of state is fundamental in the Kalman filter. The state vector, or simply state, denoted by $x_i$, is defined as the minimal model that is sufficient to uniquely describe the unforced dynamical behavior of the system; the subscript $i$ denotes discrete time. Typically, the state $x_i$ is unobservable.
To estimate it, we use a set of observed data, denoted by the vector $y_i$. The model can be expressed mathematically as

$x_{i+1} = F_i x_i + w_i$  (1)

$y_i = H_i x_i + v_i$  (2)

where $F_i$ is the transition matrix taking the state $x_i$ from time $i$ to time $i+1$, and $H_i$ is the measurement matrix. The process noise $w_i$ is assumed to be zero-mean, additive, white Gaussian noise, with the covariance matrix defined by

$E[w_i w_j^T] = \begin{cases} S_i & \text{for } i = j \\ 0 & \text{for } i \neq j \end{cases}$  (3)

Similarly, the measurement noise $v_i$ is assumed to be zero-mean, additive, white Gaussian noise, with the covariance matrix defined by

$E[v_i v_j^T] = \begin{cases} R_i & \text{for } i = j \\ 0 & \text{for } i \neq j \end{cases}$  (4)

Suppose that a measurement on a linear dynamical system described by (1) and (2) has been made at time $i$. The requirement is to use the information contained in the new measurement $y_i$ to update the estimate of the unknown hidden state $x_i$. Let $\hat{x}_i^-$ denote the a priori estimate of the state, which is already available at time $i$. In the Kalman filter algorithm, the hidden state $x_i$ is estimated as a linear combination of $\hat{x}_i^-$ and the new measurement $y_i$, in the form

$\hat{x}_i = \hat{x}_i^- + K_i (y_i - H_i \hat{x}_i^-)$  (5)

where the matrix $K_i$ is called the Kalman gain.
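As an illustration, one cycle of the recursion (1)-(5) is compact enough to state in code. The following sketch (ours, in Python with NumPy, not from the original paper) takes the time-$i$ matrices as arguments:

```python
import numpy as np

def kalman_step(x_hat, P, y, F, H, S, R):
    """One predict/update cycle of eqs. (1)-(5): returns the posterior
    state estimate and error covariance."""
    # State estimate and error covariance propagation
    x_prior = F @ x_hat                   # x^-_i = F_{i-1} x_hat_{i-1}
    P_prior = F @ P @ F.T + S             # P^-_i = F P F^T + S
    # Kalman gain
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    # Measurement update, eq. (5)
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = (np.eye(len(x_post)) - K @ H) @ P_prior
    return x_post, P_post
```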

The Kalman filter algorithm is summarized in Algorithm 1; the details can be found in [2][4].

Algorithm 1: Kalman filter
Initialization: For $i = 0$, set
  $\hat{x}_0 = E[x_0]$
  $P_0 = E[(x_0 - E[x_0])(x_0 - E[x_0])^T]$
Computation: For $i = 1, 2, \ldots$, compute:
  State estimate propagation: $\hat{x}_i^- = F_{i-1} \hat{x}_{i-1}$
  Measurement prediction: $\hat{y}_i = H_i \hat{x}_i^-$
  Error covariance propagation: $P_i^- = F_{i-1} P_{i-1} F_{i-1}^T + S_{i-1}$
  Kalman gain matrix: $K_i = P_i^- H_i^T \left[ H_i P_i^- H_i^T + R_i \right]^{-1}$
  State estimate update: $\hat{x}_i = \hat{x}_i^- + K_i (y_i - H_i \hat{x}_i^-)$
  Error covariance update: $P_i = (I - K_i H_i) P_i^-$

Here, $P_i^-$ and $P_i$ are the a priori and a posteriori covariance matrices, respectively, defined as

$P_i^- = E[(x_i - \hat{x}_i^-)(x_i - \hat{x}_i^-)^T]$,  (6)

$P_i = E[(x_i - \hat{x}_i)(x_i - \hat{x}_i)^T]$.  (7)

III. KERNEL RECURSIVE LEAST SQUARES ALGORITHM

In this section we discuss another filtering algorithm, the kernel recursive least squares (KRLS) algorithm, a nonlinear kernel-based version of the recursive least squares (RLS) algorithm proposed by Y. Engel [10]. We consider a recorded sequence of input and output samples $D_i = \{(x_1, y_1), \ldots, (x_i, y_i)\}$ arising from some unknown source. In the prediction problem, one attempts to find the best predictor $\hat{y}_i$ for $y_i$ given $D_{i-1} \cup \{x_i\}$. In this context, one is often interested in on-line applications, where the predictor is updated following the arrival of each new sample. The KRLS algorithm assumes a functional form, e.g. $\hat{y}_i = f_i(x_i)$, and minimizes the cost function

$J_i = \min_f \sum_{j=1}^{i} |y_j - f(x_j)|^2 + \lambda \|f(\cdot)\|^2$.  (8)

In the reproducing kernel Hilbert space (RKHS) denoted by $H$, a space of functions, a function $f(\cdot)$ is expressed as an infinite-dimensional vector in $H$, denoted by $w \in H$, and the evaluation of the function $f(x)$ is expressed as the inner product between $w$ and $\varphi(x)$, where $\varphi(\cdot)$ maps $x$ into $H$ [12]-[15]:

$f(x) = \langle w, \varphi(x) \rangle = w^T \varphi(x)$.  (9)

So the cost function is rewritten as

$J_i = \min_w \sum_{j=1}^{i} |y_j - w^T \varphi(x_j)|^2 + \lambda \|w\|^2$.  (10)

The KRLS algorithm minimizes the cost function (10) recursively and estimates $w$ as a linear combination of $\{\varphi(x_j)\}_{j=1}^{i}$:

$w_i = \sum_{j=1}^{i} a_i(j) \varphi(x_j) = \Phi_i a(i)$  (11)

where $\Phi_i = [\varphi(x_1), \ldots, \varphi(x_i)]$ and $a(i) = [a_i(1), \ldots, a_i(i)]^T$. The KRLS algorithm is summarized in Algorithm 2; the details can be found in [10][11].

Algorithm 2: Kernel Recursive Least Squares (KRLS)
Initialize: $Q(1) = (\lambda + k(x_1, x_1))^{-1}$, $a(1) = Q(1) y_1$
Iterate for $i > 1$:
  $h(i) = [k(x_i, x_1), \ldots, k(x_i, x_{i-1})]^T$
  $z(i) = Q(i-1) h(i)$
  $r(i) = \lambda + k(x_i, x_i) - z(i)^T h(i)$
  $Q(i) = r(i)^{-1} \begin{bmatrix} Q(i-1) r(i) + z(i) z(i)^T & -z(i) \\ -z(i)^T & 1 \end{bmatrix}$
  $e(i) = y_i - h(i)^T a(i-1)$
  $a(i) = \begin{bmatrix} a(i-1) - z(i) r(i)^{-1} e(i) \\ r(i)^{-1} e(i) \end{bmatrix}$

In this paper, $y$ and $\mathbf{y}$ are used to denote a scalar and a $d \times 1$ vector, respectively, and $1_d$ denotes a $d \times 1$ vector of ones in the next section.
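To make Algorithm 2 concrete, here is a minimal growing-dictionary implementation for scalar outputs (a sketch under our own naming, assuming a Gaussian kernel and no sparsification; the paper itself recommends ALD to bound the dictionary size):

```python
import numpy as np

def gauss_kernel(x, y, sigma=5.0):
    """Gaussian kernel, cf. eq. (28)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-np.sum(d ** 2) / (2 * sigma ** 2)))

class KRLS:
    """Kernel RLS following Algorithm 2 (growing dictionary, scalar y)."""
    def __init__(self, x1, y1, lam=0.1, kernel=gauss_kernel):
        self.kernel, self.lam = kernel, lam
        self.X = [x1]                                   # dictionary of inputs
        self.Q = np.array([[1.0 / (lam + kernel(x1, x1))]])
        self.a = np.array([self.Q[0, 0] * y1])          # a(1) = Q(1) y_1

    def predict(self, x):
        h = np.array([self.kernel(x, xj) for xj in self.X])
        return h @ self.a                               # y_hat = h(i)^T a(i-1)

    def update(self, x, y):
        h = np.array([self.kernel(x, xj) for xj in self.X])
        z = self.Q @ h
        r = self.lam + self.kernel(x, x) - z @ h
        e = y - h @ self.a                              # innovation e(i)
        n = len(self.X)
        Q = np.empty((n + 1, n + 1))                    # grow Q(i) by one row/col
        Q[:n, :n] = self.Q * r + np.outer(z, z)
        Q[:n, n] = -z
        Q[n, :n] = -z
        Q[n, n] = 1.0
        self.Q = Q / r
        self.a = np.concatenate([self.a - z * (e / r), [e / r]])
        self.X.append(x)
```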

IV. EKF-KRLS ALGORITHM

Recalling the measurement model in the Kalman filter, we can extend it to a nonlinear model as

$y_i = h_i(x_i) + v_i$.  (12)

If we map the hidden state $x_i$ into an RKHS $H$ which contains the function $h(\cdot)$, then (12) is expressed as

$y_i = \langle h_i, \varphi(x_i) \rangle_H + v_i$.  (13)

We reformulate the Kalman filter in two spaces: the hidden state model stays in the input space, while the measurement model is constructed in the RKHS $H$. The whole system is reformulated as

$x_{i+1} = F_i x_i + w_i$  (input space)  (14)

$y_i = \langle h_i, \varphi(x_i) \rangle_H + v_i$  ($H$ space)  (15)

Here we assume the transition matrix $F_i$ is known, but the nonlinear function $h_i(\cdot)$ is unknown. The assumptions on the noise terms $w_i$ and $v_i$ are the same as in Section II.

Now, we revisit the Kalman filter algorithm to derive the novel algorithm. Because the hidden state model is unchanged, what needs to be reconsidered are the Kalman gain matrix $K_i$ and the error covariance update equations. In light of [2], the Kalman gain is defined as

$K_i = E[x_{i+1} e_i^T] R_{e,i}^{-1}$,  (16)

where $e_i = y_i - \hat{y}_i = y_i - h_i(\hat{x}_i^-)$ and $R_{e,i} = E[e_i e_i^T]$. It is important to note that $e_i$ and $e(i)$ are different. According to the orthogonality principle, we have

$R_{e,i} = E[e_i e_i^T] = E[(h_i(x_i) - h_i(\hat{x}_i^-) + v_i)(h_i(x_i) - h_i(\hat{x}_i^-) + v_i)^T] = E[(h_i(x_i) - h_i(\hat{x}_i^-))(h_i(x_i) - h_i(\hat{x}_i^-))^T] + R_i$  (17)

$E[x_{i+1} e_i^T] = F_i E[x_i e_i^T] + E[w_i e_i^T]$  (18)

with the terms $E[x_i e_i^T]$ and $E[w_i e_i^T]$ given by

$E[x_i e_i^T] = E[(x_i - \hat{x}_i^- + \hat{x}_i^-) e_i^T] = E[(x_i - \hat{x}_i^-) e_i^T]$ (since $\hat{x}_i^- \perp e_i$) $= E[(x_i - \hat{x}_i^-)(h_i(x_i) - h_i(\hat{x}_i^-))^T]$ (since $(x_i - \hat{x}_i^-) \perp v_i$),  (19)

$E[w_i e_i^T] = E[w_i (h_i(x_i) - h_i(\hat{x}_i^-) + v_i)^T] = E[w_i v_i^T] = 0$ (since $w_i \perp (h_i(x_i) - h_i(\hat{x}_i^-))$, and $w_i \perp v_i$ by assumption).  (20)

As in the Kalman filter algorithm, we also need the error covariance $P_i^- = E[(x_i - \hat{x}_i^-)(x_i - \hat{x}_i^-)^T]$. We employ a first-order Taylor approximation of the nonlinear function $h_i(\cdot)$ around $\hat{x}_i^-$. Specifically, $h_i(x_i)$ is approximated as

$h_i(x_i) = h_i(\hat{x}_i^- + (x_i - \hat{x}_i^-)) \approx h_i(\hat{x}_i^-) + \nabla h_i \big|_{\hat{x}_i^-} (x_i - \hat{x}_i^-)$.  (21)

With the above approximation, substituting (17) and (18) into (16) yields

$K_i \approx P_i^- H_i^T \left[ H_i P_i^- H_i^T + R_i \right]^{-1}$  (22)

where $H_i = \nabla h_i \big|_{\hat{x}_i^-}$. The error covariance update equation is the same, with the approximated Kalman gain. Once $H_i$ is estimated at time $i$, we can apply the Kalman filter algorithm to solve the prediction problem. Up to this point the derivation is very similar to the EKF algorithm, except that the hidden state model is not nonlinear and the function $h_i(\cdot)$ is assumed unknown. In order to obtain $H_i = \nabla h_i |_{\hat{x}_i^-}$, we use the KRLS algorithm of Section III to estimate the function $h_i(\cdot)$ from the predicted hidden states $\hat{x}_j$ $(j \leq i)$. At every time step, the EKF part estimates the state using the current estimated function $\hat{h}_i(\cdot)$, while the KRLS part estimates the unknown function $\hat{h}_{i+1}(\cdot)$ using all available estimated hidden states. We concatenate these two algorithms to obtain a novel algorithm, the EKF-KRLS algorithm, summarized in Algorithm 3.

Algorithm 3: EKF-KRLS
Initialize: For $i = 0$, set
  $\hat{x}_0 = E[x_0]$, $\Phi_0 = [k(\hat{x}_0, \cdot)]$
  $P_0 = E[(x_0 - E[x_0])(x_0 - E[x_0])^T]$
  $Q(0) = (\lambda + k(\hat{x}_0, \hat{x}_0))^{-1}$, $a(0) = 1_d^T$
Computation: For $i = 1, 2, \ldots, t$:
For the state filter:
  $\hat{x}_i^- = F_{i-1} \hat{x}_{i-1}$
  $\hat{y}_i = a(i-1)^T \Phi_{i-1}^T \varphi(\hat{x}_i^-)$
  $P_i^- = F_{i-1} P_{i-1} F_{i-1}^T + S_{i-1}$
  $H_i = a(i-1)^T \, \partial \big( \Phi_{i-1}^T \varphi(\hat{x}) \big) / \partial \hat{x} \, \big|_{\hat{x} = \hat{x}_i^-}$
  $K_i = P_i^- H_i^T \left[ H_i P_i^- H_i^T + R_i \right]^{-1}$
  $\hat{x}_i = \hat{x}_i^- + K_i (y_i - \hat{y}_i)$
  $P_i = (I - K_i H_i) P_i^-$
For the KRLS filter:
  $\Phi_i = [\zeta \Phi_{i-1}, k(\hat{x}_i, \cdot)]$
  $h(i) = \Phi_{i-1}^T \varphi(\hat{x}_i)$
  $z(i) = Q(i-1) h(i)$
  $r(i) = \lambda + k(\hat{x}_i, \hat{x}_i) - z(i)^T h(i)$
  $Q(i) = r(i)^{-1} \begin{bmatrix} Q(i-1) r(i) + z(i) z(i)^T & -z(i) \\ -z(i)^T & 1 \end{bmatrix}$
  $e(i) = y_i - a(i-1)^T h(i)$
  $a(i) = \begin{bmatrix} a(i-1) - z(i) r(i)^{-1} e(i)^T \\ r(i)^{-1} e(i)^T \end{bmatrix}$

Unlike in the plain KRLS algorithm, $\Phi_{i-1}$ is scaled by a forgetting factor $\zeta$ ($0 \leq \zeta \leq 1$) to obtain $\Phi_i$. The reason is that we estimate the measurement function $h_i(\cdot)$ from the estimated hidden states $\{\hat{x}_j\}_{j=1}^{i}$, and the hidden states cannot be trusted at the beginning of filtering; the forgetting factor reduces the impact of poor early estimates. In practice, one should also consider sparsification by approximate linear dependency (ALD) [10][11] to restrict the computational complexity, and one can discard some of the earliest estimated states or use only the most recent estimated states in a running window.
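Algorithm 3 leaves the gradient term $H_i$ abstract. For the Gaussian kernel used in Section V it has the closed form $\partial k(x, x_j)/\partial x = k(x, x_j)(x_j - x)/\sigma^2$, so the linearization of the learned measurement function can be evaluated as in the sketch below (our illustration; `a` stands for the $m \times d$ coefficient matrix $a(i-1)$ and the rows of `centers` are the dictionary states $\hat{x}_j$):

```python
import numpy as np

def measurement_jacobian(x, centers, a, sigma):
    """H_i = d/dx [ a(i-1)^T k(x, centers) ] for a Gaussian kernel.
    x: (n,) state; centers: (m, n); a: (m, d).  Returns H of shape (d, n)."""
    diffs = centers - x                                         # x_j - x, (m, n)
    k = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * sigma ** 2))  # kernel values
    grads = k[:, None] * diffs / sigma ** 2                     # dk_j/dx, (m, n)
    return a.T @ grads                                          # (d, n)
```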

V. EXPERIMENTS AND RESULTS

In this paper we give two experiments to evaluate the EKF-KRLS algorithm: first, a vehicle tracking problem with a linear measurement model; second, a vehicle tracking problem with a nonlinear measurement model. The proposed algorithm, the KF/EKF algorithm and the KRLS algorithm are tested and evaluated by tracking a vehicle in a popular open surveillance dataset, PETS2001 [16]. Since our goal is to evaluate the prediction performance of these algorithms, we mark the vehicle to be tracked manually. The figures below show the background and the trajectory of the vehicle.

Fig. 1. Trajectory of the vehicle with background

Fig. 2. Trajectory of the vehicle

In Fig. 1 the red line is the trajectory of the right front light of the vehicle. One can see that the vehicle first travels straight, then backs up, and finally parks. The trajectory is shown alone in Fig. 2. There are 410 frames in the surveillance sequence. Considering that these algorithms have different convergence times, we compare their performances from the 50th frame to the end.

Linear measurement model: In this experiment our observations are the vehicle positions $P(x, y)$ in a Cartesian coordinate system.

Kalman filter: The model is

$x_{i+1} = F_i x_i + w_i$  (23)

$y_i = H_i x_i + v_i$  (24)

where $w_i$ and $v_i$ are the process noise and observation noise with covariances $Q_i$ and $R_i$, respectively. For the vehicle tracking, we choose Ramachandra's model. Each position coordinate of the moving vehicle is assumed to be described by the following equations of motion:

$x_{i+1} = x_i + \dot{x}_i T + \ddot{x}_i T^2 / 2$
$\dot{x}_{i+1} = \dot{x}_i + \ddot{x}_i T$
$\ddot{x}_{i+1} = \ddot{x}_i + a_i$  (25)

where $x_i$, $\dot{x}_i$ and $\ddot{x}_i$ are the vehicle position, velocity and acceleration at frame $i$, $T$ is the sample time, and $a_i$ is the plant noise that perturbs the acceleration and accounts for both maneuvers and other modeling errors. $a_i$ is assumed to be zero-mean, of constant variance $\sigma_a^2$, and uncorrelated with its values at other time intervals. One finds that a larger $\sigma_a^2$ gives better performance when the acceleration changes sharply, while a smaller $\sigma_a^2$ gives better performance when the acceleration changes smoothly. So, for a sequence with rich position dynamics, we have to choose a proper $\sigma_a^2$ to get the best performance over the whole sequence. We scan this parameter and set $\sigma_a^2 = 11$ to obtain the best performance of the Kalman filter; the same $\sigma_a^2$ is used for the x-coordinate and the y-coordinate. The linear measurement equation is written as

$y_i = H x_i + v_i$  (26)

where $y_i$ is the measured position at frame $i$, $v_i$ is the random noise corrupting the measurement at frame $i$, and for each position coordinate of the moving vehicle the measurement matrix is $H = [1\ 0\ 0]$. The covariance of the observation noise is

$R = r I_2$  (27)

where $I_2$ denotes the $2 \times 2$ identity matrix. We set the covariance of the measurement noise to $r = 0.1$.

KRLS: We use the KRLS algorithm to predict the next position based on the previous $N$ positions. We choose the Gaussian kernel, defined as

$k(x, y) = \exp\left( -\frac{\|x - y\|^2}{2\sigma^2} \right)$  (28)

where $\sigma$ is called the kernel size. Here the kernel size is set to 5 through trials, and the forgetting factor $\zeta$ is 0.85. For the 2-D vehicle tracking we actually use $2N$ data points, predicting the coordinates $x$ and $y$ of the next position separately. We set the parameter $N = 6$ to obtain the best performance of the KRLS algorithm.

EKF-KRLS: For the EKF-KRLS algorithm we only need to know the transition matrix; the function $h(\cdot)$ can be learned from the data. We choose the same transition matrix $F$ as the Kalman algorithm to track the vehicle. The EKF-KRLS algorithm learns the function $h(\cdot)$ using the KRLS algorithm with the previously estimated hidden states, while the hidden states themselves are estimated with the previous function $h(\cdot)$. The earliest estimated hidden states are not trustworthy; therefore, we need a forgetting factor $0 < \zeta < 1$ to give the more recent hidden states larger weights. We also use a running window to control the updating of the function $h(\cdot)$, which means that the current $h(\cdot)$ is learned from only the previous $m$ estimated hidden states. We choose these parameters as $\zeta = 0.69$ and $m = 35$. We set the covariance of the process noise to a zero matrix and use the same measurement noise covariance as the Kalman algorithm, $r = 0.1$. We also use the Gaussian kernel, with the kernel size set to 2.
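In matrix form, the per-coordinate model (25) is a constant-acceleration transition over the state $[x, \dot{x}, \ddot{x}]^T$. One plausible way to assemble the 2-D matrices (the block-diagonal stacking is our assumption, since the paper treats each coordinate independently) is:

```python
import numpy as np

def ca_transition(T):
    """Constant-acceleration transition block of eq. (25),
    acting on the per-coordinate state [position, velocity, acceleration]."""
    return np.array([[1.0, T, T ** 2 / 2],
                     [0.0, 1.0, T],
                     [0.0, 0.0, 1.0]])

T = 1.0                                              # sample time (assumed)
F = np.kron(np.eye(2), ca_transition(T))             # 6x6: one block per coordinate
H = np.kron(np.eye(2), np.array([[1.0, 0.0, 0.0]]))  # eq. (26): positions only
```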
All of the above parameters are chosen to obtain the best performance. The performances of the three algorithms are presented in Fig. 3, and the errors of the 1-step prediction are plotted in Fig. 4. The mean square errors (MSE) are summarized in TABLE I.

TABLE I. MSE FOR DIFFERENT ALGORITHMS

Algorithm:  Kalman Filter | KRLS   | EKF-KRLS
MSE:        0.42          | 0.4141 | 0.3742

Fig. 3. Trajectories of the true position and the predictions of the KF, KRLS and EKF-KRLS algorithms

Fig. 4. Comparison of the KF, KRLS and EKF-KRLS algorithms (1-step prediction error, in dB)

From TABLE I, it is obvious that the EKF-KRLS algorithm has the best performance in this experiment when it uses the same transition matrix $F$ as the Kalman filter algorithm. In order to compare the three algorithms statistically, Gaussian noise with a constant variance $\sigma_n^2$ is added to the original data. We run the three algorithms 100 times on the noisy data and compare the means and standard deviations of the MSE. The results, summarized in TABLE II, show that the EKF-KRLS algorithm statistically has the best performance for vehicle tracking with a linear measurement model.

TABLE II. MSE FOR DIFFERENT ALGORITHMS WITH NOISE

                     Kalman Filter   | KRLS            | EKF-KRLS
$\sigma_n^2 = 0.1$:  0.4392 ± 0.0091 | 0.4378 ± 0.0068 | 0.4110 ± 0.0087
$\sigma_n^2 = 0.2$:  0.4741 ± 0.0132 | 0.4627 ± 0.0111 | 0.4453 ± 0.0132

Nonlinear measurement model: In this experiment our observations are the distance $r$ between the vehicle and the origin $P(0, 0)$, and the slope $k$ with respect to the origin. The distance and the slope can be expressed as

$r = \sqrt{x^2 + y^2}$,  (29)

$k = y / x$.  (30)

We use the same state model as before with the nonlinear measurement functions (29) and (30). We first generate the new data from the position data using (29) and (30), then predict the 1-step output of the data as in the first experiment, and finally compare the performances of the three algorithms.

EKF: Because the measurement functions are nonlinear, we use the original EKF algorithm [3] instead of the Kalman filter algorithm. The state transition matrix is the same matrix $F$. The covariance of the process noise and the covariance of the measurement noise are set as $\sigma_a^2 = 24$ and $r = 0.1$, respectively.

KRLS: In this experiment we still use the Gaussian kernel and set the kernel size to 1. The parameter $N$ and the forgetting factor are still 6 and 0.85, respectively.

EKF-KRLS: We choose the same transition matrix $F$ as the EKF algorithm and set the parameters as $\zeta = 0.64$ and $m = 35$. We set the covariance of the process noise to a zero matrix and the same measurement noise covariance $r = 0.1$ as the EKF algorithm. We also choose the Gaussian kernel, with the kernel size set to 1. All parameters above are chosen to obtain the best performances.

Because the ranges of the distance $r$ and the slope $k$ are very different, we compare the performances on them separately. For the distance $r$, the trajectories and errors are plotted in Fig. 5 and Fig. 6.

Fig. 5. Trajectories of the distance r

Fig. 6. Errors of the distance r (in dB)
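The mappings (29)-(30), together with the inverse used later to return predictions to position space (eqs. (31)-(32) below), can be written as a small helper (ours; it assumes $x > 0$, as holds in this scene, since the inverse only recovers the magnitude of $x$):

```python
import numpy as np

def to_range_slope(x, y):
    """Forward map, eqs. (29)-(30); assumes x != 0."""
    r = np.hypot(x, y)               # r = sqrt(x^2 + y^2)
    k = y / x                        # slope with respect to the origin
    return r, k

def to_position(r, k):
    """Inverse map, eqs. (31)-(32); assumes x > 0."""
    x = r / np.sqrt(1.0 + k ** 2)
    return x, k * x
```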

For the slope $k$, the trajectories and errors are plotted in Fig. 7 and Fig. 8.

Fig. 7. Trajectories of the slope k

Fig. 8. Errors of the slope k (in dB)

TABLE III summarizes the prediction performances on the distance $r$ and the slope $k$. The results are the MSE from the 50th frame to the end.

TABLE III. MSE OF DISTANCE r AND SLOPE k

Algorithm:   EKF    | KRLS   | EKF-KRLS
distance r:  0.2218 | 0.3654 | 0.1715
slope k:     0.141  | 0.1    | 0.0

In order to compare the performances more directly and visually, we transform the distance $r$ and the slope $k$ back to the position $P(x, y)$ using the equations below:

$x = \frac{r}{\sqrt{1 + k^2}}$,  (31)

$y = kx$.  (32)

The trajectories and errors are plotted in Fig. 9 and Fig. 10, and TABLE IV summarizes the prediction performances of the three algorithms, giving the MSE between the predicted position and the true position from the 50th frame to the end. One finds that the EKF-KRLS algorithm has the best tracking performance, and its advantage is very obvious.

Fig. 9. Trajectories of the true position and the predictions of the EKF, KRLS, and EKF-KRLS algorithms

Fig. 10. Comparison of the EKF, KRLS, and EKF-KRLS algorithms (1-step prediction error, in dB)

TABLE IV. MSE OF POSITION

Algorithm:  EKF    | KRLS | EKF-KRLS
MSE:        1.3934 | 1.41 | 0.5637

VI. DISCUSSION AND CONCLUSION

The EKF-KRLS algorithm is presented in this paper. We construct the state model in the input space as a linear model; the transition matrix determines the dimensionality and the dynamics of the hidden states. We construct the measurement model in the RKHS, which is a space of functions: a nonlinear function $h(\cdot)$ is expressed as a linear combination of mapped hidden states. In other words, the measurement model is linear in the RKHS, but nonlinear from the input-space point of view. We connect these two models in different spaces. Like the EKF algorithm, we linearize the function $h(\cdot)$ with respect to the hidden states to obtain the measurement matrix $H_i$. However, our algorithm is different from the EKF algorithm: for the EKF algorithm the measurement function $h(\cdot)$ must be known in advance, whereas the EKF-KRLS algorithm does not require this, because its measurement function can be learned directly from the real data.

The goal of the EKF-KRLS algorithm is to estimate the output. The transition matrix is a design parameter, which can be chosen to obtain the best performance. Since the measurement function $h(\cdot)$ is learned from the data, the choice of this transition matrix, which reflects the dynamics of the system, is very important.

The vehicle tracking experiments in Section V show that the EKF-KRLS algorithm obtains a significant advantage when the measurement functions are nonlinear. The reason is as follows. For the linear measurement model, the Kalman filter is optimal and the designed state model is very close to the real model, so the EKF-KRLS algorithm outperforms the Kalman filter only slightly. For the nonlinear measurement model, the EKF is not optimal: it uses a first-order Taylor expansion of the fixed nonlinear measurement function to approximate the nonlinear function, and a small error in the linear state model can be amplified by the approximated nonlinear model. Although the EKF-KRLS algorithm also uses a first-order Taylor expansion of the estimated nonlinear measurement function, that function is not fixed; it is updated at every step. Therefore, the system can choose a better function to compensate for the error in the state model and the measurement model. The adaptivity of the EKF-KRLS algorithm accounts for its obvious advantage over the other algorithms in the second experiment.

Actually, the KRLS algorithm is a special case of the EKF-KRLS algorithm. Adding an input $u_i$ to the state model, we have

$x_{i+1} = F_i x_i + G_i u_i + w_i$  (33)

$y_i = h_i(x_i) + v_i$.  (34)

If we consider the input $u_i$ as zero, we recover the algorithm discussed previously. If we set the matrices $F$ and $G$ to the $m \times m$ shift matrix and the $m \times 1$ selection vector

$F = \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{bmatrix}$  (35)

$G = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$  (36)

then the EKF-KRLS algorithm becomes a KRLS algorithm with process and measurement noise. Since the state model is established in the input space, nothing changes in the algorithm except that $\hat{x}_i^- = F_{i-1} \hat{x}_{i-1}$ is replaced by $\hat{x}_i^- = F_{i-1} \hat{x}_{i-1} + G_{i-1} u_{i-1}$.

In this paper, we apply the EKF-KRLS algorithm to the vehicle tracking problem, which can be modeled with a linear state model and a linear measurement model. Furthermore, this algorithm can also be used to solve problems with a linear state model and a nonlinear measurement model. Even for problems where the measurement model is unknown, the algorithm still works. However, when the transition matrix is unknown, how to choose a proper transition matrix to obtain the best performance is still an open question which needs more research in the future.
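A sketch of the shift-register pair (35)-(36) follows (which end of the window receives the newest input is a convention; the version below, with ones on the superdiagonal of $F$ and the 1 in the last entry of $G$, is our reading):

```python
import numpy as np

def shift_register(m):
    """F (m x m) and G (m x 1) of eqs. (35)-(36): the state holds the last m
    inputs, F shifts the window by one slot and G inserts the newest input."""
    F = np.diag(np.ones(m - 1), k=1)     # ones on the superdiagonal
    G = np.zeros((m, 1))
    G[-1, 0] = 1.0
    return F, G

# With these matrices, x_{i+1} = F x_i + G u_i drops the oldest input and
# appends u_i, so the EKF-KRLS recursion reduces to KRLS with process and
# measurement noise.
F, G = shift_register(6)
```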
REFERENCES

[1] R. E. Kalman, "A new approach to linear filtering and prediction problems," Transactions of the ASME, Ser. D, Journal of Basic Engineering, vol. 82, pp. 35-45, 1960.
[2] A. Sayed, Fundamentals of Adaptive Filtering. New York: Wiley, 2003.
[3] B. Anderson and J. Moore, Optimal Filtering. Prentice-Hall, 1979.
[4] G. Welch and G. Bishop, "An Introduction to the Kalman Filter," UNC-CH Computer Science Technical Report TR 95-041, 1995.
[5] J. K. Uhlmann, "Algorithms for multiple target tracking," American Scientist, vol. 80, no. 2, pp. 128-141, 1992.
[6] S. Haykin, Kalman Filtering and Neural Networks. New York: Wiley, 2001.
[7] S. J. Julier and J. K. Uhlmann, "A New Extension of the Kalman Filter to Nonlinear Systems," Proc. of AeroSense: The 11th Int. Symp. on Aerospace/Defence Sensing, Simulation and Controls, 1997.
[8] E. A. Wan, R. van der Merwe, and A. T. Nelson, "Dual Estimation and the Unscented Transformation," Advances in Neural Information Processing Systems, no. 12, pp. 666-672, MIT Press, 2000.
[9] E. A. Wan and R. van der Merwe, "The Unscented Kalman Filter for Nonlinear Estimation," Proc. of IEEE Symposium 2000 (AS-SPCC), Lake Louise, Alberta, Canada, Oct. 2000.
[10] Y. Engel, S. Mannor, and R. Meir, "The kernel recursive least-squares algorithm," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2275-2285, Aug. 2004.
[11] W. Liu, I. Park, Y. Wang, and J. C. Príncipe, "Extended Kernel Recursive Least Squares Algorithm," IEEE Transactions on Signal Processing, vol. 57, no. 10, pp. 3801-3814, Oct. 2009.
[12] A. Berlinet and C. Thomas-Agnan, Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, 2004.
[13] N. Aronszajn, "Theory of reproducing kernels," Trans. Amer. Math. Soc., vol. 68, pp. 337-404, 1950.
[14] T. Hofmann, B. Schölkopf, and A. J. Smola, "A Review of Kernel Methods in Machine Learning," Technical Report 156, Max-Planck-Institut für biologische Kybernetik, 2006.
[15] C. J. C. Burges, "A tutorial on support vector machines for pattern recognition," Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121-167, 1998.
[16] PETS2001 dataset. Available: http://www.hitech-projects.com/euprojects/cantata/datasets_cantata/dataset.html