
Kalman Filter
Derivation
Examples
Time and Measurement Updates

State Space Model Review

x_{n+1} = F_n x_n + G_n u_n
y_n = H_n x_n + v_n

with initial state covariance \langle x_0, x_0 \rangle = \Pi_0 and noise covariances

\left\langle \begin{bmatrix} u_n \\ v_n \end{bmatrix}, \begin{bmatrix} u_k \\ v_k \end{bmatrix} \right\rangle = \begin{bmatrix} Q_n \delta_{nk} & S_n \delta_{nk} \\ S_n^* \delta_{nk} & R_n \delta_{nk} \end{bmatrix}

Dimensions: F_n \in C^{l \times l}, G_n \in C^{l \times m}, H_n \in C^{p \times l}, x_n \in C^{l \times 1}, u_n \in C^{m \times 1}, y_n \in C^{p \times 1}, v_n \in C^{p \times 1}.

Orthogonality properties of the model:
- \langle u_n, x_k \rangle = \langle v_n, x_k \rangle = 0 for n \ge k
- \langle u_n, y_k \rangle = \langle v_n, y_k \rangle = 0 for n > k
- \langle u_n, y_n \rangle = S_n and \langle v_n, y_n \rangle = R_n
- \langle u_n, x_0 \rangle = 0 iff \langle u_n, x_n \rangle = 0 for all n

Key Linear Estimation Properties

Linearity of linear MMSE estimators: for any two matrices A_1 and A_2,
\widehat{(A_1 x_1 + A_2 x_2)} = A_1 \hat{x}_1 + A_2 \hat{x}_2

The orthogonality condition:
\hat{x}|_{y_1, y_2} = \hat{x}|_{y_1} + \hat{x}|_{y_2} if and only if y_1 \perp y_2 (R_{y_1 y_2} = 0)

State Covariance Recursion

Recall that
\Pi_{n+1} \triangleq \langle x_{n+1}, x_{n+1} \rangle
= \langle F_n x_n + G_n u_n, F_n x_n + G_n u_n \rangle
= F_n \langle x_n, x_n \rangle F_n^* + G_n \langle u_n, u_n \rangle G_n^*
= F_n \Pi_n F_n^* + G_n Q_n G_n^*
where the initial value \Pi_0 is usually specified by the user.

J. McNames Portland State University ECE 9/9 Kalman Filter Ver..
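As a quick numerical illustration (not part of the lecture, which uses MATLAB), the state covariance recursion can be iterated directly; the matrices F, G, Q and the initial Π_0 below are arbitrary illustrative choices for a stable, real-valued model, so the conjugate transpose reduces to a plain transpose.

```python
import numpy as np

# Iterate Pi_{n+1} = F Pi_n F^T + G Q G^T for a stable 2-state model.
# All numeric values here are illustrative, not from the lecture.
F = np.array([[0.9, 1.0],
              [0.0, 0.9]])   # stable: both eigenvalues inside the unit circle
G = np.array([[0.0],
              [1.0]])
Q = np.array([[0.1]])        # process noise covariance
Pi = np.eye(2)               # user-specified initial covariance Pi_0

for _ in range(200):         # iterate toward the steady-state covariance
    Pi = F @ Pi @ F.T + G @ Q @ G.T
```

Because F is stable, the iteration converges to the fixed point of the recursion (the discrete Lyapunov equation), which the assertions below check.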

Innovations

Let us begin by using just the linear process model and the definition of the innovations:
y_n = H_n x_n + v_n
e_n \triangleq y_n - \hat{y}_{n|n-1}
The estimator notation used here is
\hat{y}_{n|k} = the linear MMSE estimator of y_n given \{y_0, \ldots, y_k\}
- Our goal is to come up with a recursive formulation for linear estimators of x_n for n = 0, 1, \ldots
- Our motivation for using the innovations is that they have a diagonal covariance matrix
- Recall that calculating the innovations is equivalent to applying a whitening filter
- This has become a theme for this class: whiten before processing
- Whitening simplifies the subsequent step of estimation

Recursion for Innovations

e_n \triangleq y_n - \hat{y}_{n|n-1}
y_n = H_n x_n + v_n
\hat{y}_{n|n-1} = H_n \hat{x}_{n|n-1} + \hat{v}_{n|n-1} = H_n \hat{x}_{n|n-1}
e_n = y_n - H_n \hat{x}_{n|n-1}
= H_n x_n + v_n - H_n \hat{x}_{n|n-1}
= H_n (x_n - \hat{x}_{n|n-1}) + v_n
= H_n \tilde{x}_{n|n-1} + v_n
where the new notation is defined as
\tilde{x}_{n|n-1} \triangleq x_n - \hat{x}_{n|n-1}
\tilde{y}_{n|n-1} \triangleq y_n - \hat{y}_{n|n-1} = e_n
Note that our goal of finding an expression for the innovations reduces to finding a way to estimate the one-step predictions of the state vector, \hat{x}_{n|n-1}.
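The whitening interpretation can be checked numerically. In this sketch (illustrative covariance matrix, not from the lecture) the causal, Cholesky-based transform of correlated samples produces uncorrelated innovations; with this normalization their covariance is the identity rather than merely diagonal.

```python
import numpy as np

# Correlated samples y with covariance Ryy; e = L^{-1} y is the causal
# whitening transform from the Cholesky factorization Ryy = L L^T.
# The covariance Ryy here is an arbitrary positive definite example.
rng = np.random.default_rng(3)
Ryy = np.array([[2.0, 0.8, 0.3],
                [0.8, 1.5, 0.5],
                [0.3, 0.5, 1.0]])
L = np.linalg.cholesky(Ryy)
Y = L @ rng.standard_normal((3, 100000))  # samples with covariance Ryy
E = np.linalg.solve(L, Y)                 # innovations (whitened samples)
Ree = E @ E.T / Y.shape[1]                # sample innovation covariance
```

The sample covariance of the innovations is close to the identity, confirming that the off-diagonal correlations have been removed.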
State Estimation from the Innovations

\hat{x}_{n+1|n}
= \langle x_{n+1}, col\{y_0, \ldots, y_n\} \rangle \, \langle col\{y_0, \ldots, y_n\}, col\{y_0, \ldots, y_n\} \rangle^{-1} \, col\{y_0, \ldots, y_n\}
= \langle x_{n+1}, col\{e_0, \ldots, e_n\} \rangle \, \langle col\{e_0, \ldots, e_n\}, col\{e_0, \ldots, e_n\} \rangle^{-1} \, col\{e_0, \ldots, e_n\}
= \sum_{k=0}^{n} \langle x_{n+1}, e_k \rangle R_{e,k}^{-1} e_k
where we have defined
R_{e,k} \triangleq \langle e_k, e_k \rangle = E[e_k e_k^*]
- The first equality is just the solution to the normal equations
- The second equality follows from the definition of the innovations and the equivalence of the linear spaces spanned by the observations and the innovations:
  L\{y_0, \ldots, y_n\} = L\{e_0, \ldots, e_n\}
- The third equality follows from the orthogonality property of the innovations

This expression assumes the innovation covariance matrix is invertible (positive definite): R_{e,k} > 0
- This is a nondegeneracy assumption on the process \{y_0, \ldots, y_n\}
- It essentially means that no variable y_n can be exactly estimated by a linear combination of earlier observations
- This does not require that R_n = \langle v_n, v_n \rangle > 0
- It is quite possible that R_{e,k} > 0 even if R_n = 0
- However, if R_n > 0, we know for certain that R_{e,k} > 0
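The equivalence of the three expressions can be verified numerically: projecting onto the raw observations via the normal equations and projecting term by term onto Gram-Schmidt-orthogonalized observations (a stand-in for the innovations) give the same estimate. The data model below is an illustrative construction, not from the lecture.

```python
import numpy as np

# Estimate a scalar x from N correlated scalar observations, two ways.
rng = np.random.default_rng(0)
N, M = 5, 2000                                # observations, sample size
x = rng.standard_normal(M)                    # quantity to estimate
Y = 0.5 * x + rng.standard_normal((N, M))     # correlated observations

# 1) Batch linear estimate via the normal equations (sample covariances)
Ryy = Y @ Y.T / M
rxy = Y @ x / M
xhat_batch = rxy @ np.linalg.solve(Ryy, Y)

# 2) Same estimate as a sum over orthogonalized observations:
#    Gram-Schmidt plays the role of the innovations here.
E = Y.copy()
for k in range(N):
    for j in range(k):
        E[k] -= (E[k] @ E[j]) / (E[j] @ E[j]) * E[j]
xhat_innov = sum((x @ E[k]) / (E[k] @ E[k]) * E[k] for k in range(N))
```

Both computations project x onto the same linear span, so they agree, and the residual is orthogonal to every observation.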

Circular Reasoning?

In order to calculate the innovations
e_n \triangleq y_n - \hat{y}_{n|n-1} = y_n - H_n \hat{x}_{n|n-1}
we need to estimate the one-step predictions of the state. Yet in order to obtain these estimates,
\hat{x}_{n+1|n} = \sum_{k=0}^{n} \langle x_{n+1}, e_k \rangle R_{e,k}^{-1} e_k
we need the innovations!
- Luckily, we only need the previous innovations to estimate the one-step state predictions
- This suggests a recursive solution

Recursive State Estimation

\hat{x}_{n+1|n} = \sum_{k=0}^{n} \langle x_{n+1}, e_k \rangle R_{e,k}^{-1} e_k
= \left( \sum_{k=0}^{n-1} \langle x_{n+1}, e_k \rangle R_{e,k}^{-1} e_k \right) + \langle x_{n+1}, e_n \rangle R_{e,n}^{-1} e_n
= \hat{x}_{n+1|n-1} + \langle x_{n+1}, e_n \rangle R_{e,n}^{-1} \left( y_n - H_n \hat{x}_{n|n-1} \right)
This is almost what we need. However, there are still some missing pieces:
- Can we express \hat{x}_{n+1|n-1} in terms of what is known at time n: \hat{x}_{n|n-1} and e_n?
- How do we obtain an expression for the error covariance matrix in terms of the model parameters?
The only way to make any further progress is to use the state update equation that relates x_{n+1} to x_n.

State Prediction

x_{n+1} = F_n x_n + G_n u_n
\hat{x}_{n+1|n-1} = F_n \hat{x}_{n|n-1} + G_n \hat{u}_{n|n-1} = F_n \hat{x}_{n|n-1}
Now given
\hat{x}_{n+1|n} = \hat{x}_{n+1|n-1} + \langle x_{n+1}, e_n \rangle R_{e,n}^{-1} \left( y_n - H_n \hat{x}_{n|n-1} \right)
we can set up a proper recursion, initialized with e_0 = y_0:
e_n = y_n - H_n \hat{x}_{n|n-1}
\hat{x}_{n+1|n} = F_n \hat{x}_{n|n-1} + K_{p,n} e_n
where we have defined
K_{p,n} \triangleq \langle x_{n+1}, e_n \rangle R_{e,n}^{-1}
- The subscript p indicates that this gain is used to update a predicted estimator of the state, \hat{x}_{n+1|n}
- This is one of the Kalman gains that we will use

Defining the State Error Covariance

\tilde{x}_{n|n-1} \triangleq x_n - \hat{x}_{n|n-1}
P_{n|n-1} \triangleq \langle \tilde{x}_{n|n-1}, \tilde{x}_{n|n-1} \rangle = E\left[ \tilde{x}_{n|n-1} \tilde{x}_{n|n-1}^* \right]
- We still need to solve for K_{p,n} and R_{e,n} in terms of the known state space model parameters
- To do so it will be useful to introduce yet another covariance matrix
- P_{n|n-1} is the state error covariance matrix
- It would be great to know this anyway: it gives us a means of knowing how accurate our estimates are
- If \{x_0, u_0, \ldots, u_n, v_0, \ldots, v_n\} are jointly Gaussian (central limit theorem), we could construct exact confidence intervals on our estimates!

Solving for the Innovations Covariance

e_n = y_n - H_n \hat{x}_{n|n-1} = H_n x_n + v_n - H_n \hat{x}_{n|n-1} = H_n \tilde{x}_{n|n-1} + v_n
Then it follows immediately that
R_{e,n} = \langle e_n, e_n \rangle
= \langle H_n \tilde{x}_{n|n-1} + v_n, H_n \tilde{x}_{n|n-1} + v_n \rangle
= H_n \langle \tilde{x}_{n|n-1}, \tilde{x}_{n|n-1} \rangle H_n^* + \langle v_n, v_n \rangle
= H_n P_{n|n-1} H_n^* + R_n
Thus we have replaced the problem of needing an expression for R_{e,n} with the problem of needing an expression for P_{n|n-1}.

Solving for the Kalman Prediction Gain

K_{p,n} \triangleq \langle x_{n+1}, e_n \rangle R_{e,n}^{-1}
We have an expression for R_{e,n}, but we still need an expression for the cross-covariance in terms of the state space model parameters:
\langle x_{n+1}, e_n \rangle = \langle F_n x_n + G_n u_n, e_n \rangle = F_n \langle x_n, e_n \rangle + G_n \langle u_n, e_n \rangle
\langle x_n, e_n \rangle = \langle x_n, H_n \tilde{x}_{n|n-1} + v_n \rangle
= \langle x_n, \tilde{x}_{n|n-1} \rangle H_n^* + \langle x_n, v_n \rangle
= \langle x_n, \tilde{x}_{n|n-1} \rangle H_n^*
= \langle \hat{x}_{n|n-1} + \tilde{x}_{n|n-1}, \tilde{x}_{n|n-1} \rangle H_n^*
= \langle \hat{x}_{n|n-1}, \tilde{x}_{n|n-1} \rangle H_n^* + \langle \tilde{x}_{n|n-1}, \tilde{x}_{n|n-1} \rangle H_n^*
= \langle \tilde{x}_{n|n-1}, \tilde{x}_{n|n-1} \rangle H_n^*
= P_{n|n-1} H_n^*

Solving for the Kalman Prediction Gain, Continued

We still need to solve for the second term:
\langle u_n, e_n \rangle = \langle u_n, H_n \tilde{x}_{n|n-1} + v_n \rangle
= \langle u_n, \tilde{x}_{n|n-1} \rangle H_n^* + \langle u_n, v_n \rangle
= \langle u_n, x_n - \hat{x}_{n|n-1} \rangle H_n^* + \langle u_n, v_n \rangle
= \langle u_n, v_n \rangle
= S_n
Thus,
K_{p,n} = \langle x_{n+1}, e_n \rangle R_{e,n}^{-1} = \left( F_n P_{n|n-1} H_n^* + G_n S_n \right) R_{e,n}^{-1}
The only thing we still need is an expression for P_{n|n-1}.

Solving for the State Error Covariance

Let us find a recursive expression for the residuals \tilde{x}_{n|n-1} that we can then use to solve for P_{n|n-1}:
\tilde{x}_{n+1|n} = x_{n+1} - \hat{x}_{n+1|n}
= (F_n x_n + G_n u_n) - \left( F_n \hat{x}_{n|n-1} + K_{p,n} e_n \right)
= F_n \left( x_n - \hat{x}_{n|n-1} \right) + G_n u_n - K_{p,n} \left( H_n \tilde{x}_{n|n-1} + v_n \right)
= (F_n - K_{p,n} H_n) \tilde{x}_{n|n-1} + \begin{bmatrix} G_n & -K_{p,n} \end{bmatrix} \begin{bmatrix} u_n \\ v_n \end{bmatrix}
= F_{p,n} \tilde{x}_{n|n-1} + \begin{bmatrix} G_n & -K_{p,n} \end{bmatrix} \begin{bmatrix} u_n \\ v_n \end{bmatrix}
Then
P_{n+1|n} \triangleq \langle \tilde{x}_{n+1|n}, \tilde{x}_{n+1|n} \rangle
= F_{p,n} P_{n|n-1} F_{p,n}^* + \begin{bmatrix} G_n & -K_{p,n} \end{bmatrix} \begin{bmatrix} Q_n & S_n \\ S_n^* & R_n \end{bmatrix} \begin{bmatrix} G_n^* \\ -K_{p,n}^* \end{bmatrix}

Solving for the State Error Covariance, Continued

P_{n+1|n} = F_{p,n} P_{n|n-1} F_{p,n}^* + \begin{bmatrix} G_n & -K_{p,n} \end{bmatrix} \begin{bmatrix} Q_n & S_n \\ S_n^* & R_n \end{bmatrix} \begin{bmatrix} G_n^* \\ -K_{p,n}^* \end{bmatrix}
= F_{p,n} P_{n|n-1} F_{p,n}^* + \begin{bmatrix} G_n & -K_{p,n} \end{bmatrix} \begin{bmatrix} Q_n G_n^* - S_n K_{p,n}^* \\ S_n^* G_n^* - R_n K_{p,n}^* \end{bmatrix}
= (F_n - K_{p,n} H_n) P_{n|n-1} (F_n - K_{p,n} H_n)^* + G_n Q_n G_n^* - G_n S_n K_{p,n}^* - K_{p,n} S_n^* G_n^* + K_{p,n} R_n K_{p,n}^*
= F_n P_{n|n-1} F_n^* - K_{p,n} H_n P_{n|n-1} F_n^* - F_n P_{n|n-1} H_n^* K_{p,n}^* + K_{p,n} H_n P_{n|n-1} H_n^* K_{p,n}^*
  + G_n Q_n G_n^* - G_n S_n K_{p,n}^* - K_{p,n} S_n^* G_n^* + K_{p,n} R_n K_{p,n}^*

Grouping terms,
P_{n+1|n} = F_n P_{n|n-1} F_n^* + G_n Q_n G_n^*
+ K_{p,n} \left( H_n P_{n|n-1} H_n^* + R_n \right) K_{p,n}^*
- K_{p,n} \left( H_n P_{n|n-1} F_n^* + S_n^* G_n^* \right)
- \left( F_n P_{n|n-1} H_n^* + G_n S_n \right) K_{p,n}^*
Since
R_{e,n} = H_n P_{n|n-1} H_n^* + R_n
K_{p,n} = \left( F_n P_{n|n-1} H_n^* + G_n S_n \right) R_{e,n}^{-1}
this simplifies to
P_{n+1|n} = F_n P_{n|n-1} F_n^* + G_n Q_n G_n^* + K_{p,n} R_{e,n} K_{p,n}^* - K_{p,n} R_{e,n} K_{p,n}^* - K_{p,n} R_{e,n} K_{p,n}^*
= F_n P_{n|n-1} F_n^* + G_n Q_n G_n^* - K_{p,n} R_{e,n} K_{p,n}^*
The only remaining question is how to start the recursion:
P_{0|-1} = \langle x_0 - \hat{x}_0, x_0 - \hat{x}_0 \rangle = \langle x_0, x_0 \rangle = \Pi_0

Covariance Form of the Kalman Filter

The user provides the state space model parameters.
Initialization:
\hat{x}_{0|-1} = 0
P_{0|-1} = \Pi_0
Recursions for n = 0, 1, \ldots:
e_n = y_n - H_n \hat{x}_{n|n-1}
R_{e,n} = H_n P_{n|n-1} H_n^* + R_n
K_{p,n} = \left( F_n P_{n|n-1} H_n^* + G_n S_n \right) R_{e,n}^{-1}
\hat{x}_{n+1|n} = F_n \hat{x}_{n|n-1} + K_{p,n} e_n
P_{n+1|n} = F_n P_{n|n-1} F_n^* + G_n Q_n G_n^* - K_{p,n} R_{e,n} K_{p,n}^*
= F_n P_{n|n-1} F_n^* + G_n Q_n G_n^* - K_{p,n} \left( H_n P_{n|n-1} F_n^* + S_n^* G_n^* \right)

Kalman Filter Features

- Requires only O(l^3) operations per update if m \le l and p \le l
- Obtaining the full set of state estimates requires a total of O(N l^3) operations, two orders of magnitude less than solving the normal equations directly!
- Gives the error covariance matrices, so we easily obtain the standard deviation of the error and approximate confidence intervals
- Estimates the state at all times \{0, \ldots, N\}
- Includes a whitening filter as part of the computations
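The covariance-form recursions above can be sketched compactly in Python (the lecture's own implementation, in MATLAB, follows later). This sketch assumes real-valued signals, a time-invariant model, and S_n = 0; the scalar random-walk model and all numeric values are illustrative.

```python
import numpy as np

def kalman_predictor(y, F, G, H, Q, R, Pi0):
    """One-step Kalman predictor: returns xhat_{n|n-1} for n = 0..N-1."""
    l = F.shape[0]
    xh = np.zeros(l)                # xhat_{0|-1} = 0
    P = Pi0.copy()                  # P_{0|-1} = Pi_0
    Xh = np.zeros((len(y), l))
    for n, yn in enumerate(y):
        Xh[n] = xh
        e = yn - H @ xh                          # innovation
        Re = H @ P @ H.T + R                     # innovation covariance
        Kp = (F @ P @ H.T) @ np.linalg.inv(Re)   # prediction gain (S_n = 0)
        xh = F @ xh + Kp @ e                     # state prediction update
        P = F @ P @ F.T + G @ Q @ G.T - Kp @ Re @ Kp.T  # Riccati update
    return Xh

# Toy scalar random walk observed in heavy noise (illustrative values)
rng = np.random.default_rng(1)
F = np.array([[1.0]]); G = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
x = np.cumsum(0.1 * rng.standard_normal(300))        # true state
y = (x + rng.standard_normal(300)).reshape(-1, 1)    # noisy measurements
Xh = kalman_predictor(y, F, G, H, Q, R, np.eye(1))
```

With the process noise much smaller than the measurement noise, the predicted state tracks the true state far more closely than the raw measurements do.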

Example 1: Accelerometer

Suppose we wish to have a random walk model of the position of an object, and we have recordings from an accelerometer with which to estimate its position. Suppose the accelerometer is sampled at a rate of f_s Hz. Our statistical model of the process is then
\dot{p}(t) = u(t)
y(t) = \frac{d^2 p(t)}{dt^2} + v(t)
We approximate the derivatives in discrete time with
\frac{dx(t)}{dt}\Big|_{t=nT_s} \approx \frac{x(n) - x(n-1)}{T_s}
\frac{d^2 x(t)}{dt^2}\Big|_{t=nT_s} \approx \frac{x(n) - 2x(n-1) + x(n-2)}{T_s^2}

Example 1: Discrete-Time State Space Model

The discrete-time state space model of the process then becomes
\begin{bmatrix} p(n+1) \\ p(n) \\ p(n-1) \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} p(n) \\ p(n-1) \\ p(n-2) \end{bmatrix}
+ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} u(n)

y(n) = \frac{1}{T_s^2} \begin{bmatrix} 1 & -2 & 1 \end{bmatrix}
\begin{bmatrix} p(n) \\ p(n-1) \\ p(n-2) \end{bmatrix} + v(n)

which is in our standard form
x_{n+1} = F_n x_n + G_n u_n
y_n = H_n x_n + v_n

[Figure: measured acceleration and the actual, estimated, and naive position estimates for one setting of the parameters f, q, and r.]
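A small sanity check of this model in Python (the sample rate did not survive transcription, so the T_s value here is an assumption): applied to a quadratic position trajectory, the second-difference output matrix H recovers a constant acceleration exactly.

```python
import numpy as np

Ts = 0.01                                  # assumed sampling interval (s)
F = np.array([[1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])            # random-walk position update
G = np.array([[1.0], [0.0], [0.0]])
H = np.array([[1.0, -2.0, 1.0]]) / Ts**2   # second difference over Ts^2

# For p(t) = 0.5*a*t^2 the second difference recovers the acceleration a
a = 9.81
n = 10
p = lambda k: 0.5 * a * (k * Ts) ** 2
x = np.array([p(n), p(n - 1), p(n - 2)])   # state [p(n), p(n-1), p(n-2)]
acc = (H @ x)[0]
```

The second central difference of a quadratic is exact, so `acc` equals `a` up to floating-point rounding.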

[Figure: additional plots of the actual, estimated, and naive position estimates for other settings of f, q, and r.]

[Figure: actual, estimated, and naive position estimates for f = .99.]

Example 1: MATLAB Code

clear all; close all;

% Parameters
fs  = ;        % Sample rate (Hz)
T   = ;        % Duration of recording (seconds)
pnv = .;       % Process noise variance (standard deviation of step size per second)
mnv = ;        % Measurement noise variance (continuous time)
fds = [.99 .]; % Forgetting factors

% Preprocessing
Ts = 1/fs;        % Sampling interval (s)
N  = round(fs*T); % No. samples

for c = 1:length(fds),
    fd = fds(c);

    % State Model Parameters
    F = [fd 0 0; 1 0 0; 0 1 0];
    G = [1; 0; 0];
    H = [1, -2, 1]/Ts.^2;
    Q = Ts*pnv;
    R = mnv;
    S = 0;

    % Process Synthesis (simulation)
    x  = zeros(3,1);
    p  = zeros(N,1);              % Actual position
    y  = zeros(N,1);              % Measured acceleration
    mn = sqrt(mnv)*randn(N,1);    % Measurement noise
    pn = sqrt(Ts*pnv)*randn(N,1); % Process noise
    for n = 1:N,
        p(n) = x(1);              % Actual position
        y(n) = H*x + mn(n);       % Measured acceleration
        x    = F*x + G*pn(n);     % Update the state vector
    end

    % Estimate the Position with the KF
    xh  = zeros(3,1);
    ph  = zeros(N,1);             % Estimated position
    phv = zeros(N,1);             % Estimated error variance
    P   = e-*eye(3);
    phv(1) = P(1,1);
    for n = 1:N-1,
        % Calculations for time n
        yh(n) = H*xh;                        % Predicted value of the signal
        e(n)  = y(n) - yh(n);                % Calculate the innovation at time n
        Re(n) = R + H*P*H';                  % Update innovation covariance
        Kp    = (F*P*H' + G*S)*inv(Re(n));   % Kalman filter (prediction) coefficients

        % Calculations for time n+1
        xh = F*xh + Kp*e(n);                 % Predicted state at time n+1
        P  = F*P*F' + G*Q*G' - Kp*Re(n)*Kp'; % Calculate state error covariance matrix
        if eig(P)<0,
            warning('Error covariance matrix is negative definite.');
        end

        % Save Info
        ph(n+1)  = xh(1);
        phv(n+1) = P(1,1);
    end
    ph2 = Ts^2*cumsum(cumsum(y)); % Naive estimate: double integration of y

    % Plot the Position and Confidence Bands
    figure;
    FigureSet(1,'LTX');
    k = (1:N).';
    t = (k-.)/fs;
    z = ones(N,1);
    phs = phv.^(1/2);
    ucb = norminv(.97,ph,phs); % Upper confidence band
    lcb = norminv(.,ph,phs);   % Lower confidence band
    h = patch([t;flipud(t)],[ucb;flipud(lcb)],'k');
    set(h,'LineStyle','None');
    set(h,'FaceColor',[.9 .9 .]);
    hold on;
    h = plot(t,p,z,'g',t,ph,z,'b',t,ph2,z,'r');
    set(h,'LineWidth',.8);
    hold off;
    view(,9);
    box off;
    xlim([0 T]);
    ylim([min([lcb;p;ph]) max([ucb;p;ph])]);
    AxisLines;

    ylabel('Position (m)');
    xlabel('');
    title(sprintf('$f$:%.f \\hspace{em} $q$:%.f \\hspace{em} $r$:%.f',fd,mnv,pnv));
    legend(h,'Actual Position','Estimated Position','Naive Estimate');
    print(sprintf('AccelerometerEstimate%d-%d',c,round(fd*)),'-depsc');
    print('Accelerometers','-depsc');

    % Plot the Position and Acceleration
    figure;
    FigureSet(1,'LTX');
    k = 1:N;
    t = (k-.)/fs;
    subplot(2,1,1);
    h = plot(t,p,'g');
    set(h,'LineWidth',.);
    box off;
    ylabel('Position (m)');
    title(sprintf('$f$:%.f \\hspace{em} $q$:%.f \\hspace{em} $r$:%.f',fd,mnv,pnv));
    xlim([0 T]);
    ylim([min(p) max(p)]);
    AxisLines;
    subplot(2,1,2);
    k = 1:N-1;
    h = plot(t,y,'b');
    set(h,'LineWidth',.);
    box off;
    xlabel('');
    ylabel('Measured Acceleration (m/s$^2$)');
    xlim([0 T]);
    ylim([min(y) max(y)]);
    AxisLines;
end

Example 2: AR Modeling of an ARMA Process

Use the Kalman filter to build an AR model of the ARMA process used in examples throughout ECE 8/8. Use a random walk state space model for the parameter vector with various measurement/process noise ratios:
Q_n = qI
R_n = rI
\lambda = q/r

[Figure: Example 2 pole-zero map of the ARMA process.]
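The random-walk coefficient model in this example can be sketched in Python (the lecture's implementation is the MATLAB function given below): the state is the AR coefficient vector, the output matrix H_n holds the past observations, and F = I, G = I, S = 0. The true coefficients and the variances q and r here are illustrative choices, not the lecture's values.

```python
import numpy as np

# Kalman-filter tracking of AR(2) coefficients with a random-walk state.
rng = np.random.default_rng(2)
a_true = np.array([1.5, -0.7])         # stable AR(2) coefficients (assumed)
N = 4000
y = np.zeros(N)
for n in range(2, N):                  # synthesize the AR(2) process
    y[n] = a_true[0] * y[n - 1] + a_true[1] * y[n - 2] + rng.standard_normal()

q, r = 1e-6, 1.0                       # process/measurement noise variances
xh = np.zeros(2)                       # coefficient estimate (the state)
P = np.eye(2)                          # state error covariance
for n in range(2, N):
    h = np.array([y[n - 1], y[n - 2]]) # time-varying output "matrix" H_n
    e = y[n] - h @ xh                  # innovation
    Re = h @ P @ h + r                 # innovation variance (scalar)
    Kp = P @ h / Re                    # prediction gain (F = I, S = 0)
    xh = xh + Kp * e
    P = P + q * np.eye(2) - np.outer(Kp, Kp) * Re
```

A small q makes the filter behave like a long-memory least squares estimator, so after a few thousand samples the coefficient estimates settle near the true values.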

[Figure: Example 2 process output, one realization plotted against sample index.]
[Figure: Example 2 true PSD on linear and logarithmic scales versus frequency (cycles/sample).]
[Figure: Example 2 nonparametric spectrogram of the process.]
[Figure: Example 2 estimated PSD for the smallest process noise variance, q = e-7.]

[Figure: Example 2 estimated PSDs for four intermediate settings of the process noise variance q.]

[Figure: Example 2 estimated PSD for the largest setting of the process noise variance, q = e+.]

Example 2: MATLAB Code

clear; close all;

% User-Specified Parameters
fs = ; % Sample rate (Hz)
N  = ; % Number of observations from the process
Np = ; % Number of samples to throw away to account for transient
b = poly([-.8, .97*exp(j*pi/), .97*exp(-j*pi/), ...
          .97*exp(j*pi/), .97*exp(-j*pi/)]);   % Numerator coefficients
a = poly([.8, .9*exp(j*.*pi/), .9*exp(-j*.*pi/), ...
          .9*exp(j*.*pi/), .9*exp(-j*.*pi/)]); % Denominator coefficients
b = b*sum(a)/sum(b); % Scale DC gain to 1
l = length(a)-1;     % Order of the system
m = l;               % Dimension of the state/process noise
p = 1;               % Dimension of the observations (y)

% Plot the Pole-Zero Map
figure;
FigureSet(1,'LTX');
h = Circle;
z = roots(b);
p = roots(a);
hold on;
h = plot(real(z),imag(z),'bo',real(p),imag(p),'rx');
hold off;
axis square;
xlim([-. .]);
ylim([-. .]);
AxisLines;
box off;
print -depsc ProcessPZMap;

% Plot a Realization of the Process
n = ;
w = randn(Np+N,1);
x = filter(b,a,w);   % System with known PSD
nx = length(x);
y = x(nx-N+1:nx);    % Eliminate start-up transient (make stationary)
y = y;               % Make unit variance
figure;
FigureSet(1,'LTX');
k = 0:N-1;
h = stem(k,y);
set(h,'Marker','.');
set(h,'LineWidth',.);
box off;
ylabel('');
xlabel('Sample Index');
title(sprintf('Length:%d',n));
xlim([0 n]);
ylim([min(y) max(y)]);
print -depsc ProcessOutput;

% Plot True PSD
figure;
FigureSet(1,'LTX');
[R,w] = freqz(b,a,2^);
R = abs(R).^2;
f = w/(2*pi);
subplot(2,1,1);
h = plot(f,R,'r');
set(h,'LineWidth',.);
box off;
ylabel('PSD');
title('True PSD');
xlim([0 .5]);
ylim([0 max(R)*.]);
subplot(2,1,2);
h = semilogy(f,R,'r');
set(h,'LineWidth',.);
box off;
ylabel('PSD');
xlim([0 .5]);
ylim([.*min(R) max(R)*]);
xlabel('Frequency (cycles/sample)');
print -depsc ProcessPSD;

% Spectrogram
NonparametricSpectrogram(y,fs,,[],2^9,);
FigureSet(1,'LTX');
print('KFARSpectrogram','-depsc');

% Kalman Filter Estimates
q = [e-7 e- e- e- e- e];
for c = 1:length(q),
    figure; clf;

    KalmanFilterAutoregressive(y,fs,,[q(c) var(y) ]);
    close;
    FigureSet(1,'LTX');
    print(sprintf('KFARQ%d',round(log(q(c)))),'-depsc');
end

Example 2: MATLAB Code, Continued

function [A,yh] = KalmanFilterAutoregressive(y,fsa,na,vra,fda,sfa,pfa);
%KalmanFilterAutoregressive: Adaptive autoregressive model
%
%   [A] = KalmanFilterAutoregressive(y,fs,n,vr,fd,sf,pf);
%
%   y    Input signal.
%   fs   Sample rate (Hz). Default = Hz.
%   n    Model order. Default = .
%   vr   Vector of process, measurement, and initial state variances.
%        Default = [.*var(y) var(y) ].
%   fd   State transition matrix diagonal. Default = ..
%   sf   Smoothing flag: =none (default), =smooth.
%   pf   Plot flag: =none (default), =screen.
%
%   A    Matrix containing the parameters versus time.
%   yh   Predicted estimates of the output.
%
%   Uses a random walk state update model with constant-diagonal
%   covariance matrices for the initial state, measurement noise,
%   and process noise. This implementation uses the causal version
%   of the Kalman filter. It is not robust or computationally
%   efficient. The input signal must be scalar (a vector).
%
%   When smoothing is applied, uses the Bryson-Frazier formulas as
%   given in Kailath et al.
%
%   Example: Generate the parametric spectrogram of an intracranial
%   pressure signal using a Blackman-Harris window that is s in
%   duration.
%
%      load ICP.mat;
%      icpd = decimate(icp,);
%      KalmanFilterAutoregressive(icpd,fs/);
%
%   T. Kailath, A. H. Sayed, B. Hassibi, "Linear Estimation,"
%   Prentice Hall, .
%
%   Version . JM
%
%   See also Lowpass, ParametricSpectrogram, and KernelFilter.

% Error Checking
if nargin<1,
    help KalmanFilterAutoregressive;
    return;
end

% Author-Specified Parameters
Nf = 2^9; % Zero padding to use for PSD estimation
Nt = ;    % No. times to evaluate the parametric spectrogram

% Process Function Arguments
fs = ;
if exist('fsa','var') && ~isempty(fsa),
    fs = fsa;
end
l = ; % Number of parameters (the letter l, not the number 1)
if exist('na','var') && ~isempty(na),
    l = na;
end
q = .*var(y); % Process noise variance
r = var(y);   % Measurement noise variance
p = e-;       % Initial state variance
if exist('vra','var') && ~isempty(vra),
    q = vra(1);
    r = vra(2);
    p = vra(3);
end
fd = ; % State transition matrix diagonal
if exist('fda','var') && ~isempty(fda),
    fd = fda;
end
sf = ; % Smoothing flag. Default = no smoothing.
if exist('sfa') && ~isempty(sfa),
    sf = sfa;
end
pf = ; % Default - no plotting
if nargout==0, % Plot if no output arguments
    pf = 1;
end
if exist('pfa') && ~isempty(pfa),
    pf = pfa;
end

% Preprocessing
y  = y(:);
ny = length(y); % No. samples
m  = l;         % Dimension of the state/process noise = dimension of the state
p  = 1;         % Dimension of the observations (y)
F  = fd;        % State transition matrix (F)
G  = 1;         % Process noise matrix

% Variable Initialization
Q  = q*eye(m,m);    % Covariance matrix for state/process noise
R  = r*eye(p,p);    % Covariance matrix for observation noise
S  = zeros(m,p);    % Cross-covariance matrix between process and observation noise
P  = p*eye(l,l);    % Initial state covariance matrix
H  = zeros(p,l);    % Initial value of H
xh = zeros(l,1);    % Initial value of estimated state
X  = zeros(l,ny);   % Estimates of the state at all times
K  = zeros(l,ny);   % Kalman gains
Px = zeros(l,l,ny); % State error covariances
yh = zeros(ny,1);   % Predicted outputs
e  = zeros(ny,1);   % Innovations

% Forward Recursions
for n = 1:ny,
    % Calculations for time n
    id = n - (1:l);       % Indices of observed signal to stuff in H(n)
    iv = find(id>0);      % Indices that are positive
    H  = zeros(1,l);
    H(iv) = y(id(iv));    % Update output matrix
    yh(n) = H*xh;         % Predicted value of the signal
    e(n)  = y(n) - yh(n); % Calculate the innovation at time n
    Re(n) = R + H*P*H';   % Update innovation covariance
    Kp = (F*P*H' + G*S)*inv(Re(n)); % Kalman filter (prediction) coefficients

    % Calculations for time n+1
    xh = F*xh + Kp*e(n);  % Predicted state at time n+1
    P  = F*P*F' + G*Q*G' - Kp*Re(n)*Kp'; % Calculate state error covariance matrix
    if eig(P)<0,
        warning('Error covariance matrix is negative definite.');
    end

    % Save Info
    X(:,n)    = xh; % Store estimate at time n+1
    K(:,n)    = Kp;
    Px(:,:,n) = P;  % Store the state error covariance
end

% Backward Recursions (Bryson-Frazier Formulas)
if sf==1,
    la = zeros(l,1);
    for n = ny:-1:1,
        % Calculations for time n
        id = (n-1) - (0:l-1); % Indices of the observed signal to stuff in H(n)
        iv = find(id>0);
        H  = zeros(1,l);
        H(iv) = y(id(iv));    % Calculate the output matrix H
        Pp = Px(:,:,n);       % Recall the predicted state error covariance
        Fp = F - K(:,n)*H;
        xh = X(:,n) + Pp*Fp'*la;
        la = Fp'*la + H'*inv(Re(n))*e(n);

        % Save Info
        X(:,n) = xh;   % Store estimate at time n
        yh(n)  = H*xh; % Store predicted estimate at time n
    end
end

% Post-processing
A = X;

% Plotting
if pf>=1,
    if pf~=0,
        figure;
        FigureSet(1);
        colormap(jet());
    else
        figure(1);
    end
    Pf = zeros(l,Nt);  % Frequencies of the poles
    Ry = zeros(Nf,Nt); % Memory allocation for spectral estimate
    k  = round(linspace(1,ny,Nt)); % Sample indices at which to evaluate the spectral estimate
    for c = 1:Nt,
        Ry(:,c) = abs(freqz(1,[1;-A(:,k(c))],Nf,fs));
        rts = roots([1;-A(:,k(c))]);          % Obtain the roots of the polynomial
        Pf(:,c) = sort(angle(rts)*fs/(2*pi)); % Store the sorted frequencies
    end
    t  = (k-.)/fs; % Times of estimate
    tl = ;         % Default label of time axis
    if t(end)>,
        t  = t/;
        tl = ;
    end
    f = (0:Nf-1)*fs/(2*Nf); % Frequencies of estimate

    % Spectrogram
    ha = axes('Position',[...8.9]);
    s  = reshape(abs(Ry),Nf*Nt,1);
    p  = [0 prctile(s,97)];
    imagesc(t,f,Ry,p);
    xlim([0 t(end)]);
    ylim([0 f(end)]);
    set(ha,'YAxisLocation','Right');
    set(ha,'XAxisLocation','Top');
    set(ha,'YDir','normal');
    AxisSet;
    title(sprintf('$\\ell$=%d $q$=%.g $r$=%.g $\\rho_$=%.g $f_d$=%.g',l,q,r,p,fd));

    % Colorbar
    ha = axes('Position',[....9]);
    colorbar(ha);
    set(ha,'Box','Off');
    set(ha,'YTick',[]);

    % Power Spectral Density
    ha  = axes('Position',[.8...9]);
    psd = mean((Ry.^2).');

    h = plot(psd,f,'r'); %,[Smin Smax]);
    set(h,'LineWidth',.8);
    ylim([0 f(end)]);
    xlim([0 .*max(psd)]);
    ylabel('');
    set(ha,'XTick',[]);

    % Signal
    ha = axes('Position',[...8.]);
    h  = plot(t,y(k));
    set(h,'LineWidth',.8);
    ymin = min(y);
    ymax = max(y);
    yrng = ymax-ymin;
    ymin = ymin - .*yrng;
    ymax = ymax + .*yrng;
    xlim([0 t(end)]);
    ylim([ymin ymax]);
    xlabel(tl);
    ylabel('');
    axes(ha);

    % Spectrogram with Poles
    % rg = prctile(reshape(Ry,prod(size(Ry)),1),[ 98]);
    % h  = imagesc(t,f,Ry,rg);
    % hold on;
    % h = plot(t,Pf,'k.');
    % set(h,'MarkerSize',);
    % h = plot(t,Pf,'w.');
    % set(h,'MarkerSize',);
    % hold off;
    % set(gca,'YDir','Normal');
    % xlabel('');
    % ylabel('');
    % title(sprintf('q=%.g r=%.g p=%.g',q,r,p));
    % box off;
    % AxisSet;
    % colorbar;

    if pf~=0,
        figure;
        FigureSet(1);
        colormap(jet());
    else
        figure(2);
    end
    h = imagesc(A(:,k));
    set(gca,'YDir','Normal');
    xlabel('');
    ylabel('Lag (k)');
    box off;
    colorbar;
    AxisSet;
end

% Process Return Arguments
if nargout==0,
    clear('A','yh');
end

Example 3: Chirp KF AR Model versus Spectrogram

Compare the time-frequency resolution of the PSD estimated from the autoregressive parameters estimated with a Kalman filter to that of a nonparametric spectrogram.

[Figure: Example 3 nonparametric spectrogram of the chirp signal.]

Example: Chirp KF Parametric Spectrogram

[Figures: KF parametric spectrograms of the chirp signal for increasing values of the
state noise variance q, beginning with q = 1e-7; the model order l, observation noise
variance r, initial variance rho_0, and f_d are held fixed across panels]

Example: Chirp KF Parametric Spectrogram

[Figures: KF parametric spectrograms of the chirp signal for the two largest values of
q in the sweep, with l, r, rho_0, and f_d held fixed]

Example: MATLAB Code

clear;
close all;

% User-Specified Parameters
fs = ;       % Sample rate (Hz)
N  = ;       % Number of observations from the process
Np = ;       % Number of samples to throw away to account for transient

k  = 1:N/4;
t  = (k-0.5)/fs;
yc = chirp(t,.,t(end),.);
y  = [yc fliplr(yc) yc fliplr(yc)]';

% Spectrogram
NonparametricSpectrogram(y,fs,);
FigureSet(,'LTX');
print('KFChirpSpectrogram','-depsc');

% Kalman Filter Estimates
q = [e-7 e- e- e- e- e];
for c=1:length(q),
    figure;
    clf;
    KalmanFilterAutoregressive(y,fs,,[q(c) var(y) ]);
    close;
    FigureSet(,'LTX');
    print(sprintf('KFChirpQ%d',round(log(q(c)))),'-depsc');
end;
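The nonparametric side of this comparison can be sketched in Python/NumPy. This is an illustrative stand-in for the course's NonparametricSpectrogram helper (which is not reproduced here): the sample rate and chirp endpoints are assumed values, and a plain Hann-windowed STFT plays the role of the spectrogram.

```python
import numpy as np

fs = 100.0                                  # illustrative sample rate (Hz)
t  = np.arange(0, 40, 1/fs)
f0, f1 = 0.1, 10.0                          # illustrative chirp endpoints (Hz)
yc = np.cos(2*np.pi*(f0*t + (f1 - f0)*t**2/(2*t[-1])))  # linear chirp
y  = np.concatenate([yc, yc[::-1], yc, yc[::-1]])       # up-down-up-down, as in the slides

# Hann-windowed short-time Fourier magnitudes
nper, hop = 256, 64
win    = np.hanning(nper)
starts = np.arange(0, y.size - nper + 1, hop)
S = np.array([np.abs(np.fft.rfft(win*y[s:s+nper])) for s in starts]).T
f = np.fft.rfftfreq(nper, 1/fs)             # frequency axis, 0 .. fs/2
```

Frame 0 covers the low-frequency start of the first up-chirp and a frame near its end covers the high-frequency part, so the dominant STFT bin rises across the segment, exactly the behavior the parametric KF estimate is being compared against.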

Example: ICP KF AR Model versus Spectrogram

Compare the time-frequency resolution of the PSD estimated from the autoregressive
parameters estimated with a Kalman filter to a nonparametric spectrogram for
estimating the PSD of an intracranial pressure signal.

Example: ICP Nonparametric Spectrogram

[Figure: nonparametric spectrogram of the ICP signal]

Example: ICP Non-zero Mean KF Parametric Spectrogram

[Figure: KF parametric spectrogram of the ICP signal with the mean left intact]

Example: ICP KF Parametric Spectrogram

[Figure: KF parametric spectrogram of the zero-mean ICP signal]

Example: ICP KF Parametric Spectrogram

[Figures: KF parametric spectrograms of the zero-mean ICP signal for larger values of
q, with l, r, rho_0, and f_d held fixed]

Example: MATLAB Code

clear;
close all;

% Load the Data
load('ICP.mat');
y  = decimate(icp,);
fs = fs/;

% Spectrogram
NonparametricSpectrogram(y,fs,);
FigureSet(,'LTX');
print('KFICPSpectrogram','-depsc');

% Parametric KF Spectrogram with Mean Intact
KalmanFilterAutoregressive(y,fs,,[e-*var(y) var(y) ]);
close;
FigureSet(,'LTX');
print('KFICPNonzeroMean','-depsc');

% Kalman Filter Estimates
y = y - mean(y);
q = [e- e- e- e];
for c=1:length(q),

    figure;
    clf;
    KalmanFilterAutoregressive(y,fs,,[q(c)*var(y) var(y) ]);
    close;
    FigureSet(,'LTX');
    print(sprintf('KFICPQ%d',round(log(q(c)))),'-depsc');
end;

AR Estimation Observations

- Usually the user picks R_n = rI, Q_n = qI, and \Pi_0 = \rho I
- The estimates are primarily determined by the ratio \lambda = q/r
- The parameter \rho controls how quickly the model converges from the initial parameter values
- \lambda and l control the bias-variance tradeoff
- The estimate is surprisingly robust to these parameters

Predicted and Filtered Estimates

- There are many equivalent forms of the Kalman filter
- So far we have only derived the so-called covariance form
- This creates one-step predicted estimates of the state \hat{x}_{n|n-1}
- May be useful to have filtered estimates as well, \hat{x}_{n|n}
- Can also generalize so that we can obtain \hat{x}_{n|k} for any k
- For now we will only focus on filtered estimates

Measurement and Time Updates

Suppose we have computed the predicted estimate \hat{x}_{n|n-1} and we wish to obtain
the filtered estimate:

\hat{x}_{n|n} = \sum_{k=0}^{n} \langle x_n, e_k \rangle R_{e,k}^{-1} e_k
             = \left( \sum_{k=0}^{n-1} \langle x_n, e_k \rangle R_{e,k}^{-1} e_k \right)
               + \langle x_n, e_n \rangle R_{e,n}^{-1} e_n
             = \hat{x}_{n|n-1} + \langle x_n, e_n \rangle R_{e,n}^{-1} e_n
             = \hat{x}_{n|n-1} + K_{f,n} e_n

where

K_{f,n} \triangleq \langle x_n, e_n \rangle R_{e,n}^{-1}

- K_{f,n} is called the filtered Kalman gain
- But we still need to solve for \langle x_n, e_n \rangle

Solving for the Filtered Kalman Gain

\langle x_n, e_n \rangle
    = \langle x_n, H_n \tilde{x}_{n|n-1} + v_n \rangle
    = \langle x_n, \tilde{x}_{n|n-1} \rangle H_n' + \langle x_n, v_n \rangle
    = \langle \hat{x}_{n|n-1} + \tilde{x}_{n|n-1}, \tilde{x}_{n|n-1} \rangle H_n'
    = \langle \tilde{x}_{n|n-1}, \tilde{x}_{n|n-1} \rangle H_n'
      \qquad (\text{since } \hat{x}_{n|n-1} \perp \tilde{x}_{n|n-1})
    = P_{n|n-1} H_n'

Thus

K_{f,n} = \langle x_n, e_n \rangle R_{e,n}^{-1} = P_{n|n-1} H_n' R_{e,n}^{-1}

Filtered State Error Covariance

We don't need it, but it would also be nice to know the filtered state error
covariance

P_{n|n} \triangleq \langle \tilde{x}_{n|n} \rangle, \qquad
\tilde{x}_{n|n} = x_n - \hat{x}_{n|n}

P_{n|n} = \langle \tilde{x}_{n|n}, x_n - \hat{x}_{n|n} \rangle
        = \langle \tilde{x}_{n|n}, x_n \rangle
        = \langle x_n - \hat{x}_{n|n}, x_n \rangle
        = \langle x_n - \hat{x}_{n|n-1} - K_{f,n} e_n, x_n \rangle
        = \langle \tilde{x}_{n|n-1}, x_n \rangle - K_{f,n} \langle e_n, x_n \rangle
        = P_{n|n-1} - K_{f,n} \langle H_n \tilde{x}_{n|n-1} + v_n, x_n \rangle
        = P_{n|n-1} - K_{f,n} H_n \langle \tilde{x}_{n|n-1}, x_n \rangle
        = P_{n|n-1} - K_{f,n} H_n P_{n|n-1}
        = P_{n|n-1} - P_{n|n-1} H_n' R_{e,n}^{-1} H_n P_{n|n-1}

Time Updates: State Estimate

Similarly, we can obtain \hat{x}_{n+1|n} and P_{n+1|n} from the filtered values
\hat{x}_{n|n} and P_{n|n}:

x_{n+1} = F_n x_n + G_n u_n

\hat{x}_{n+1|n} = F_n \hat{x}_{n|n} + G_n \hat{u}_{n|n}

\hat{u}_{n|n} = \sum_{k=0}^{n} \langle u_n, e_k \rangle R_{e,k}^{-1} e_k
             = \langle u_n, e_n \rangle R_{e,n}^{-1} e_n

\langle u_n, e_n \rangle = \langle u_n, H_n \tilde{x}_{n|n-1} + v_n \rangle
                         = \langle u_n, v_n \rangle
                         = S_n

\hat{u}_{n|n} = S_n R_{e,n}^{-1} e_n

\hat{x}_{n+1|n} = F_n \hat{x}_{n|n} + G_n S_n R_{e,n}^{-1} e_n

Time Updates: Covariance

P_{n+1|n} = \langle \tilde{x}_{n+1|n} \rangle

\tilde{x}_{n+1|n} = x_{n+1} - \hat{x}_{n+1|n}
    = (F_n x_n + G_n u_n) - (F_n \hat{x}_{n|n} + G_n S_n R_{e,n}^{-1} e_n)
    = F_n \tilde{x}_{n|n} + G_n u_n - G_n S_n R_{e,n}^{-1} e_n

\langle F_n \hat{x}_{n|n}, G_n u_n \rangle
    = \langle F_n (\hat{x}_{n|n-1} + K_{f,n} e_n), G_n u_n \rangle
    = F_n K_{f,n} S_n' G_n'

and since \langle x_n, u_n \rangle = 0 and \langle \tilde{x}_{n|n}, e_n \rangle = 0,
the cross terms enter with a minus sign:

P_{n+1|n} = F_n P_{n|n} F_n' - F_n K_{f,n} S_n' G_n' - G_n S_n K_{f,n}' F_n'
            + \langle G_n u_n - G_n S_n R_{e,n}^{-1} e_n \rangle
          = F_n P_{n|n} F_n' - F_n K_{f,n} S_n' G_n' - G_n S_n K_{f,n}' F_n'
            + G_n (Q_n - S_n R_{e,n}^{-1} S_n' - S_n R_{e,n}^{-1} S_n'
                   + S_n R_{e,n}^{-1} S_n') G_n'
          = F_n P_{n|n} F_n' - F_n K_{f,n} S_n' G_n' - G_n S_n K_{f,n}' F_n'
            + G_n (Q_n - S_n R_{e,n}^{-1} S_n') G_n'

Time Updates: Covariance Continued

P_{n+1|n} = F_n P_{n|n} F_n' - F_n K_{f,n} S_n' G_n' - G_n S_n K_{f,n}' F_n'
            + G_n (Q_n - S_n R_{e,n}^{-1} S_n') G_n'

When S_n = 0, this simplifies considerably to

P_{n+1|n} = F_n P_{n|n} F_n' + G_n Q_n G_n'

Time and Measurement Updates

Initialization:

\hat{x}_{0|-1} = 0, \qquad P_{0|-1} = \Pi_0

Recursions:

e_n             = y_n - H_n \hat{x}_{n|n-1}
R_{e,n}         = H_n P_{n|n-1} H_n' + R_n
K_{f,n}         = P_{n|n-1} H_n' R_{e,n}^{-1}
\hat{x}_{n|n}   = \hat{x}_{n|n-1} + K_{f,n} e_n
\hat{x}_{n+1|n} = F_n \hat{x}_{n|n} + G_n S_n R_{e,n}^{-1} e_n
P_{n|n}         = P_{n|n-1} - P_{n|n-1} H_n' R_{e,n}^{-1} H_n P_{n|n-1}
P_{n+1|n}       = F_n P_{n|n} F_n' - F_n K_{f,n} S_n' G_n' - G_n S_n K_{f,n}' F_n'
                  + G_n (Q_n - S_n R_{e,n}^{-1} S_n') G_n'

The recursions alternate between measurement updates (m.u.) and time updates (t.u.):

0     = \hat{x}_{0|-1} -m.u.-> \hat{x}_{0|0} -t.u.-> \hat{x}_{1|0} -m.u.-> \hat{x}_{1|1} -t.u.-> ...
\Pi_0 = P_{0|-1}       -m.u.-> P_{0|0}       -t.u.-> P_{1|0}       -m.u.-> P_{1|1}       -t.u.-> ...

- Separating the measurement and time updates permits us to obtain two estimates of the state
- A priori estimate \hat{x}_{n|n-1}: before the nth measurement has been obtained
- A posteriori estimate \hat{x}_{n|n}: after the measurement is taken
- If a measurement is lost (drop out), we can just use the predicted value
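As a numeric sanity check of these recursions (a sketch with made-up matrices, not course code), one step of the filtered form summarized above should reproduce the same P_{n+1|n} as the one-step predicted covariance form derived earlier, whose gain is K_p = (F P H' + G S) R_e^{-1}, even when the noises are correlated (S_n != 0):

```python
import numpy as np

# Made-up model matrices: 2 states, 1 output, 1 process-noise input
F = np.array([[0.9, 0.1], [0.0, 0.8]])
G = np.array([[1.0], [0.5]])
H = np.array([[1.0, 0.0]])
Q = np.array([[0.2]])      # <u_n, u_n>
R = np.array([[0.5]])      # <v_n, v_n>
S = np.array([[0.1]])      # <u_n, v_n> (correlated noise)
P = np.eye(2)              # P_{0|-1} = Pi_0

Re     = H @ P @ H.T + R                  # innovation covariance R_{e,n}
Re_inv = np.linalg.inv(Re)

# One step in the filtered form
Kf = P @ H.T @ Re_inv                     # filtered Kalman gain K_{f,n}
Pf = P - P @ H.T @ Re_inv @ H @ P         # P_{n|n}
Pp = (F @ Pf @ F.T
      - F @ Kf @ S.T @ G.T - G @ S @ Kf.T @ F.T
      + G @ (Q - S @ Re_inv @ S.T) @ G.T) # P_{n+1|n}, filtered form

# Same step in the one-step predicted (covariance) form
Kp  = (F @ P @ H.T + G @ S) @ Re_inv      # predicted-form gain
Pp2 = F @ P @ F.T + G @ Q @ G.T - Kp @ Re @ Kp.T
```

Expanding the filtered form term by term recovers the predicted form exactly, which is why the cross terms in P_{n+1|n} must carry minus signs; with plus signs the two recursions would disagree.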