Kalman Filter

J. McNames, Portland State University


Overview

- Kalman filter derivation
- Examples
- Time and measurement updates

State Space Model Review

$$x_{n+1} = F_n x_n + G_n u_n$$
$$y_n = H_n x_n + v_n$$

$$\left\langle \begin{bmatrix} x_0 \\ u_n \\ v_n \end{bmatrix}, \begin{bmatrix} x_0 \\ u_k \\ v_k \end{bmatrix} \right\rangle = \begin{bmatrix} \Pi_0 & 0 & 0 \\ 0 & Q_n\,\delta_{nk} & S_n\,\delta_{nk} \\ 0 & S_n^*\,\delta_{nk} & R_n\,\delta_{nk} \end{bmatrix}$$

with dimensions $F_n \in \mathbb{C}^{l\times l}$, $G_n \in \mathbb{C}^{l\times m}$, $H_n \in \mathbb{C}^{p\times l}$, $x_n \in \mathbb{C}^{l\times 1}$, $u_n \in \mathbb{C}^{m\times 1}$, $y_n \in \mathbb{C}^{p\times 1}$, and $v_n \in \mathbb{C}^{p\times 1}$.

The model implies the following orthogonality relations:
$$\langle u_n, x_k\rangle = \langle v_n, x_k\rangle = 0 \quad\text{for } n \ge k$$
$$\langle u_n, y_k\rangle = \langle v_n, y_k\rangle = 0 \quad\text{for } n > k$$
$$\langle u_n, y_n\rangle = S_n, \qquad \langle v_n, y_n\rangle = R_n$$

Key Linear Estimation Properties

Linearity of linear MMSE estimators: for any two matrices $A_1$ and $A_2$,
$$\widehat{(A_1 x_1 + A_2 x_2)} = A_1\hat{x}_1 + A_2\hat{x}_2$$

The orthogonality condition:
$$\hat{x}\big|_{y_1,y_2} = \hat{x}\big|_{y_1} + \hat{x}\big|_{y_2} \quad\text{if and only if}\quad y_1 \perp y_2 \;\;(R_{y_1 y_2} = 0)$$

State Covariance Recursion

Recall that
$$\Pi_{n+1} \triangleq \langle x_{n+1}, x_{n+1}\rangle = \langle F_n x_n + G_n u_n,\, F_n x_n + G_n u_n\rangle = F_n\langle x_n, x_n\rangle F_n^* + G_n\langle u_n, u_n\rangle G_n^* = F_n \Pi_n F_n^* + G_n Q_n G_n^*$$
where the initial value $\Pi_0$ is usually specified by the user.
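As a quick illustration of the covariance recursion (not from the lecture; the model values below are made up), a few lines of MATLAB that propagate $\Pi_n$ forward:

    % Minimal sketch: propagate Pi_{n+1} = F*Pi_n*F' + G*Q*G'.
    % All values here are hypothetical, chosen only for illustration.
    F  = [0.9 1; 0 0.9];  % hypothetical state transition matrix
    G  = [0; 1];          % hypothetical process noise input matrix
    Q  = 0.1;             % hypothetical process noise variance (m = 1)
    Pi = eye(2);          % user-specified initial state covariance Pi_0
    for n = 0:49
        Pi = F*Pi*F' + G*Q*G';  % state covariance recursion
    end
    disp(Pi);             % covariance after 50 steps (near its steady state)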

Innovations

Let us begin by just using the linear process model and the definition of the innovations:
$$y_n = H_n x_n + v_n$$
The estimator notation used here is
$$e_n \triangleq y_n - \hat{y}_{n|n-1}, \qquad \hat{y}_{n|k} = \text{the linear MMSE estimator of } y_n \text{ given } \{y_0,\dots,y_k\}$$
Our goal is to come up with a recursive formulation for linear estimators of $x_n$ for $n = 0, 1, \dots$

- Our motivation for using the innovations is that they have a diagonal covariance matrix
- Recall that calculating the innovations is equivalent to applying a whitening filter
- This has become a theme for this class: whiten before processing; it simplifies the subsequent step of estimation

Recursion for Innovations

$$e_n \triangleq y_n - \hat{y}_{n|n-1}, \qquad y_n = H_n x_n + v_n$$
$$\hat{y}_{n|n-1} = H_n\hat{x}_{n|n-1} + \hat{v}_{n|n-1} = H_n\hat{x}_{n|n-1}$$
$$e_n = y_n - H_n\hat{x}_{n|n-1} = H_n x_n + v_n - H_n\hat{x}_{n|n-1} = H_n\left(x_n - \hat{x}_{n|n-1}\right) + v_n = H_n\tilde{x}_{n|n-1} + v_n$$
where the new notation is defined as
$$\tilde{x}_{n|n-1} \triangleq x_n - \hat{x}_{n|n-1}, \qquad \tilde{y}_{n|n-1} \triangleq y_n - \hat{y}_{n|n-1} = e_n$$
Note that our goal of finding an expression for the innovations reduces to finding a way to estimate the one-step predictions of the state vector, $\hat{x}_{n|n-1}$.

State Estimation for Innovations

$$\hat{x}_{n+1|n} = \big\langle x_{n+1}, \operatorname{col}\{y_0,\dots,y_n\}\big\rangle\,\big\langle \operatorname{col}\{y_0,\dots,y_n\}, \operatorname{col}\{y_0,\dots,y_n\}\big\rangle^{-1}\operatorname{col}\{y_0,\dots,y_n\}$$
$$= \big\langle x_{n+1}, \operatorname{col}\{e_0,\dots,e_n\}\big\rangle\,\big\langle \operatorname{col}\{e_0,\dots,e_n\}, \operatorname{col}\{e_0,\dots,e_n\}\big\rangle^{-1}\operatorname{col}\{e_0,\dots,e_n\}$$
$$= \sum_{k=0}^{n}\langle x_{n+1}, e_k\rangle R_{e,k}^{-1}\, e_k$$
where we have defined $R_{e,k} \triangleq \langle e_k, e_k\rangle = \mathrm{E}[e_k e_k^*]$.

- The first equality is just the solution to the normal equations
- The second equality follows from the definition of the innovations and the equivalence of the linear spaces spanned by the observations and the innovations, $\mathcal{L}\{y_0,\dots,y_n\} = \mathcal{L}\{e_0,\dots,e_n\}$
- The third equality follows from the orthogonality property of the innovations

This expression assumes the error covariance matrix is invertible (positive definite), $R_{e,k} > 0$.

- This is a nondegeneracy assumption on the process $\{y_0,\dots,y_n\}$
- It essentially means that no variable $y_n$ can be exactly estimated by a linear combination of earlier observations
- This does not require that $R_n = \langle v_n, v_n\rangle > 0$: it is quite possible that $R_{e,k} > 0$ even if $R_n = 0$
- However, if $R_n > 0$, we know for certain that $R_{e,k} > 0$
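To make the "innovations are whitened observations" point concrete, here is a small MATLAB sketch (illustrative, not from the slides): for a covariance $R_y$ with lower-triangular factorization $R_y = LDL^*$, the innovations are $e = L^{-1}y$, a causal transformation with diagonal covariance $D$.

    % Sketch: the innovations as the output of a causal whitening filter.
    % Ry below is a hypothetical observation covariance.
    Ry = toeplitz([1 0.8 0.5]);         % hypothetical covariance of col{y0,y1,y2}
    [L,D] = ldl(Ry);                    % Ry = L*D*L' with L lower triangular
    Y = chol(Ry,'lower')*randn(3,1e5);  % draw samples of y
    E = L\Y;                            % innovations e = inv(L)*y (causal map)
    disp(cov(E'));                      % approximately diagonal, close to D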

Circular Reasoning?

In order to calculate the innovations
$$e_n \triangleq y_n - \hat{y}_{n|n-1} = y_n - H_n\hat{x}_{n|n-1}$$
we need to estimate the one-step predictions of the state. Yet in order to obtain these estimates,
$$\hat{x}_{n+1|n} = \sum_{k=0}^{n}\langle x_{n+1}, e_k\rangle R_{e,k}^{-1}\, e_k,$$
we need the innovations! Luckily, we only need the previous innovations to estimate the one-step state predictions. This suggests a recursive solution.

Recursive State Estimation

$$\hat{x}_{n+1|n} = \sum_{k=0}^{n}\langle x_{n+1}, e_k\rangle R_{e,k}^{-1}\, e_k = \left(\sum_{k=0}^{n-1}\langle x_{n+1}, e_k\rangle R_{e,k}^{-1}\, e_k\right) + \langle x_{n+1}, e_n\rangle R_{e,n}^{-1}\, e_n$$
$$= \hat{x}_{n+1|n-1} + \langle x_{n+1}, e_n\rangle R_{e,n}^{-1}\left(y_n - H_n\hat{x}_{n|n-1}\right)$$

This is almost what we need. However, there are still some missing pieces:

- Can we express $\hat{x}_{n+1|n-1}$ in terms of what is known at time $n$: $\hat{x}_{n|n-1}$ and $e_n$?
- How do we obtain an expression for the error covariance matrix in terms of the model parameters?

The only way to make any further progress is to use the state update equation that relates $x_{n+1}$ to $x_n$.

State Prediction

$$x_{n+1} = F_n x_n + G_n u_n \quad\Longrightarrow\quad \hat{x}_{n+1|n-1} = F_n\hat{x}_{n|n-1} + G_n\hat{u}_{n|n-1} = F_n\hat{x}_{n|n-1}$$
since $u_n$ is orthogonal to the past observations $\{y_0,\dots,y_{n-1}\}$. Now, given
$$\hat{x}_{n+1|n} = \hat{x}_{n+1|n-1} + \langle x_{n+1}, e_n\rangle R_{e,n}^{-1}\left(y_n - H_n\hat{x}_{n|n-1}\right),$$
we can set up a proper recursion that we initialize with $e_0 = y_0$:
$$e_n = y_n - H_n\hat{x}_{n|n-1}$$
$$\hat{x}_{n+1|n} = F_n\hat{x}_{n|n-1} + K_{p,n}\, e_n, \qquad K_{p,n} \triangleq \langle x_{n+1}, e_n\rangle R_{e,n}^{-1}$$
The subscript $p$ indicates that this gain is used to update a predicted estimator of the state, $\hat{x}_{n+1|n}$. This is one of the Kalman gains that we will use.

Defining the State Error Covariance

$$\tilde{x}_{n|n-1} \triangleq x_n - \hat{x}_{n|n-1}, \qquad P_{n|n-1} \triangleq \big\langle \tilde{x}_{n|n-1}\big\rangle = \mathrm{E}\!\left[\tilde{x}_{n|n-1}\tilde{x}_{n|n-1}^*\right]$$

We still need to solve for $K_{p,n}$ and $R_{e,n}$ in terms of the known state space model parameters. To do so it will be useful to introduce yet another covariance matrix: $P_{n|n-1}$, the state error covariance matrix. It would be great to know this anyway, since it gives us a means of knowing how accurate our estimates are. If $\{x_0, u_0,\dots,u_n, v_0,\dots,v_n\}$ are jointly Gaussian (central limit theorem), we could construct exact confidence intervals on our estimates!

Solving for the Innovations Covariance

$$e_n = y_n - H_n\hat{x}_{n|n-1} = H_n x_n + v_n - H_n\hat{x}_{n|n-1} = H_n\left(x_n - \hat{x}_{n|n-1}\right) + v_n = H_n\tilde{x}_{n|n-1} + v_n$$
Since $v_n \perp \tilde{x}_{n|n-1}$, it then follows immediately that
$$R_{e,n} = \langle e_n, e_n\rangle = \big\langle H_n\tilde{x}_{n|n-1} + v_n,\, H_n\tilde{x}_{n|n-1} + v_n\big\rangle = H_n\big\langle \tilde{x}_{n|n-1}, \tilde{x}_{n|n-1}\big\rangle H_n^* + \langle v_n, v_n\rangle = H_n P_{n|n-1} H_n^* + R_n$$
Thus we have replaced the problem of needing an expression for $R_{e,n}$ with the problem of needing an expression for $P_{n|n-1}$.

Solving for the Kalman Prediction Gain

$$K_{p,n} \triangleq \langle x_{n+1}, e_n\rangle R_{e,n}^{-1}$$
We have an expression for $R_{e,n}$, but we still need an expression for the cross-covariance in terms of the state space model parameters:
$$\langle x_{n+1}, e_n\rangle = \langle F_n x_n + G_n u_n,\, e_n\rangle = F_n\langle x_n, e_n\rangle + G_n\langle u_n, e_n\rangle$$
$$\langle x_n, e_n\rangle = \big\langle x_n, H_n\tilde{x}_{n|n-1} + v_n\big\rangle = \big\langle x_n, \tilde{x}_{n|n-1}\big\rangle H_n^* + \langle x_n, v_n\rangle = \big\langle x_n, \tilde{x}_{n|n-1}\big\rangle H_n^*$$
$$= \big\langle \hat{x}_{n|n-1} + \tilde{x}_{n|n-1},\, \tilde{x}_{n|n-1}\big\rangle H_n^* = \big\langle \hat{x}_{n|n-1}, \tilde{x}_{n|n-1}\big\rangle H_n^* + \big\langle \tilde{x}_{n|n-1}, \tilde{x}_{n|n-1}\big\rangle H_n^* = P_{n|n-1} H_n^*$$
where the last step uses the orthogonality of the estimate $\hat{x}_{n|n-1}$ and the error $\tilde{x}_{n|n-1}$.

Solving for the Kalman Prediction Gain Continued

We still need to solve for the second term:
$$\langle u_n, e_n\rangle = \big\langle u_n, H_n\tilde{x}_{n|n-1} + v_n\big\rangle = \big\langle u_n, \tilde{x}_{n|n-1}\big\rangle H_n^* + \langle u_n, v_n\rangle = \big\langle u_n, x_n - \hat{x}_{n|n-1}\big\rangle H_n^* + \langle u_n, v_n\rangle = \langle u_n, v_n\rangle = S_n$$
Thus,
$$K_{p,n} = \langle x_{n+1}, e_n\rangle R_{e,n}^{-1} = \left(F_n P_{n|n-1} H_n^* + G_n S_n\right) R_{e,n}^{-1}$$
The only thing we still need is an expression for $P_{n|n-1}$.

Solving for the State Error Covariance

Let us find a recursive expression for the residuals $\tilde{x}_{n|n-1}$ that we can then use to solve for $P_{n|n-1}$:
$$\tilde{x}_{n+1|n} = x_{n+1} - \hat{x}_{n+1|n} = \left(F_n x_n + G_n u_n\right) - \left(F_n\hat{x}_{n|n-1} + K_{p,n} e_n\right)$$
$$= F_n\left(x_n - \hat{x}_{n|n-1}\right) + G_n u_n - K_{p,n}\left(H_n\tilde{x}_{n|n-1} + v_n\right)$$
$$= \left(F_n - K_{p,n}H_n\right)\tilde{x}_{n|n-1} + \begin{bmatrix} G_n & -K_{p,n}\end{bmatrix}\begin{bmatrix} u_n \\ v_n \end{bmatrix} = F_{p,n}\,\tilde{x}_{n|n-1} + \begin{bmatrix} G_n & -K_{p,n}\end{bmatrix}\begin{bmatrix} u_n \\ v_n \end{bmatrix}$$
Then
$$P_{n+1|n} \triangleq \big\langle \tilde{x}_{n+1|n}, \tilde{x}_{n+1|n}\big\rangle = F_{p,n} P_{n|n-1} F_{p,n}^* + \begin{bmatrix} G_n & -K_{p,n}\end{bmatrix}\begin{bmatrix} Q_n & S_n \\ S_n^* & R_n\end{bmatrix}\begin{bmatrix} G_n^* \\ -K_{p,n}^*\end{bmatrix}$$
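Before simplifying this expression (next page), it can be reassuring to check the algebra numerically. A small MATLAB sketch with randomly generated, purely hypothetical model matrices, comparing the factored form above with the simplified form $F_n P F_n^* + G_n Q_n G_n^* - K_{p,n} R_{e,n} K_{p,n}^*$ derived on the next page:

    % Numerical sanity check (illustrative values) that the two expressions
    % for P_{n+1|n} agree.
    rng(0);
    l = 3; m = 2; p = 1;                       % hypothetical dimensions
    F = randn(l); G = randn(l,m); H = randn(p,l);
    A = randn(m+p); C = A*A';                  % random PSD joint noise covariance
    Q = C(1:m,1:m); S = C(1:m,m+1:end); R = C(m+1:end,m+1:end);
    P  = eye(l);                               % some P_{n|n-1}
    Re = H*P*H' + R;
    Kp = (F*P*H' + G*S)/Re;
    P1 = (F-Kp*H)*P*(F-Kp*H)' + [G -Kp]*[Q S; S' R]*[G'; -Kp'];
    P2 = F*P*F' + G*Q*G' - Kp*Re*Kp';
    disp(norm(P1 - P2));                       % ~ 0, up to round-off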

Solving for the State Error Covariance Continued

$$P_{n+1|n} = F_{p,n} P_{n|n-1} F_{p,n}^* + \begin{bmatrix} G_n & -K_{p,n}\end{bmatrix}\begin{bmatrix} Q_n & S_n \\ S_n^* & R_n\end{bmatrix}\begin{bmatrix} G_n^* \\ -K_{p,n}^*\end{bmatrix}$$
$$= \left(F_n - K_{p,n}H_n\right)P_{n|n-1}\left(F_n - K_{p,n}H_n\right)^* + G_n Q_n G_n^* - G_n S_n K_{p,n}^* - K_{p,n}S_n^* G_n^* + K_{p,n}R_n K_{p,n}^*$$
$$= F_n P_{n|n-1} F_n^* - K_{p,n}H_n P_{n|n-1} F_n^* - F_n P_{n|n-1} H_n^* K_{p,n}^* + K_{p,n}H_n P_{n|n-1} H_n^* K_{p,n}^* + G_n Q_n G_n^* - G_n S_n K_{p,n}^* - K_{p,n}S_n^* G_n^* + K_{p,n}R_n K_{p,n}^*$$

Grouping terms,
$$P_{n+1|n} = F_n P_{n|n-1} F_n^* + G_n Q_n G_n^* + K_{p,n}\left(H_n P_{n|n-1} H_n^* + R_n\right)K_{p,n}^* - K_{p,n}\left(H_n P_{n|n-1} F_n^* + S_n^* G_n^*\right) - \left(F_n P_{n|n-1} H_n^* + G_n S_n\right)K_{p,n}^*$$
Substituting $R_{e,n} = H_n P_{n|n-1} H_n^* + R_n$ and $K_{p,n} = \left(F_n P_{n|n-1} H_n^* + G_n S_n\right)R_{e,n}^{-1}$,
$$P_{n+1|n} = F_n P_{n|n-1} F_n^* + G_n Q_n G_n^* + K_{p,n}R_{e,n}K_{p,n}^* - K_{p,n}R_{e,n}K_{p,n}^* - K_{p,n}R_{e,n}K_{p,n}^* = F_n P_{n|n-1} F_n^* + G_n Q_n G_n^* - K_{p,n}R_{e,n}K_{p,n}^*$$
The only remaining question is how to start the recursion:
$$P_{0|-1} = \big\langle x_0 - \hat{x}_{0|-1}\big\rangle = \langle x_0\rangle = \Pi_0$$

Covariance Form of the Kalman Filter

The user provides the state space model parameters.

Initialization:
$$\hat{x}_{0|-1} = 0, \qquad P_{0|-1} = \Pi_0$$

Recursions, for $n = 0, 1, \dots$:
$$e_n = y_n - H_n\hat{x}_{n|n-1}$$
$$R_{e,n} = H_n P_{n|n-1} H_n^* + R_n$$
$$K_{p,n} = \left(F_n P_{n|n-1} H_n^* + G_n S_n\right) R_{e,n}^{-1}$$
$$\hat{x}_{n+1|n} = F_n\hat{x}_{n|n-1} + K_{p,n}\, e_n$$
$$P_{n+1|n} = F_n P_{n|n-1} F_n^* + G_n Q_n G_n^* - K_{p,n}R_{e,n}K_{p,n}^* = F_n P_{n|n-1} F_n^* + G_n Q_n G_n^* - K_{p,n}\left(H_n P_{n|n-1} F_n^* + S_n^* G_n^*\right)$$

Kalman Filter Features

- Requires only $O(l^3)$ operations per update if $m \le l$ and $p \le l$; to obtain $\hat{x}_{N|N}$ requires a total of $O(N l^3)$ operations, orders of magnitude less than solving the normal equations directly!
- Gives the error covariance matrices, so we can easily obtain the standard deviation of the error and approximate confidence intervals
- Estimates the state at all times $\{0,\dots,N\}$
- Includes a whitening filter as part of the computations
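A minimal MATLAB sketch of these recursions on a scalar toy model. All model values are hypothetical; this mirrors, but is not, the lecture's own MATLAB code in the examples below.

    % Minimal sketch of the covariance-form recursions (values hypothetical).
    F = 0.95; G = 1; H = 1;          % hypothetical scalar model
    Q = 0.1;  R = 1; S = 0; Pi0 = 1; % hypothetical noise variances and Pi_0
    N = 200;
    x = sqrt(Pi0)*randn;             % draw x_0
    y = zeros(N,1);
    for n = 1:N                      % simulate the state space model
        y(n) = H*x + sqrt(R)*randn;
        x    = F*x + G*sqrt(Q)*randn;
    end
    xh = 0; P = Pi0;                 % initialization: xh_{0|-1} = 0, P_{0|-1} = Pi_0
    xp = zeros(N,1);                 % stored one-step predictions
    for n = 1:N                      % covariance-form recursions
        e  = y(n) - H*xh;            % innovation
        Re = H*P*H' + R;             % innovation covariance
        Kp = (F*P*H' + G*S)/Re;      % prediction gain
        xh = F*xh + Kp*e;            % xh_{n+1|n}
        P  = F*P*F' + G*Q*G' - Kp*Re*Kp'; % P_{n+1|n}
        xp(n) = xh;
    end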

Example 1: Accelerometer

Suppose we wish to have a random walk model of the position of an object, and we have recordings from an accelerometer to estimate its position. Suppose the accelerometer is sampled at a rate of $f_s$ Hz. Our statistical model of the process is then
$$\dot{p}(t) = u(t), \qquad y(t) = \frac{\mathrm{d}^2 p(t)}{\mathrm{d}t^2} + v(t)$$
We approximate the derivatives in discrete time with
$$\left.\frac{\mathrm{d}x(t)}{\mathrm{d}t}\right|_{t=nT_s} \approx \frac{x(n) - x(n-1)}{T_s}, \qquad \left.\frac{\mathrm{d}^2 x(t)}{\mathrm{d}t^2}\right|_{t=nT_s} \approx \frac{x(n) - 2x(n-1) + x(n-2)}{T_s^2}$$

Example 1: Discrete-Time State Space Model

The discrete-time state space model of the process then becomes
$$\begin{bmatrix} p(n+1) \\ p(n) \\ p(n-1)\end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} p(n) \\ p(n-1) \\ p(n-2)\end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} u(n)$$
$$y(n) = \frac{1}{T_s^2}\begin{bmatrix} 1 & -2 & 1 \end{bmatrix}\begin{bmatrix} p(n) \\ p(n-1) \\ p(n-2)\end{bmatrix} + v(n)$$
which is in our standard form
$$x_{n+1} = F_n x_n + G_n u_n, \qquad y_n = H_n x_n + v_n$$
(In the MATLAB code that follows, the $(1,1)$ entry of $F$ is generalized from $1$ to a forgetting factor $f$.)

[Figures: measured acceleration and Example 1 plots comparing the actual position, estimated position, and naive estimate for the first settings of $f$, $q$, and $r$.]
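A small sketch of how the example's matrices might be assembled ($T_s$ and $f$ below are placeholders). One aside, not from the slides: with $f = 1$ the rank of the observability matrix $[H;\, HF;\, HF^2]$ drops from 3 to 2, which is consistent with the drift of the naive double-integration estimate and is one reason a forgetting factor slightly below 1 can help.

    % Sketch: build the accelerometer state-space model (values hypothetical).
    Ts = 0.01;                   % hypothetical sampling interval (s)
    f  = 0.99;                   % forgetting factor; f = 1 gives the pure random walk
    F  = [f 0 0; 1 0 0; 0 1 0];  % state transition matrix
    G  = [1; 0; 0];              % process noise drives the newest position
    H  = [1 -2 1]/Ts^2;          % second-difference estimate of acceleration
    disp(rank([H; H*F; H*F^2])); % 3 for f < 1; drops to 2 at f = 1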

[Figures: four more Example 1 plots of the actual position, estimated position, and naive estimate for further combinations of the forgetting factor $f$ and the noise variances $q$ and $r$, including $f = 0.99$.]

[Figure: one more Example 1 plot for $f = 0.99$ comparing the actual position, estimated position, and naive estimate.]

Example 1: MATLAB Code

clear all; close all;

% Parameters
fs  = ;        % Sample rate (Hz)
T   = ;        % Duration of recording (seconds)
pnv = .;       % Process noise variance (standard deviation of step size per second)
mnv = ;        % Measurement noise variance (continuous time)
fds = [.99 .]; % Forgetting factors

% Preprocessing
Ts = 1/fs;        % Sampling interval (s)
N  = round(fs*T); % No. samples

for c = 1:length(fds),
    fd = fds(c);

    % State Model Parameters
    F = [fd 0 0; 1 0 0; 0 1 0];
    G = [1; 0; 0];
    H = [1, -2, 1]/Ts.^2;
    Q = Ts*pnv;
    R = mnv;
    S = 0;

    % Process Synthesis (simulation)
    x  = zeros(3,1);
    p  = zeros(N,1);              % Actual position
    y  = zeros(N,1);              % Measured acceleration
    mn = sqrt(mnv)*randn(N,1);    % Measurement noise
    pn = sqrt(Ts*pnv)*randn(N,1); % Process noise
    for n = 1:N,
        p(n) = x(1);              % Actual position
        y(n) = H*x + mn(n);       % Measured acceleration
        x    = F*x + G*pn(n);     % Update the state vector
    end

    % Estimate the Position with the KF
    xh  = zeros(3,1);
    ph  = zeros(N,1);             % Estimated position
    phv = zeros(N,1);             % Estimated error variance
    P   = e-*eye(3);
    phv(1) = P(1,1);
    for n = 1:N-1,
        % Calculations for time n
        yh(n) = H*xh;                      % Predicted value of the signal
        e(n)  = y(n) - yh(n);              % Calculate the innovation at time n
        Re(n) = R + H*P*H';                % Update innovation covariance
        Kp    = (F*P*H' + G*S)*inv(Re(n)); % Kalman filter (prediction) coefficients
        % Calculations for time n+1
        xh = F*xh + Kp*e(n);                 % Predicted state at time n+1
        P  = F*P*F' + G*Q*G' - Kp*Re(n)*Kp'; % Calculate state error covariance matrix
        if eig(P)<0,
            warning('Error covariance matrix is negative definite.');
        end
        % Save Info
        ph(n+1)  = xh(1);
        phv(n+1) = P(1,1);
    end
    phn = Ts^2*cumsum(cumsum(y)); % Naive estimate: double integration of y

    % Plot the Position Estimates
    figure;
    FigureSet(, 'LTX');
    k = (1:N).';
    t = (k-.5)/fs;
    z = ones(N,1);
    phs = phv.^(1/2);
    ucb = norminv(.975, ph, phs); % Upper confidence band
    lcb = norminv(.025, ph, phs); % Lower confidence band
    h = patch([t;flipud(t)], [ucb;flipud(lcb)], 'k');
    set(h, 'LineStyle', 'None');
    set(h, 'FaceColor', [.9 .9 .]);
    hold on;
    h = plot(t,p,z,'g', t,ph,z,'b', t,phn,z,'r');
    set(h, 'LineWidth', .8);
    hold off;
    view(0,90);
    box off;
    xlim([0 T]);
    ylim([min([lcb;p;phn]) max([ucb;p;phn])]);
    AxisLines;

    ylabel('Position (m)');
    xlabel('');
    title(sprintf('$f$:%.f \\hspace{em} $q$:%.f \\hspace{em} $r$:%.f', fd, mnv, pnv));
    legend(h, 'Actual Position', 'Estimated Position', 'Naive Estimate');
    print(sprintf('AccelerometerEstimate%d-%d', c, round(fd*)), '-depsc');
    print('Accelerometers', '-depsc');

    % Plot the Position and Acceleration
    figure;
    FigureSet(, 'LTX');
    k = 1:N;
    t = (k-.5)/fs;
    subplot(2,1,1);
    h = plot(t, p, 'g');
    set(h, 'LineWidth', .);
    box off;
    ylabel('Position (m)');
    title(sprintf('$f$:%.f \\hspace{em} $q$:%.f \\hspace{em} $r$:%.f', fd, mnv, pnv));
    xlim([0 T]);
    ylim([min(p) max(p)]);
    AxisLines;
    subplot(2,1,2);
    k = 1:N-1;
    h = plot(t, y, 'b');
    set(h, 'LineWidth', .);
    box off;
    xlabel('');
    ylabel('Measured Acceleration (m/s$^2$)');
    xlim([0 T]);
    ylim([min(y) max(y)]);
    AxisLines;
end

Example 2: AR Modeling of an ARMA Process

Use the Kalman filter to build an AR model of the ARMA process used in examples throughout ECE 538/638. Use a random walk state space model for the parameter vector with various measurement/process noise ratios:
$$Q_n = qI, \qquad R_n = rI, \qquad \lambda = q/r$$

[Figure: pole-zero map of the ARMA process.]

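The essence of this example is that the "state" is the AR parameter vector and the measurement matrix $H_n$ is built from past output samples. Below is a self-contained MATLAB sketch of that idea under assumed values; the true AR(2) parameters, $q$, and $r$ are made up, and this is not the lecture's KalmanFilterAutoregressive function.

    % Sketch: track AR parameters with a KF and a random-walk state model.
    N = 2000; a = [1.2 -0.72];            % hypothetical true AR(2) parameters
    y = zeros(N,1);
    for n = 3:N
        y(n) = a*[y(n-1); y(n-2)] + randn;
    end
    l = 2; q = 1e-4; r = var(y);          % hypothetical q and r
    F = eye(l); G = eye(l); Q = q*eye(l); % random-walk parameter model
    xh = zeros(l,1); P = eye(l);
    A = zeros(l,N);                       % parameter estimates over time
    for n = 3:N
        H  = [y(n-1) y(n-2)];             % regressor row built from past outputs
        e  = y(n) - H*xh;                 % innovation
        Re = H*P*H' + r;
        Kp = (F*P*H')/Re;                 % S = 0 here
        xh = F*xh + Kp*e;
        P  = F*P*F' + G*Q*G' - Kp*Re*Kp';
        A(:,n) = xh;                      % A(:,end) should approach a(:)
    end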
[Figures: Example 2 process output (time series); true PSD of the ARMA process on linear and logarithmic scales versus frequency (cycles/sample); nonparametric spectrogram; KF-estimated parametric PSD for the smallest value of $q$.]

[Figures: Example 2 KF-estimated parametric PSDs for four increasing values of $q$.]

[Figure: Example 2 KF-estimated parametric PSD for the largest value of $q$.]

Example 2: MATLAB Code

clear; close all;

% User-Specified Parameters
fs = ; % Sample rate (Hz)
N  = ; % Number of observations from the process
Np = ; % Number of samples to throw away to account for transient
b = poly([-.8, .97*exp(j*pi/), .97*exp(-j*pi/), ...
          .97*exp(j*pi/), .97*exp(-j*pi/)]);   % Numerator coefficients
a = poly([.8, .9*exp(j*.*pi/), .9*exp(-j*.*pi/), ...
          .9*exp(j*.*pi/), .9*exp(-j*.*pi/)]); % Denominator coefficients
b = b*sum(a)/sum(b); % Scale DC gain to 1
l = length(a)-1;     % Order of the system
m = l;               % Dimension of the state/process noise
p = 1;               % Dimension of the observations (y)

% Plot the Pole-Zero Map
figure;
FigureSet(, 'LTX');
h = Circle;
z = roots(b);
p = roots(a);
hold on;
h = plot(real(z), imag(z), 'bo', real(p), imag(p), 'rx');
hold off;
axis square;
xlim([-. .]);

ylim([-. .]);
AxisLines;
box off;
print -depsc ProcessPZMap;

% Plot the Process Output
n = ;
w = randn(Np+N, 1);
x = filter(b, a, w); % System with known PSD
nx = length(x);
y = x(nx-N+1:nx);    % Eliminate start-up transient (make stationary)
y = y/std(y);        % Make unit variance
figure;
FigureSet(, 'LTX');
k = 0:N-1;
h = stem(k, y);
set(h, 'Marker', '.');
set(h, 'LineWidth', .);
box off;
ylabel('');
xlabel('Sample Index');
title(sprintf('Length:%d', n));
xlim([0 n]);
ylim([min(y) max(y)]);
print -depsc ProcessOutput;

% Plot True PSD
figure;
FigureSet(, 'LTX');
[R,w] = freqz(b, a, 2^);
R = abs(R).^2;
f = w/(2*pi);
subplot(2,1,1);
h = plot(f, R, 'r');
set(h, 'LineWidth', .);
box off;
ylabel('PSD');
title('True PSD');
xlim([0 .5]);
ylim([0 max(R)*.]);
subplot(2,1,2);
h = semilogy(f, R, 'r');
set(h, 'LineWidth', .);
box off;
ylabel('PSD');
xlim([0 .5]);
ylim([.*min(R) max(R)*]);
xlabel('Frequency (cycles/sample)');
print -depsc ProcessPSD;

% Spectrogram
NonparametricSpectrogram(y,fs,,[],2^9,);
FigureSet(, 'LTX');
print('KFARSpectrogram', '-depsc');

% Kalman Filter Estimates
q = [e-7 e- e- e- e- e];
for c = 1:length(q),
    figure; clf;
    KalmanFilterAutoregressive(y,fs,,[q(c) var(y) ]);
    close;
    FigureSet(, 'LTX');
    print(sprintf('KFARQ%d', round(log(q(c)))), '-depsc');
end

Example 2: MATLAB Code Continued

function [A,yh] = KalmanFilterAutoregressive(y,fsa,na,vra,fda,sfa,pfa);
%KalmanFilterAutoregressive: Adaptive autoregressive model
%
%   [A] = KalmanFilterAutoregressive(y,fs,n,vr,fd,sf,pf);
%
%   y    Input signal.
%   fs   Sample rate (Hz). Default = Hz.
%   n    Model order. Default = .
%   vr   Vector of process, measurement, and initial state variances.
%        Default = [.*var(y) var(y) ].
%   fd   State transition matrix diagonal. Default = .
%   sf   Smoothing flag: =none (default), =smooth.
%   pf   Plot flag: =none (default), =screen.
%
%   A    Matrix containing the parameters versus time.
%   yh   Predicted estimates of the output.
%
%   Uses a random walk state update model with constant-diagonal
%   covariance matrices for the initial state, measurement noise,
%   and process noise. This implementation uses the causal version
%   of the Kalman filter. It is not robust or computationally
%   efficient. The input signal must be scalar (a vector).
%
%   When smoothing is applied, uses the Bryson-Frazier formulas as
%   given in Kailath et al.
%
%   Example: Generate the parametric spectrogram of an intracranial
%   pressure signal using a Blackman-Harris window that is  s in
%   duration.
%
%      load ICP.mat;
%      icpd = decimate(icp,);
%      KalmanFilterAutoregressive(icpd,fs/);
%

%   T. Kailath, A. H. Sayed, B. Hassibi, "Linear Estimation,"
%   Prentice Hall, 2000.
%
%   Version . JM
%
%   See also Lowpass, ParametricSpectrogram, and KernelFilter.

% Error Checking
if nargin<1,
    help KalmanFilterAutoregressive;
    return;
end

% Author-Specified Parameters
Nf = 2^9; % Zero padding to use for PSD estimation
Nt = ;    % No. times to evaluate the parametric spectrogram

% Process Function Arguments
fs = ;
if exist('fsa','var') & ~isempty(fsa),
    fs = fsa;
end

l = ; % Number of parameters (the letter l, not the number 1)
if exist('na','var') && ~isempty(na),
    l = na;
end

q = .*var(y); % Process noise variance
r = var(y);   % Measurement noise variance
p = e-;      % Initial state variance
if exist('vra','var') && ~isempty(vra),
    q = vra(1);
    r = vra(2);
    p = vra(3);
end

fd = ; % State transition matrix diagonal
if exist('fda','var') && ~isempty(fda),
    fd = fda;
end

sf = 0; % Smoothing flag. Default = no smoothing.
if exist('sfa') & ~isempty(sfa),
    sf = sfa;
end

pf = 0; % Default - no plotting
if nargout==0, % Plot if no output arguments
    pf = 1;
end
if exist('pfa') & ~isempty(pfa),
    pf = pfa;
end

% Preprocessing
y  = y(:);
ny = length(y); % No. samples
m  = l;  % Dimension of the state/process noise = dimension of the state
p  = 1;  % Dimension of the observations (y)
F  = fd; % State transition matrix (F)
G  = 1;  % Process noise matrix

% Variable Initialization
Q  = q*eye(m,m);    % Covariance matrix for state/process noise
R  = r*eye(p,p);    % Covariance matrix for observation noise
S  = zeros(m,p);    % Cross-covariance between process and observation noise
P  = p*eye(l,l);    % Initial state (x(0)) covariance matrix
H  = zeros(p,l);    % Initial value of H(i) at i=0
xh = zeros(l,1);    % Initial value of estimated state at time i=0
X  = zeros(l,ny);   % Estimates of the state at all times
K  = zeros(l,ny);   % Kalman gains
Px = zeros(l,l,ny); % State error covariances
yh = zeros(ny,1);   % Predicted outputs
e  = zeros(ny,1);   % Innovations

% Forward Recursions
for n = 1:ny,
    % Calculations for time n
    id = n - (1:l);       % Indices of observed signal to stuff in H(n)
    iv = find(id>0);      % Indices that are non-negative
    H  = zeros(1,l);
    H(iv) = y(id(iv));    % Update output matrix
    yh(n) = H*xh;         % Predicted value of the signal
    e(n)  = y(n) - yh(n); % Calculate the innovation at time n
    Re(n) = R + H*P*H';   % Update innovation covariance
    Kp = (F*P*H' + G*S)*inv(Re(n)); % Kalman filter (prediction) coefficients
    % Calculations for time n+1
    xh = F*xh + Kp*e(n);                 % Predicted state at time n+1
    P  = F*P*F' + G*Q*G' - Kp*Re(n)*Kp'; % Calculate state error covariance matrix
    if eig(P)<0,
        warning('Error covariance matrix is negative definite.');
    end
    % Save Info
    X(:,n)    = xh; % Store estimate at time n+1
    K(:,n)    = Kp;
    Px(:,:,n) = P;  % Store the state error covariance
end

% Backward Recursions (Bryson-Frazier Formulas)
if sf==1,
    Ps = zeros(l,l); % Smoothed state error covariance matrix
    L  = zeros(l,l);
    la = zeros(l,1);
    for n = ny:-1:1,
        % Calculations for time n
        id = (n-1) - (0:l-1); % Indices of the observed signal to stuff in H(n)
        iv = find(id>0);
        H  = zeros(1,l);
        H(iv) = y(id(iv));    % Calculate the output matrix H
        Pp = Px(:,:,n);       % Recall the predicted state error covariance
        Fp = F - K(:,n)*H;
        xh = X(:,n) + Pp*Fp'*la;
        la = Fp'*la + H'*inv(Re(n))*e(n);
        % Save Info
        X(:,n) = xh;   % Store smoothed estimate at time n
        yh(n)  = H*xh; % Store predicted estimate at time n
    end
end

% Post-processing
A = X;

% Plotting
if pf>=1,
    if pf~=0,
        figure;
        FigureSet();
        colormap(jet());
    else
        figure();
    end
    Pf = zeros(l,Nt);  % Frequencies of the poles
    Ry = zeros(Nf,Nt); % Memory allocation for spectral estimate
    k  = round(linspace(1,ny,Nt)); % Sample indices to evaluate spectral estimate at
    for c = 1:Nt,
        Ry(:,c) = abs(freqz(1,[1;-A(:,k(c))],Nf,fs));
        rts = roots([1;-A(:,k(c))]);          % Obtain the roots of the polynomial
        Pf(:,c) = sort(angle(rts)*fs/(2*pi)); % Store the sorted frequencies
    end
    t  = (k-.5)/fs;  % Times of estimate
    tl = 'Time (s)'; % Default label of time axis
    if t(end)>60,
        t  = t/60;
        tl = 'Time (min)';
    end
    f = (0:Nf-1)*fs/(2*Nf); % Frequencies of estimate

    % Spectrogram
    ha = axes('Position',[. . .8 .9]);
    s = reshape(abs(Ry),Nf*Nt,1);
    p = [0 prctile(s,97)];
    imagesc(t,f,Ry,p);
    xlim([0 t(end)]);
    ylim([0 f(end)]);
    set(ha,'YAxisLocation','Right');
    set(ha,'XAxisLocation','Top');
    set(ha,'YDir','normal');
    AxisSet;
    title(sprintf('$\\ell$=%d $q$=%.g $r$=%.g $\\rho_$=%.g $f_d$=%.g',l,q,r,p,fd));

    % Colorbar
    ha = axes('Position',[. . . .9]);
    colorbar(ha);
    set(ha,'Box','Off');
    set(ha,'YTick',[]);

    % Power Spectral Density
    ha = axes('Position',[.8 . . .9]);
    psd = mean((Ry.^2).');
    h = plot(psd,f,'r'); %,[Smin Smax]);
    set(h,'LineWidth',.8);
    ylim([0 f(end)]);
    xlim([0 .*max(psd)]);
    ylabel('');
    set(ha,'XTick',[]);

    % Time series
    ha = axes('Position',[. . .8 .]);
    h = plot(t,y(k));
    set(h,'LineWidth',.8);
    ymin = min(y);
    ymax = max(y);
    yrng = ymax - ymin;
    ymin = ymin - .*yrng;
    ymax = ymax + .*yrng;
    xlim([0 t(end)]);
    ylim([ymin ymax]);
    xlabel(tl);
    ylabel('');
    axes(ha);

    % Spectrogram with Poles
    % rg = prctile(reshape(Ry,prod(size(Ry)),1),[ 98]);
    % h = imagesc(t,f,Ry,rg);
    % hold on;
    % h = plot(t,Pf,'k.');
    % set(h,'MarkerSize', );
    % h = plot(t,Pf,'w.');
    % set(h,'MarkerSize', );
    % hold off;
    % set(gca,'YDir','Normal');
    % xlabel('');
    % ylabel('');
    % title(sprintf('q=%.g r=%.g p=%.g',q,r,p));
    % box off;
    % AxisSet;
    % colorbar;

    if pf~=0,
        figure;
        FigureSet();
        colormap(jet());
    else
        figure();
    end
    h = imagesc(A(:,k));
    set(gca,'YDir','Normal');
    xlabel('');
    ylabel('Lag (k)');
    box off;
    colorbar;
    AxisSet;
end

% Process Return Arguments
if nargout==0,
    clear('A','yh');
end

Example 3: Chirp KF AR Model versus Spectrogram

Compare the time-frequency resolution of the PSD estimated from the autoregressive parameters estimated with a Kalman filter to a nonparametric spectrogram.

[Figures: nonparametric spectrogram of the chirp signal; Example 3 KF parametric spectrograms for the two smallest values of $q$.]

[Figures: Example 3 KF parametric spectrograms of the chirp signal for intermediate values of $q$.]

[Figures: Example 3 KF parametric spectrograms of the chirp signal for the largest values of $q$.]

Example 3: MATLAB Code

clear; close all;

% User-Specified Parameters
fs = ; % Sample rate (Hz)
N  = ; % Number of observations from the process
Np = ; % Number of samples to throw away to account for transient
k  = 1:N/4;
t  = (k-.5)/fs;
yc = chirp(t, ., t(end), .);
y  = [yc fliplr(yc) yc fliplr(yc)]';

% Spectrogram
NonparametricSpectrogram(y, fs, );
FigureSet(, 'LTX');
print('KFChirpSpectrogram', '-depsc');

% Kalman Filter Estimates
q = [e-7 e- e- e- e- e];
for c = 1:length(q),
    figure; clf;
    KalmanFilterAutoregressive(y,fs,,[q(c) var(y) ]);
    close;
    FigureSet(, 'LTX');
    print(sprintf('KFChirpQ%d', round(log(q(c)))), '-depsc');
end

Example 4: ICP KF AR Model versus Spectrogram

Compare the time-frequency resolution of the PSD estimated from the autoregressive parameters estimated with a Kalman filter to a nonparametric spectrogram for estimating the PSD of an intracranial pressure signal.

[Figures: nonparametric spectrogram of the ICP signal; Example 4 KF parametric spectrogram with the signal mean intact (non-zero mean); Example 4 KF parametric spectrogram of the zero-mean signal for the first value of $q$.]

[Figures: Example 4 KF parametric spectrograms of the ICP signal for the remaining values of $q$, including the largest.]

Example 4: MATLAB Code

clear; close all;

% Load the Data
load('ICP.mat');
y  = decimate(icp, );
fs = fs/;

% Spectrogram
NonparametricSpectrogram(y, fs, );
FigureSet(, 'LTX');
print('KFICPSpectrogram', '-depsc');

% Parametric KF Spectrogram with Mean Intact
KalmanFilterAutoregressive(y,fs,,[e-*var(y) var(y) ]);
close;
FigureSet(, 'LTX');
print('KFICPNonzeroMean', '-depsc');

% Kalman Filter Estimates
y = y - mean(y);
q = [e- e- e- e];
for c = 1:length(q),

    figure; clf;
    KalmanFilterAutoregressive(y,fs,,[q(c)*var(y) var(y) ]);
    close;
    FigureSet(, 'LTX');
    print(sprintf('KFICPQ%d', round(log(q(c)))), '-depsc');
end

AR Estimation Observations

- Usually the user picks $R_n = rI$, $Q_n = qI$, and $\Pi_0 = \rho I$
- The estimates are primarily determined by the ratio $\lambda = q/r$
- The parameter $\rho$ controls how quickly the model converges from the initial parameter values
- $\lambda$ and $l$ control the bias-variance tradeoff
- The estimate is surprisingly robust to these parameters

Predicted and Filtered Estimates

- There are many equivalent forms of the Kalman filter; so far we have only derived the so-called covariance form
- The covariance form creates one-step predicted estimates of the state, $\hat{x}_{n|n-1}$
- It may be useful to have filtered estimates as well, $\hat{x}_{n|n}$
- We can also generalize so that we can obtain $\hat{x}_{n|k}$ for any $k$
- For now we will only focus on filtered estimates

Measurement and Time Updates

Suppose we have computed the predicted estimate $\hat{x}_{n|n-1}$ and we wish to obtain the filtered estimate:
$$\hat{x}_{n|n} = \sum_{k=0}^{n}\langle x_n, e_k\rangle R_{e,k}^{-1}\, e_k = \left(\sum_{k=0}^{n-1}\langle x_n, e_k\rangle R_{e,k}^{-1}\, e_k\right) + \langle x_n, e_n\rangle R_{e,n}^{-1}\, e_n$$
$$= \hat{x}_{n|n-1} + \langle x_n, e_n\rangle R_{e,n}^{-1}\, e_n = \hat{x}_{n|n-1} + K_{f,n}\, e_n$$
where
$$K_{f,n} \triangleq \langle x_n, e_n\rangle R_{e,n}^{-1}$$
$K_{f,n}$ is called the filtered Kalman gain. But we still need to solve for $\langle x_n, e_n\rangle$.

Solving for the Filtered Kalman Gain

$$\langle x_n, e_n\rangle = \big\langle x_n, H_n\tilde{x}_{n|n-1} + v_n\big\rangle = \big\langle x_n, \tilde{x}_{n|n-1}\big\rangle H_n^* + \langle x_n, v_n\rangle = \big\langle \hat{x}_{n|n-1} + \tilde{x}_{n|n-1},\, \tilde{x}_{n|n-1}\big\rangle H_n^* = \big\langle \tilde{x}_{n|n-1}, \tilde{x}_{n|n-1}\big\rangle H_n^* = P_{n|n-1} H_n^*$$
Thus
$$K_{f,n} = \langle x_n, e_n\rangle R_{e,n}^{-1} = P_{n|n-1} H_n^* R_{e,n}^{-1}$$

Filtered State Error Covariance

We don't need it, but it would also be nice to know the filtered state error covariance:
$$P_{n|n} \triangleq \big\langle \tilde{x}_{n|n}\big\rangle, \qquad \tilde{x}_{n|n} = x_n - \hat{x}_{n|n}$$
$$P_{n|n} = \big\langle \tilde{x}_{n|n},\, x_n - \hat{x}_{n|n}\big\rangle = \big\langle \tilde{x}_{n|n}, x_n\big\rangle \qquad\text{(the error is orthogonal to the estimate)}$$
$$= \big\langle x_n - \hat{x}_{n|n-1} - K_{f,n}e_n,\, x_n\big\rangle = \big\langle \tilde{x}_{n|n-1}, x_n\big\rangle - K_{f,n}\langle e_n, x_n\rangle$$
$$= P_{n|n-1} - K_{f,n}\big\langle H_n\tilde{x}_{n|n-1} + v_n,\, x_n\big\rangle = P_{n|n-1} - K_{f,n}H_n P_{n|n-1}$$
$$P_{n|n} = P_{n|n-1} - P_{n|n-1}H_n^* R_{e,n}^{-1} H_n P_{n|n-1}$$

Time Updates: State Estimate

Similarly, we can obtain $\hat{x}_{n+1|n}$ and $P_{n+1|n}$ from the filtered values $\hat{x}_{n|n}$ and $P_{n|n}$:
$$x_{n+1} = F_n x_n + G_n u_n \quad\Longrightarrow\quad \hat{x}_{n+1|n} = F_n\hat{x}_{n|n} + G_n\hat{u}_{n|n}$$
$$\hat{u}_{n|n} = \sum_{k=0}^{n}\langle u_n, e_k\rangle R_{e,k}^{-1}\, e_k = \langle u_n, e_n\rangle R_{e,n}^{-1}\, e_n$$
$$\langle u_n, e_n\rangle = \big\langle u_n, H_n\tilde{x}_{n|n-1} + v_n\big\rangle = \langle u_n, v_n\rangle = S_n$$
$$\hat{u}_{n|n} = S_n R_{e,n}^{-1}\, e_n$$
$$\hat{x}_{n+1|n} = F_n\hat{x}_{n|n} + G_n S_n R_{e,n}^{-1}\, e_n$$

Time Updates: Covariance

$$\tilde{x}_{n+1|n} = x_{n+1} - \hat{x}_{n+1|n} = \left(F_n x_n + G_n u_n\right) - \left(F_n\hat{x}_{n|n} + G_n S_n R_{e,n}^{-1} e_n\right) = F_n\tilde{x}_{n|n} + G_n u_n - G_n S_n R_{e,n}^{-1} e_n$$
The cross term between the first two pieces is
$$\big\langle F_n\tilde{x}_{n|n},\, G_n u_n\big\rangle = \big\langle F_n\left(\tilde{x}_{n|n-1} - K_{f,n}e_n\right),\, G_n u_n\big\rangle = -F_n K_{f,n} S_n^* G_n^*$$
since $\tilde{x}_{n|n-1}\perp u_n$ and $\langle e_n, u_n\rangle = S_n^*$. Collecting all terms (and using $\langle \tilde{x}_{n|n}, e_n\rangle = 0$),
$$P_{n+1|n} = F_n P_{n|n} F_n^* - F_n K_{f,n} S_n^* G_n^* - G_n S_n K_{f,n}^* F_n^* + G_n\left(Q_n - S_n R_{e,n}^{-1} S_n^* - S_n R_{e,n}^{-1} S_n^* + S_n R_{e,n}^{-1} S_n^*\right)G_n^*$$
$$= F_n P_{n|n} F_n^* - F_n K_{f,n} S_n^* G_n^* - G_n S_n K_{f,n}^* F_n^* + G_n\left(Q_n - S_n R_{e,n}^{-1} S_n^*\right)G_n^*$$

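One useful consistency check between the two gains, implied by the expressions above and verified numerically here with hypothetical matrices: $K_{p,n} = F_n K_{f,n} + G_n S_n R_{e,n}^{-1}$.

    % Illustrative check of the relation between prediction and filtered gains.
    rng(1);
    l = 3; m = 3; p = 2;                  % hypothetical dimensions
    F = randn(l); G = randn(l,m); H = randn(p,l);
    A = randn(m+p); C = A*A';             % random PSD joint noise covariance
    Q = C(1:m,1:m); S = C(1:m,m+1:end); R = C(m+1:end,m+1:end);
    P  = eye(l);                          % some P_{n|n-1}
    Re = H*P*H' + R;
    Kp = (F*P*H' + G*S)/Re;               % prediction gain
    Kf = P*H'/Re;                         % filtered gain
    disp(norm(Kp - (F*Kf + G*S/Re)));     % ~ 0, up to round-off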
Time Updates: Covariance Continued

$$P_{n+1|n} = F_n P_{n|n} F_n^* - F_n K_{f,n} S_n^* G_n^* - G_n S_n K_{f,n}^* F_n^* + G_n\left(Q_n - S_n R_{e,n}^{-1} S_n^*\right)G_n^*$$
When $S_n = 0$, this simplifies considerably to
$$P_{n+1|n} = F_n P_{n|n} F_n^* + G_n Q_n G_n^*$$

Time and Measurement Updates

Initialization:
$$\hat{x}_{0|-1} = 0, \qquad P_{0|-1} = \Pi_0$$

Recursions:
$$e_n = y_n - H_n\hat{x}_{n|n-1}$$
$$R_{e,n} = H_n P_{n|n-1} H_n^* + R_n$$
$$K_{f,n} = P_{n|n-1} H_n^* R_{e,n}^{-1}$$
Measurement update:
$$\hat{x}_{n|n} = \hat{x}_{n|n-1} + K_{f,n}\, e_n$$
$$P_{n|n} = P_{n|n-1} - P_{n|n-1} H_n^* R_{e,n}^{-1} H_n P_{n|n-1}$$
Time update:
$$\hat{x}_{n+1|n} = F_n\hat{x}_{n|n} + G_n S_n R_{e,n}^{-1}\, e_n$$
$$P_{n+1|n} = F_n P_{n|n} F_n^* - F_n K_{f,n} S_n^* G_n^* - G_n S_n K_{f,n}^* F_n^* + G_n\left(Q_n - S_n R_{e,n}^{-1} S_n^*\right)G_n^*$$

The filter alternates between measurement updates (m.u.) and time updates (t.u.):
$$\hat{x}_{0|-1} \xrightarrow{\text{m.u.}} \hat{x}_{0|0} \xrightarrow{\text{t.u.}} \hat{x}_{1|0} \xrightarrow{\text{m.u.}} \hat{x}_{1|1} \xrightarrow{\text{t.u.}} \hat{x}_{2|1} \xrightarrow{\text{m.u.}} \cdots$$
$$\Pi_0 = P_{0|-1} \xrightarrow{\text{m.u.}} P_{0|0} \xrightarrow{\text{t.u.}} P_{1|0} \xrightarrow{\text{m.u.}} P_{1|1} \xrightarrow{\text{t.u.}} P_{2|1} \xrightarrow{\text{m.u.}} \cdots$$

Separating the measurement and time updates permits us to obtain two estimates of the state:

- The a priori estimate, $\hat{x}_{n|n-1}$, before the $n$th measurement has been obtained
- The a posteriori estimate, $\hat{x}_{n|n}$, after the measurement is taken

If a measurement is lost (drop out), we can just use the predicted value.

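A minimal MATLAB sketch of these recursions on a hypothetical scalar model, with the measurement and time updates kept as separate steps (so a dropped measurement could simply skip the measurement update):

    % Minimal sketch of the measurement/time-update form (values hypothetical).
    F = 0.95; G = 1; H = 1; Q = 0.1; R = 1; S = 0;
    N  = 100;
    y  = randn(N,1); % stand-in measurements, for illustration only
    xh = 0; P = 1;   % xh_{0|-1} = 0, P_{0|-1} = Pi_0
    for n = 1:N
        % Measurement update: a priori -> a posteriori
        e  = y(n) - H*xh;
        Re = H*P*H' + R;
        Kf = P*H'/Re;
        xf = xh + Kf*e;        % xh_{n|n}
        Pf = P - P*H'/Re*H*P;  % P_{n|n}
        % (If y(n) were lost, skip the step above and keep xh, P.)
        % Time update: a posteriori -> next a priori
        xh = F*xf + G*S/Re*e;  % xh_{n+1|n}
        P  = F*Pf*F' - F*Kf*S'*G' - G*S*Kf'*F' + G*(Q - S/Re*S')*G';
    end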

More information

Final Exam January 31, Solutions

Final Exam January 31, Solutions Final Exam January 31, 014 Signals & Systems (151-0575-01) Prof. R. D Andrea & P. Reist Solutions Exam Duration: Number of Problems: Total Points: Permitted aids: Important: 150 minutes 7 problems 50 points

More information

ADSP ADSP ADSP ADSP. Advanced Digital Signal Processing (18-792) Spring Fall Semester, Department of Electrical and Computer Engineering

ADSP ADSP ADSP ADSP. Advanced Digital Signal Processing (18-792) Spring Fall Semester, Department of Electrical and Computer Engineering Advanced Digital Signal rocessing (18-792) Spring Fall Semester, 201 2012 Department of Electrical and Computer Engineering ROBLEM SET 8 Issued: 10/26/18 Due: 11/2/18 Note: This problem set is due Friday,

More information

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω ECO 513 Spring 2015 TAKEHOME FINAL EXAM (1) Suppose the univariate stochastic process y is ARMA(2,2) of the following form: y t = 1.6974y t 1.9604y t 2 + ε t 1.6628ε t 1 +.9216ε t 2, (1) where ε is i.i.d.

More information

Data assimilation with and without a model

Data assimilation with and without a model Data assimilation with and without a model Tim Sauer George Mason University Parameter estimation and UQ U. Pittsburgh Mar. 5, 2017 Partially supported by NSF Most of this work is due to: Tyrus Berry,

More information

On corrections of classical multivariate tests for high-dimensional data

On corrections of classical multivariate tests for high-dimensional data On corrections of classical multivariate tests for high-dimensional data Jian-feng Yao with Zhidong Bai, Dandan Jiang, Shurong Zheng Overview Introduction High-dimensional data and new challenge in statistics

More information

TSRT14: Sensor Fusion Lecture 8

TSRT14: Sensor Fusion Lecture 8 TSRT14: Sensor Fusion Lecture 8 Particle filter theory Marginalized particle filter Gustaf Hendeby gustaf.hendeby@liu.se TSRT14 Lecture 8 Gustaf Hendeby Spring 2018 1 / 25 Le 8: particle filter theory,

More information

High-resolution Parametric Subspace Methods

High-resolution Parametric Subspace Methods High-resolution Parametric Subspace Methods The first parametric subspace-based method was the Pisarenko method,, which was further modified, leading to the MUltiple SIgnal Classification (MUSIC) method.

More information

Classic Time Series Analysis

Classic Time Series Analysis Classic Time Series Analysis Concepts and Definitions Let Y be a random number with PDF f Y t ~f,t Define t =E[Y t ] m(t) is known as the trend Define the autocovariance t, s =COV [Y t,y s ] =E[ Y t t

More information

Prediction, filtering and smoothing using LSCR: State estimation algorithms with guaranteed confidence sets

Prediction, filtering and smoothing using LSCR: State estimation algorithms with guaranteed confidence sets 2 5th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC) Orlando, FL, USA, December 2-5, 2 Prediction, filtering and smoothing using LSCR: State estimation algorithms with

More information

Least Squares. Ken Kreutz-Delgado (Nuno Vasconcelos) ECE 175A Winter UCSD

Least Squares. Ken Kreutz-Delgado (Nuno Vasconcelos) ECE 175A Winter UCSD Least Squares Ken Kreutz-Delgado (Nuno Vasconcelos) ECE 75A Winter 0 - UCSD (Unweighted) Least Squares Assume linearity in the unnown, deterministic model parameters Scalar, additive noise model: y f (

More information

Lecture 7: Optimal Smoothing

Lecture 7: Optimal Smoothing Department of Biomedical Engineering and Computational Science Aalto University March 17, 2011 Contents 1 What is Optimal Smoothing? 2 Bayesian Optimal Smoothing Equations 3 Rauch-Tung-Striebel Smoother

More information

Statistics 910, #5 1. Regression Methods

Statistics 910, #5 1. Regression Methods Statistics 910, #5 1 Overview Regression Methods 1. Idea: effects of dependence 2. Examples of estimation (in R) 3. Review of regression 4. Comparisons and relative efficiencies Idea Decomposition Well-known

More information

Recursive Generalized Eigendecomposition for Independent Component Analysis

Recursive Generalized Eigendecomposition for Independent Component Analysis Recursive Generalized Eigendecomposition for Independent Component Analysis Umut Ozertem 1, Deniz Erdogmus 1,, ian Lan 1 CSEE Department, OGI, Oregon Health & Science University, Portland, OR, USA. {ozertemu,deniz}@csee.ogi.edu

More information

Open Economy Macroeconomics: Theory, methods and applications

Open Economy Macroeconomics: Theory, methods and applications Open Economy Macroeconomics: Theory, methods and applications Lecture 4: The state space representation and the Kalman Filter Hernán D. Seoane UC3M January, 2016 Today s lecture State space representation

More information

HOMEWORK 3: Phase portraits for Mathmatical Models, Analysis and Simulation, Fall 2010 Report due Mon Oct 4, Maximum score 7.0 pts.

HOMEWORK 3: Phase portraits for Mathmatical Models, Analysis and Simulation, Fall 2010 Report due Mon Oct 4, Maximum score 7.0 pts. HOMEWORK 3: Phase portraits for Mathmatical Models, Analysis and Simulation, Fall 2010 Report due Mon Oct 4, 2010. Maximum score 7.0 pts. Three problems are to be solved in this homework assignment. The

More information

Understanding and Application of Kalman Filter

Understanding and Application of Kalman Filter Dept. of Aerospace Engineering April 2, 2016 Contents I Introduction II Kalman Filter Algorithm III Simulation IV Future plans Kalman Filter? Rudolf E.Kálmán(1930 ) 1960 년대루돌프칼만에의해 개발 Nosie 가포함된역학시스템 상태를재귀필터를이용해

More information

Math 515 Fall, 2008 Homework 2, due Friday, September 26.

Math 515 Fall, 2008 Homework 2, due Friday, September 26. Math 515 Fall, 2008 Homework 2, due Friday, September 26 In this assignment you will write efficient MATLAB codes to solve least squares problems involving block structured matrices known as Kronecker

More information

ECE531 Lecture 10b: Dynamic Parameter Estimation: System Model

ECE531 Lecture 10b: Dynamic Parameter Estimation: System Model ECE531 Lecture 10b: Dynamic Parameter Estimation: System Model D. Richard Brown III Worcester Polytechnic Institute 02-Apr-2009 Worcester Polytechnic Institute D. Richard Brown III 02-Apr-2009 1 / 14 Introduction

More information

Chapter 2 Time-Domain Representations of LTI Systems

Chapter 2 Time-Domain Representations of LTI Systems Chapter 2 Time-Domain Representations of LTI Systems 1 Introduction Impulse responses of LTI systems Linear constant-coefficients differential or difference equations of LTI systems Block diagram representations

More information

Chapter 10. Timing Recovery. March 12, 2008

Chapter 10. Timing Recovery. March 12, 2008 Chapter 10 Timing Recovery March 12, 2008 b[n] coder bit/ symbol transmit filter, pt(t) Modulator Channel, c(t) noise interference form other users LNA/ AGC Demodulator receive/matched filter, p R(t) sampler

More information

On corrections of classical multivariate tests for high-dimensional data. Jian-feng. Yao Université de Rennes 1, IRMAR

On corrections of classical multivariate tests for high-dimensional data. Jian-feng. Yao Université de Rennes 1, IRMAR Introduction a two sample problem Marčenko-Pastur distributions and one-sample problems Random Fisher matrices and two-sample problems Testing cova On corrections of classical multivariate tests for high-dimensional

More information

Factor Analysis and Kalman Filtering (11/2/04)

Factor Analysis and Kalman Filtering (11/2/04) CS281A/Stat241A: Statistical Learning Theory Factor Analysis and Kalman Filtering (11/2/04) Lecturer: Michael I. Jordan Scribes: Byung-Gon Chun and Sunghoon Kim 1 Factor Analysis Factor analysis is used

More information

EE266 Homework 3 Solutions

EE266 Homework 3 Solutions EE266, Spring 2014-15 Professor S. Lall EE266 Homework 3 Solutions 1. Second passage time. In this problem we will consider the following Markov chain. Note that self-loops are omitted from this figure.

More information

Assignment 2 Solutions Fourier Series

Assignment 2 Solutions Fourier Series Assignment 2 Solutions Fourier Series ECE 223 Signals and Systems II Version.2 Spring 26. DT Fourier Series. Consider the following discrete-time periodic signal n = +7l n =+7l x[n] = n =+7l Otherwise

More information

EE363 homework 5 solutions

EE363 homework 5 solutions EE363 Prof. S. Boyd EE363 homework 5 solutions 1. One-step ahead prediction of an autoregressive time series. We consider the following autoregressive (AR) system p t+1 = αp t + βp t 1 + γp t 2 + w t,

More information

MATH 552 Spectral Methods Spring Homework Set 5 - SOLUTIONS

MATH 552 Spectral Methods Spring Homework Set 5 - SOLUTIONS MATH 55 Spectral Methods Spring 9 Homework Set 5 - SOLUTIONS. Suppose you are given an n n linear system Ax = f where the matrix A is tridiagonal b c a b c. A =.........,. a n b n c n a n b n with x =

More information

Name of the Student: Problems on Discrete & Continuous R.Vs

Name of the Student: Problems on Discrete & Continuous R.Vs Engineering Mathematics 05 SUBJECT NAME : Probability & Random Process SUBJECT CODE : MA6 MATERIAL NAME : University Questions MATERIAL CODE : JM08AM004 REGULATION : R008 UPDATED ON : Nov-Dec 04 (Scan

More information

Data-driven methods in application to flood defence systems monitoring and analysis Pyayt, A.

Data-driven methods in application to flood defence systems monitoring and analysis Pyayt, A. UvA-DARE (Digital Academic Repository) Data-driven methods in application to flood defence systems monitoring and analysis Pyayt, A. Link to publication Citation for published version (APA): Pyayt, A.

More information