x(n) = -Σ_{k=1}^{P} a_k x(n-k) + w(n)


Autoregressive Models

Overview
- Direct structures
- Types of estimators
- Parametric spectral estimation
- Parametric time-frequency analysis
- Order selection criteria
- Lattice structures?
- Maximum entropy
- Excitation with line spectra

Direct Structures

    x(n) = -Σ_{k=1}^{P} a_k x(n-k) + w(n)

- Notation differs (again) from the text
- Essentially all of the techniques that we discussed for FIR filters can be applied
- Many ways to estimate
- How to estimate the autocorrelation matrix?
- Degree of windowing (full, pre-, post-, no/short)
- Weighted least squares?

J. McNames Portland State University ECE 539/639 Autoregressive Models

Properties

    x(n) = -Σ_{k=1}^{P} a_k x(n-k) + w(n)

If R̂ is positive definite (and Toeplitz?), the model will be
- Stable
- Causal
- Minimum phase
- Invertible

True of all the estimators except the proposed unbiased technique.

Problem Unification

    x(n) = -Σ_{k=1}^{P} a_k x(n-k) + w(n)

AR modeling is approximately equivalent to several other useful problems:
- Estimating the coefficients of a whitening filter
- One-step ahead prediction
- Maximum entropy signal modeling

In the MSE case, if the process is minimum phase, these are exactly equivalent.
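The direct least-squares estimate of the AR coefficients can be sketched in a few lines of Python/NumPy. This is an illustrative sketch, not the course code; the function name `ar_lstsq` and the test process are hypothetical. It builds the data matrix of lagged samples and solves the normal equations implied by the model above (with a_0 = 1).

```python
import numpy as np

def ar_lstsq(x, P):
    """Least-squares AR(P) fit for x(n) = -sum_k a_k x(n-k) + w(n), a_0 = 1."""
    # Data matrix of lagged samples: column k holds x(n-k) for n = P..N-1
    X = np.column_stack([x[P - k:len(x) - k] for k in range(1, P + 1)])
    y = x[P:]
    a, *_ = np.linalg.lstsq(X, -y, rcond=None)   # minimize ||y + X a||^2
    sigma2 = np.mean((y + X @ a) ** 2)           # residual (excitation) variance
    return np.concatenate(([1.0], a)), sigma2

# Drive a stable AR(2) filter with white noise, then re-estimate the coefficients
rng = np.random.default_rng(0)
w = rng.standard_normal(100_000)
a_true = np.array([1.0, -0.9, 0.5])              # A(z) = 1 - 0.9 z^-1 + 0.5 z^-2
x = np.zeros_like(w)
for n in range(len(w)):
    x[n] = w[n] - a_true[1] * x[n - 1] - a_true[2] * x[n - 2]
a_hat, s2 = ar_lstsq(x, 2)
```

With enough data the estimated coefficients converge to the true ones and the residual variance approaches the excitation variance (here 1).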

Windowing

We have discussed three types of windowing:
- Data windowing: x_w(n) = w(n) x(n). Used to reduce spectral leakage of nonparametric PSD estimators.
- Correlation windowing: r_w(l) = r̂(l) w(l). Used to reduce variance of nonparametric PSD estimators.
- Weighted least squares: E_e = Σ_{n=0}^{N-1} w_n^2 |y(n) - c^T x(n)|^2. Used to weight the influence of some observations more than others. Optimal when the error variance is non-constant, in the deterministic data matrix case.

Autoregressive Estimation Windowing

- Parametric windowing is mostly reserved for nonstationary applications (time-frequency analysis)
- The text seems to implicitly suggest using data windowing
- This is a bad idea! It biases the estimate, with no obvious gain
- It is much better to perform a weighted least squares:
  - Each row of the data matrix and output still corresponds to a specific time
  - The estimate is unbiased
  - Permits you to weight the influence of points near the center of the observation window (block) more heavily
  - Can be used with any non-negative window (need not be positive definite)

Frequency Domain Representation of the Error Signal

    x(n) = h(n) * w(n)
    e(n) = ŵ(n) = ĥ_i(n) * x(n)
    E(e^{jω}) = Ĥ_i(e^{jω}) X(e^{jω}) = X(e^{jω}) / Ĥ(e^{jω})
    E = Σ_{n=-∞}^{∞} |e(n)|^2 = (1/2π) ∫_{-π}^{π} |X(e^{jω})|^2 / |Ĥ(e^{jω})|^2 dω

- In the AR case, H(z) = 1/A(z)
- Note that the frequency domain equations only exist if the signals are energy signals (segments of stationary processes)
- This means solving for the coefficients that minimize the error also minimizes the integral of the ratio of the ESDs

AR Spectral Estimation

    E = Σ_{n=-∞}^{∞} |e(n)|^2 = (1/2π) ∫_{-π}^{π} |X(e^{jω})|^2 / |Ĥ(e^{jω})|^2 dω
      = (1/2π) ∫_{-π}^{π} |X(e^{jω})|^2 |Â(e^{jω})|^2 dω

- This means solving for the coefficients that minimize the error also minimizes the integral of the ratio of the ESDs
- Why can't we just make Ĥ_i(e^{jω}) large or Â(e^{jω}) small at all frequencies?
- Recall the constraint a_0 = 1 in â = [1 â_1 ... â_P]^T:

    a_k = (1/2π) ∫_{-π}^{π} A(e^{jω}) e^{jωk} dω
    a_0 = (1/2π) ∫_{-π}^{π} A(e^{jω}) dω = 1

- Thus A(e^{jω}) is constrained to have unit area

AR Spectral Estimation Properties

    E = Σ_{n=-∞}^{∞} |e(n)|^2 = (1/2π) ∫_{-π}^{π} |X(e^{jω})|^2 / |Ĥ(e^{jω})|^2 dω
      = (1/2π) ∫_{-π}^{π} |X(e^{jω})|^2 |Â(e^{jω})|^2 dω

- The frequency domain representation of the error makes it clear that the impact of the error across the full frequency range is uniform
- There is no benefit to fitting lower or higher frequency ranges more accurately, in general
- The regions where |X(e^{jω})| > |Ĥ(e^{jω})| contribute more to the total error
- The error will be minimized if Ĥ(e^{jω}) is larger in these regions
- This is part of the reason the estimate is more accurate near spectral peaks than valleys
- Nonparametric estimators were also more accurate near peaks, but for very different reasons (spectral leakage)

Bias-Variance Tradeoff

The order of the parametric model is the key parameter that controls the tradeoff between bias and variance.

P too large:
- The variance is manifest differently than in nonparametric estimators
- The spectrum may contain spurious peaks
- It is also possible for a single frequency component to be split into distinct peaks

P too small:
- Insufficient peaks
- Peaks that are present are too wide or have the wrong shape
- Can only do so much with a pair of complex-conjugate poles

Order Selection Problem

    MSE_u = (1/(N_f - N_i + 1 - P)) Σ_{n=N_i}^{N_f} |e(n)|^2    (unbiased)
    MSE_b = (1/(N_f - N_i + 1)) Σ_{n=N_i}^{N_f} |e(n)|^2        (biased)

- Goal: select the value of P that minimizes the MSE
- We have two estimates of the MSE: one from Chapter 8 and one from Chapter 9
- The one from Chapter 8 was unbiased
- Why don't we use it to obtain the best value of P? It is only unbiased in the deterministic data matrix case, and the data matrix is always stochastic with AR models

Order Selection Methods Concept

- There are many order selection criteria
- All try to obtain an approximately unbiased estimate of the MSE
- All essentially add a penalty as the unscaled error increases
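The penalized-error idea can be sketched by fitting models of increasing order and scoring each fit, using AIC and MDL as representative criteria. This is an illustrative sketch with a hypothetical `ar_fit` helper and a simulated AR(2) process, not the course code.

```python
import numpy as np

def ar_fit(x, P):
    """Least-squares AR(P) fit; returns coefficients and residual variance."""
    X = np.column_stack([x[P - k:len(x) - k] for k in range(1, P + 1)])
    y = x[P:]
    a, *_ = np.linalg.lstsq(X, -y, rcond=None)
    return a, np.mean((y + X @ a) ** 2)

# Simulate a stable AR(2) process, then score candidate orders
rng = np.random.default_rng(2)
w = rng.standard_normal(20_000)
x = np.zeros_like(w)
for n in range(len(w)):
    x[n] = w[n] + 0.9 * x[n - 1] - 0.5 * x[n - 2]
Nt = len(x)
orders = list(range(1, 11))
aic, mdl = [], []
for P in orders:
    _, s2 = ar_fit(x, P)
    aic.append(Nt * np.log(s2) + 2 * P)           # AIC(P) = Nt log s2 + 2P
    mdl.append(Nt * np.log(s2) + P * np.log(Nt))  # MDL(P) = Nt log s2 + P log Nt
best_mdl = orders[int(np.argmin(mdl))]
```

The residual variance alone always decreases with P; the log(Nt) penalty in MDL makes it consistent, so with this much data it settles at (or very near) the true order.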

Possible Order Selection Methods

Let N_t = N_f - N_i + 1. Then:

Model Error:
    σ̂_e^2 = (1/N_t) Σ_{n=N_i}^{N_f} |e(n)|^2

Final Prediction Error:
    FPE(P) = ((N_t + P)/(N_t - P)) σ̂_e^2

Akaike Information Criterion:
    AIC(P) = N_t log σ̂_e^2 + 2P

Minimum Description Length:
    MDL(P) = N_t log σ̂_e^2 + P log N_t

Criterion Autoregressive Transfer:
    CAT(P) = (1/N_t) Σ_{k=1}^{P} (N_t - k)/(N_t σ̂_k^2) - (N_t - P)/(N_t σ̂_P^2)

Order Selection Methods Comments

- The order selection decision is only critical when P is on the same order as N_t
- This is the difficult case: too little data to make a good decision
- None of the order selection criteria work well in this case
- Otherwise, similar performance will be obtained for a wide range of values of P
- Can simply pick the value where the estimated parameter vector stops changing with increasing values of P
- The text suggests looking at diagnostic plots (residuals, ACF, PACF, etc.) and selecting the parameter manually; this works well in many applications

Maximum Entropy Motivation

This discussion is based on [1].

- Suppose again that we have N observations of a random process x(n)
- If we knew the autocorrelation, we could calculate the AR parameters exactly (recall Chapters 4 and 6)
- With only N observations, at most we can estimate r(l) only for |l| < N
- Many signals have autocorrelations that are nonzero for |l| ≥ N
- The segmentation may significantly impair the accuracy of the estimated parameters
- Especially true of narrowband processes
- Nonparametric PSD estimation methods simply extrapolate the estimate r̂(l) with zeros: r̂(l) = 0 for |l| ≥ N
- Can we do better?

Maximum Entropy Problem

- Suppose we know (i.e., have estimated) the autocorrelation of a WSS process for |l| ≤ P
- How do we extrapolate the estimated autocorrelation sequence for |l| > P?
- Let us denote the extrapolated values by r_e(l). Then the estimated PSD is given by

    R̂_x(e^{jω}) = Σ_{l=-P}^{P} r_x(l) e^{-jωl} + Σ_{|l|>P} r_e(l) e^{-jωl}

- We would like R̂_x(e^{jω}) to have the same properties as a real PSD: real-valued and nonnegative
- These conditions are not sufficient for a unique extrapolation
- Need an additional constraint

Maximum Entropy Concept

    R̂_x(e^{jω}) = Σ_{l=-P}^{P} r_x(l) e^{-jωl} + Σ_{|l|>P} r_e(l) e^{-jωl}

- Pick r_e(l) such that the entropy of the process is maximized
- Entropy is a measure of randomness or uncertainty
- Maximizing the entropy is equivalent to
  - Making the (estimated) process x(n) as white as possible
  - Making the (estimated) PSD as flat as possible
  - Placing as few constraints as possible on x(n)
  - Minimizing the amount of statistical structure

Entropy Defined

For a Gaussian random process with PSD R_x(e^{jω}), the entropy is given by

    H(x) = (1/2π) ∫_{-π}^{π} ln R_x(e^{jω}) dω

- The maximum entropy estimate is the one that maximizes this equation
- However, we have the constraints

    (1/2π) ∫_{-π}^{π} R_x(e^{jω}) e^{jωl} dω = r_x(l)    for |l| ≤ P

- Can solve by setting the partial derivative of H(x) with respect to r_e(l) to zero
- This is tricky because we're optimizing a function of a complex-valued parameter
- Details are in the appendix of the text; for now assume r_e(l) is real-valued

Maximum Entropy Estimate Derived

    R̂_x(e^{jω}) = Σ_{l=-P}^{P} r_x(l) e^{-jωl} + Σ_{|l|>P} r_e(l) e^{-jωl}

    H(x) = (1/2π) ∫_{-π}^{π} ln R_x(e^{jω}) dω

    ∂H(x)/∂r_e(l) = (1/2π) ∫_{-π}^{π} (1/R_x(e^{jω})) ∂R_x(e^{jω})/∂r_e(l) dω
                  = (1/2π) ∫_{-π}^{π} (1/R_x(e^{jω})) e^{-jωl} dω = 0    for |l| > P

Maximum Entropy Estimate Properties

Define Q_x(e^{jω}) ≜ 1/R_x(e^{jω}) with inverse DTFT

    q(l) = (1/2π) ∫_{-π}^{π} Q_x(e^{jω}) e^{jωl} dω

The condition above says q(l) = 0 for |l| > P, so

    1/R_x(e^{jω}) = Σ_{l=-P}^{P} q(l) e^{-jωl}

- The inverse DTFT of 1/R_x(e^{jω}) is zero for |l| > P
- Thus the maximum entropy PSD estimate for a Gaussian process is an all-pole power spectrum
- Can obtain the coefficients a from the Yule-Walker equations with the known (or estimated) autocorrelation sequence for |l| ≤ P
- If the autocorrelation sequence is of interest, it can be obtained using the Yule-Walker equations as well (covered in ECE 5/638)
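Since the maximum entropy extrapolation yields an all-pole spectrum, computing the estimate reduces to solving the Yule-Walker equations with the known autocorrelation lags. A sketch, assuming the exact AR(1) autocorrelation is given (the helper name `max_entropy_psd` is illustrative):

```python
import numpy as np

def max_entropy_psd(r, nfft=1024):
    """All-pole (maximum entropy) PSD from autocorrelation lags r(0), ..., r(P).

    Solves the Yule-Walker normal equations R a = -[r(1),...,r(P)]^T, then
    evaluates sigma^2 / |A(e^{jw})|^2 on nfft/2 + 1 frequencies in [0, pi].
    """
    r = np.asarray(r, float)
    P = len(r) - 1
    R = np.array([[r[abs(i - j)] for j in range(P)] for i in range(P)])  # Toeplitz
    a = np.linalg.solve(R, -r[1:])
    sigma2 = r[0] + np.dot(a, r[1:])              # excitation variance
    A = np.fft.rfft(np.concatenate(([1.0], a)), nfft)
    return sigma2 / np.abs(A) ** 2

# Check against an AR(1) process x(n) = rho x(n-1) + w(n):
# r(l) = rho^|l| / (1 - rho^2), and the true PSD is 1 / |1 - rho e^{-jw}|^2
rho = 0.8
r = [rho ** l / (1 - rho ** 2) for l in range(3)]
psd = max_entropy_psd(r)
```

Feeding it exact AR(1) lags recovers the true spectrum: the extra pole is placed at zero and the PSD matches 1/|1 - ρe^{-jω}|² at every frequency.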

Maximum Entropy Discussion

Using maximum entropy for stationary process estimation (PSD, autocorrelation, etc.) has been thoroughly discussed and debated.

Rationale:
- In the absence of any information, impose the least amount of structure on x(n)
- More reasonable than setting r(l) = 0 for large lags, as the nonparametric methods assume (?)

Counter-argument:
- The maximum entropy approach imposes an all-pole structure
- If the data are not all-pole, how do you know the estimated properties of interest will be accurate?

Bottom line: it depends on the process (application).

Sinusoidal Excitation

    x(n) = Σ_{k=1}^{L} A_k cos(ω_k n + φ_k)

- All-pole models can also be fit to signals that consist of sums of sinusoids
- If L < P, can minimize an error criterion comparing R_x(e^{jω}) with R̂_h(e^{jω}) at the sinusoid frequencies
- Not clear to me what the value of this model is
- Apparently problematic for nearly periodic signals

Example 1: Autoregressive PSD Estimation of a Chirp

Use a least-squares approach to autoregressive process estimation to perform a time-frequency analysis of the chirp signal.

[Figure: Autoregressive Spectrogram, Window: 5 s]
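The time-frequency analysis in this example can be approximated by fitting a low-order AR model to successive signal blocks and stacking the resulting spectra. A simplified Python sketch follows; the chirp parameters, block length, and model order here are illustrative, not those used for the slides.

```python
import numpy as np

def ar_psd(seg, P, nfft=256):
    """AR(P) PSD of one data segment via least squares: s2 / |A(e^{jw})|^2."""
    X = np.column_stack([seg[P - k:len(seg) - k] for k in range(1, P + 1)])
    y = seg[P:]
    a, *_ = np.linalg.lstsq(X, -y, rcond=None)
    s2 = np.mean((y + X @ a) ** 2)
    A = np.fft.rfft(np.concatenate(([1.0], a)), nfft)
    return s2 / np.abs(A) ** 2

fs = 1000.0
t = np.arange(0, 4.0, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 25 * t ** 2))    # chirp sweeping 50 -> 250 Hz
x += 0.01 * np.random.default_rng(3).standard_normal(len(x))  # small noise floor
wl, hop, P = 250, 250, 4                          # 0.25 s blocks, AR(4)
S = np.column_stack([ar_psd(x[i:i + wl], P)
                     for i in range(0, len(x) - wl + 1, hop)])
f = np.fft.rfftfreq(256, 1 / fs)
peaks = f[np.argmax(S, axis=0)]                   # ridge tracks the chirp frequency
```

Each column of S is one AR spectrum; the peak of each column follows the instantaneous frequency of the chirp, which is the narrow ridge visible in the spectrogram figures.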

Example 1: Nonparametric PSD vs. Autoregressive PSD

[Figures: Nonparametric Spectrogram, Window: 5 s, with signal trace; Autoregressive Spectrogram, Window: s, Filter Order: ]

[Figures: Nonparametric Spectrogram, Window: s, with signal trace; Autoregressive Spectrogram, Window: 2 s, Filter Order: ]

Example 1: Nonparametric PSD vs. Autoregressive PSD (continued)

[Figures: Nonparametric Spectrogram, Window: 2 s, with signal trace; Autoregressive Spectrogram, Window: 5 s, Filter Order: ]

[Figures: Autoregressive Spectrograms, Window: s and Window: 2 s, Filter Order: ]

Example 1: Autoregressive PSD, Estimator and Window Comparison

[Figures: Autoregressive Spectrograms, Window: 5 s, Filter Order: 5, for each combination of estimator (Unbiased/no window, Modified Covariance) and data window (Rectangular, Blackman)]

Example 1: MATLAB Code

clear; close all;

% User-Specified Parameters
fs = ;                % Sample rate (Hz)
N  = ;                % Number of observations from the process
Np = 5;               % Number of samples to throw away to account for transient

k  = 1:N/4;
t  = (k-0.5)/fs;
yc = chirp(t,.5,t(end),.45);
y  = [yc fliplr(yc) yc fliplr(yc)]';

% Parametric Spectrograms
fid = fopen('ARChirp.tex','w');
if fid==-1,
    error('Could not open ARChirp.tex file for writing');
end;

fo = [2 ];
wl = [5 2];
for c = 1:length(wl),
    AutoregressiveSpectrogram(y,fs,wl(c),fo(c),1,1);
    colormap(colorspiral(256));
    title(sprintf('Autoregressive Spectrogram   Window:%d s   Filter Order:%d',wl(c),fo(c)));
    FigureSet(1,'Slides');
    AxisSet(8);
    fn = sprintf('ChirpARO%02dL%03d',fo(c),wl(c));
    print(fn,'-depsc');
    fprintf(fid,'%%==========\n');
    fprintf(fid,'\\newslide\n');
    fprintf(fid,'\\slideheading{Example \\arabic{exc}: Autoregressive PSD}\n');
    fprintf(fid,'%%==========\n');
    fprintf(fid,'\\hspace{-.5em} \\includegraphics[scale=1]{matlab/%s}\n\n',fn);
    if c==1,
        NonparametricSpectrogram(y,fs,wl(c));
        colormap(colorspiral(256));
        title(sprintf('Nonparametric Spectrogram   Window:%d s',wl(c)));
        FigureSet(2,'Slides');
        AxisSet(8);
        fn = sprintf('ChirpNPO%02dL%03d',fo(c),wl(c));
        print(fn,'-depsc');
        fprintf(fid,'%%==========\n');
        fprintf(fid,'\\newslide\n');
        fprintf(fid,'\\slideheading{Example \\arabic{exc}: Nonparametric PSD}\n');
        fprintf(fid,'%%==========\n');
        fprintf(fid,'\\hspace{-.5em} \\includegraphics[scale=1]{matlab/%s}\n\n',fn);
    end;
end;

fo = 5;
wl = 5;
for c = 0:1,
    for c2 = 0:1,
        et = c;
        wt = c2;
        AutoregressiveSpectrogram(y,fs,wl,fo,et,wt);
        colormap(colorspiral(256));
        if et==0,
            estimatorType = 'Unbiased/no window';
        else
            estimatorType = 'Modified Covariance';
        end;
        if wt==0,
            windowType = 'Rectangular';
        else
            windowType = 'Blackman';
        end;
        st = sprintf('Window:%d s   Filter Order:%d   Estimator:%s   Window:%s',wl,fo,estimatorType,windowType);
        title(st);
        FigureSet(1,'Slides');
        AxisSet(8);
        fn = sprintf('ChirpARO%02dL%03dE%dW%d',fo,wl,et,wt);
        print(fn,'-depsc');
        fprintf(fid,'%%==========\n');
        fprintf(fid,'\\newslide\n');
        fprintf(fid,'\\slideheading{Example \\arabic{exc}: Autoregressive PSD}\n');
        fprintf(fid,'%%==========\n');
        fprintf(fid,'\\hspace{-.5em} \\includegraphics[scale=1]{matlab/%s}\n\n',fn);
    end;
end;
fclose(fid);

Example 1: MATLAB Code

function [S,t,f] = AutoregressiveSpectrogram(x,fsa,wla,foa,eta,wta,fra,nfa,nsa,pfa);
%AutoregressiveSpectrogram: Generates the spectrogram of the signal
%
%   [S,t,f] = AutoregressiveSpectrogram(x,fs,wl,fo,et,wt,fr,nf,ns,pf);
%
%   x    Input signal.
%   fs   Sample rate (Hz). Default = 1 Hz.
%   wl   Length of window to use (sec). Default = 1024 samples. If a
%        vector, specifies entire window.
%   fo   Model order. Default = 3.
%   et   Estimator type: 0=Unbiased/no window, 1=Modified covariance
%        (default).
%   wt   Window type: 0=Rectangular, 1=Blackman (default).
%   fr   Minimum and maximum frequencies to display (Hz).
%        Default = [0 fs/2].
%   nf   Number of frequencies to evaluate (vertical resolution).
%        Default = max(128,round(wl/2)).
%   ns   Requested number of times (horizontal pixels) to evaluate.
%        Default = min(400,length(x)).
%   pf   Plot flag: 0=none (default), 1=screen.
%
%   S    Matrix containing the image of the spectrogram.
%   t    Times at which the spectrogram was evaluated (s).
%   f    Frequencies at which the spectrogram was evaluated (Hz).
%
%   Calculates estimates of the spectral content at the specified times
%   using an autoregressive model. The mean of the signal is removed as
%   a preprocessing step. The square root of the power spectral density
%   is calculated and displayed.
%
%   To limit computation, decimate the signal if necessary to make the
%   upper frequency range approximately equal to half the sampling
%   frequency. If only the window length is specified, the blackman
%   window is

%   applied as a data window. The specified window length should be odd.
%   If called with a window with an even number of elements, a zero is
%   appended to make the window odd.
%
%   Example: Generate the parametric spectrogram of an intracranial
%   pressure signal using a Blackman-Harris window that is 45 s in
%   duration.
%
%      load ICP.mat;
%      x = decimate(icp,5);
%      AutoregressiveSpectrogram(x,fs/5,3,3);
%
%   D. G. Manolakis, V. K. Ingle, S. M. Kogon, "Statistical and
%   Adaptive Signal Processing," McGraw-Hill, 2000.
%
%   Version 1.00 JM
%
%   See also specgram, window, decimate, and Spectrogram.

% Error Checking
if nargin<1,
    help AutoregressiveSpectrogram;
    return;
end;

if length(x)==0,
    error('Signal is empty.');
end;

% Calculate Basic Signal Statistics
nx = length(x);
xr = x;
mx = mean(x);
sx = std(x);

% Process Function Arguments
fs = 1;                             % Default sample rate
if exist('fsa') & ~isempty(fsa),
    fs = fsa;
end;

wl = min(1024,nx);                  % Default window length
if exist('wla') & ~isempty(wla),
    wl = max(3,round(wla*fs));
end;
if ~rem(wl,2),
    wl = wl + 1;                    % Make odd
end;

fo = 3;                             % Default filter order
if exist('foa') & ~isempty(foa),
    fo = foa;
end;

if var(x)==0,
    error('Signal is constant.');
end;

if exist('fra') & length(fra)==1,
    error('Frequency range must be a 2-element vector.');
end;

et = 1;                             % Modified covariance estimation type
if exist('eta') & ~isempty(eta),
    et = eta;
end;

wt = 1;                             % Blackman data matrix window
if exist('wta') & ~isempty(wta),
    wt = wta;
end;

fmin = 0;                           % Lowest frequency to display
fmax = fs/2;                        % Highest frequency to display
if exist('fra') & ~isempty(fra),
    fmin = max(fra(1),0);
    fmax = min(fra(2),fs/2);
end;

nf = max(2^9,round(wl/2));
if exist('nfa') & ~isempty(nfa),
    nf = nfa;
end;

ns = min(400,nx);
if exist('nsa') & ~isempty(nsa),
    ns = min(nsa,nx);
end;

pf = 0;                             % Default - no plotting
if nargout==0,                      % Plot if no output arguments
    pf = 1;
end;
if exist('pfa') & ~isempty(pfa),
    pf = pfa;
end;

% More Error Checking
if fo>wl,
    error('Filter order is larger than the window length.');
end;

% Preprocessing
switch wt,
    case 0,
        wn = ones(wl-fo,1);
    case 1,
        wn = blackman(wl-fo+2);
        wn = wn(2:end-1);           % Trim off the zeros
end;

x  = x(:).';                        % Make into a row vector
wn = wn(:);                         % Make into a column vector
x  = x - mx;                        % Remove mean

% Variable Allocations & Initialization
st = (nx-1)/(ns-1);                 % Step size (samples)
st = max(st,1);
te = 1:st:nx;                       % Evaluation times (in samples)
nt = length(te);                    % No. evaluation times
nz = nf*(fs/(fmax-fmin));           % No. window points needed in FFT
nz = 2^(ceil(log2(nz)));            % Convert to power of 2 for FFT
b0 = floor(nz*(fmin/fs))+1;         % Lower frequency bin index
b1 = ceil (nz*(fmax/fs))+1;         % Upper frequency bin index
fi = b0:b1;                         % Frequency indices
f  = fs*(fi-1)/nz;                  % Frequencies
nf = length(fi);                    % No. of frequencies that PSD is evaluated at

% Main loop
switch et,
    case 0,                         % Short/no window
        X = zeros(wl-fo,fo);
        y = zeros(wl-fo,1);
    case 1,                         % Modified Covariance
        X = zeros(2*(wl-fo),fo);

        y = zeros(2*(wl-fo),1);
end;

S = zeros(nf,nt);                   % Power Spectral Density

for c = 1:nt,
    ic = te(c);                     % Sample index of segment center
    i0 = round(ic-wl/2);            % Index of first segment element
    i1 = i0 + (wl-1);               % Index of last segment element
    k  = (i0:i1);                   % Collection of indices
    a0 = max(-i0+1,0);              % Number of zeros to append at front
    a1 = max(i1-nx,0);              % Number of zeros to append at end
    k  = k(1+a0:wl-a1);             % Subset of valid indices
    xs = [x(1)*ones(1,a0) x(k) x(nx)*ones(1,a1)].'; % Data segment

    % Create the Target Vector and Data Matrix (full windowing)
    y(1:wl-fo) = wn.*xs(fo+1:wl);
    for c2 = 1:fo,
        X(1:wl-fo,c2) = wn.*xs(fo-(c2-1):wl-c2);
    end;
    if et==1,
        y(wl-fo+(1:wl-fo)) = wn.*xs(1:wl-fo);
        for c2 = 1:fo,
            X(wl-fo+(1:wl-fo),c2) = wn.*xs(c2+1:wl-fo+c2);
        end;
    end;

    ah  = -pinv(X)*y;
    s2w = mean(abs(y + X*ah).^2);
    ah  = [1;ah];
    dft = s2w./(abs(fft(ah,nz)).^2); % Calculate FFT
    S(:,c) = dft(fi).';
    drawnow;
end;
t = (te-1)/fs;

% Postprocessing
x = xr;
t = t(:);                           % Convert to column vector
f = f(:);                           % Convert to column vector

% Plot Results
if pf>=1,
    td = 1;                         % Time divider
    if max(t)>120,
        td = 60;                    % Use minutes
    end;
    Tmax = (nx-1)/fs;
    if pf~=2,
        figure;
        FigureSet(1);
    end;
    colormap(jet(256));
    fmax = max(f);
    k  = 1:length(x);
    tx = (k-1)/fs;
    tp = t;
    tmax = max(t);
    tw = [(wl/2-1)/fs,(tmax-(wl/2)/fs)];
    xl = '';                        % X-axis label
    if max(t)>120,                  % Convert to minutes
        tx = tx/60;
        tp = tp/60;
        tmax = tmax/60;
        tw = tw/60;
        xl = 'Time (min)';
    end;

    % Parametric Spectrogram
    ha1 = axes('Position',[ ]);
    s = reshape(abs(S),nf*nt,1);
    p = [0 prctile(s,98)];
    imagesc(tp,f,abs(S),p);
    xlim([0 tmax]);
    ylim([fmin fmax]);
    set(ha1,'YTickLabel',[]);
    set(ha1,'XAxisLocation','Top');
    set(ha1,'YDir','normal');
    hold on;
        h = plot([1;1]*tw,[0 fs],'k');
        set(h,'LineWidth',2.0);
        h = plot([1;1]*tw,[0 fs],'w');
        set(h,'LineWidth',1.0);
    hold off;

    % Colorbar
    ha2 = axes('Position',[ ]);
    colorbar(ha2);
    set(ha2,'Box','Off');
    set(ha2,'YTick',[]);

    % Power Spectral Density
    ha3 = axes('Position',[ ]);
    psd = mean((abs(S).^2).').^(1/2);
    plot(psd,f,'r');
    ylim([fmin fmax]);
    xlim([0 max(psd)]);
    ylabel('');
    set(gca,'XAxisLocation','Top');

    % Signal
    ha4 = axes('Position',[ ]);
    h = plot(tx,x);
    set(h,'LineWidth',0.5);
    ymin = min(x);
    ymax = max(x);
    yrng = ymax-ymin;
    ymin = ymin - 0.2*yrng;
    ymax = ymax + 0.2*yrng;
    xlim([0 tmax]);
    ylim([ymin ymax]);
    xlabel(xl);
    axes(ha1);
    AxisSet;
end;

% Process Return Arguments
if nargout==0,
    clear('S','t','f');
end;

Summary

AR models of stochastic processes have many advantages:
- Least squares estimates reduce to solving a linear set of equations
- The properties of the estimators are known
- Several good choices for estimating properties
- Can weight observations

They are also used for many nearly equivalent problems:
- Whitening
- One-step ahead linear prediction
- Parametric process modeling
- Maximum entropy process/PSD/autocorrelation estimation

References

[1] Monson H. Hayes. Statistical Digital Signal Processing and Modeling. John Wiley & Sons, Inc., 1996.

More information

Parametric Method Based PSD Estimation using Gaussian Window

Parametric Method Based PSD Estimation using Gaussian Window International Journal of Engineering Trends and Technology (IJETT) Volume 29 Number 1 - November 215 Parametric Method Based PSD Estimation using Gaussian Window Pragati Sheel 1, Dr. Rajesh Mehra 2, Preeti

More information

ADSP ADSP ADSP ADSP. Advanced Digital Signal Processing (18-792) Spring Fall Semester, Department of Electrical and Computer Engineering

ADSP ADSP ADSP ADSP. Advanced Digital Signal Processing (18-792) Spring Fall Semester, Department of Electrical and Computer Engineering Advanced Digital Signal rocessing (18-792) Spring Fall Semester, 201 2012 Department of Electrical and Computer Engineering ROBLEM SET 8 Issued: 10/26/18 Due: 11/2/18 Note: This problem set is due Friday,

More information

Complement on Digital Spectral Analysis and Optimal Filtering: Theory and Exercises

Complement on Digital Spectral Analysis and Optimal Filtering: Theory and Exercises Complement on Digital Spectral Analysis and Optimal Filtering: Theory and Exercises Random Signals Analysis (MVE136) Mats Viberg Department of Signals and Systems Chalmers University of Technology 412

More information

Errata and Notes for

Errata and Notes for Errata and Notes for Statistical and Adaptive Signal Processing Dimitris G. Manolakis, Vinay K. Ingle, and Stephen M. Kogon, Artech Housre, Inc., c 2005, ISBN 1580536107. James McNames March 15, 2006 Errata

More information

Parametric Signal Modeling and Linear Prediction Theory 1. Discrete-time Stochastic Processes

Parametric Signal Modeling and Linear Prediction Theory 1. Discrete-time Stochastic Processes Parametric Signal Modeling and Linear Prediction Theory 1. Discrete-time Stochastic Processes Electrical & Computer Engineering North Carolina State University Acknowledgment: ECE792-41 slides were adapted

More information

LAB 2: DTFT, DFT, and DFT Spectral Analysis Summer 2011

LAB 2: DTFT, DFT, and DFT Spectral Analysis Summer 2011 University of Illinois at Urbana-Champaign Department of Electrical and Computer Engineering ECE 311: Digital Signal Processing Lab Chandra Radhakrishnan Peter Kairouz LAB 2: DTFT, DFT, and DFT Spectral

More information

SPECTRUM. Deterministic Signals with Finite Energy (l 2 ) Deterministic Signals with Infinite Energy N 1. n=0. N N X N(f) 2

SPECTRUM. Deterministic Signals with Finite Energy (l 2 ) Deterministic Signals with Infinite Energy N 1. n=0. N N X N(f) 2 SPECTRUM Deterministic Signals with Finite Energy (l 2 ) Energy Spectrum: S xx (f) = X(f) 2 = 2 x(n)e j2πfn n= Deterministic Signals with Infinite Energy DTFT of truncated signal: X N (f) = N x(n)e j2πfn

More information

CCNY. BME I5100: Biomedical Signal Processing. Stochastic Processes. Lucas C. Parra Biomedical Engineering Department City College of New York

CCNY. BME I5100: Biomedical Signal Processing. Stochastic Processes. Lucas C. Parra Biomedical Engineering Department City College of New York BME I5100: Biomedical Signal Processing Stochastic Processes Lucas C. Parra Biomedical Engineering Department CCNY 1 Schedule Week 1: Introduction Linear, stationary, normal - the stuff biology is not

More information

Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science : Discrete-Time Signal Processing

Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science : Discrete-Time Signal Processing Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.34: Discrete-Time Signal Processing OpenCourseWare 006 ecture 8 Periodogram Reading: Sections 0.6 and 0.7

More information

ECSE 512 Digital Signal Processing I Fall 2010 FINAL EXAMINATION

ECSE 512 Digital Signal Processing I Fall 2010 FINAL EXAMINATION FINAL EXAMINATION 9:00 am 12:00 pm, December 20, 2010 Duration: 180 minutes Examiner: Prof. M. Vu Assoc. Examiner: Prof. B. Champagne There are 6 questions for a total of 120 points. This is a closed book

More information

/ (2π) X(e jω ) dω. 4. An 8 point sequence is given by x(n) = {2,2,2,2,1,1,1,1}. Compute 8 point DFT of x(n) by

/ (2π) X(e jω ) dω. 4. An 8 point sequence is given by x(n) = {2,2,2,2,1,1,1,1}. Compute 8 point DFT of x(n) by Code No: RR320402 Set No. 1 III B.Tech II Semester Regular Examinations, Apr/May 2006 DIGITAL SIGNAL PROCESSING ( Common to Electronics & Communication Engineering, Electronics & Instrumentation Engineering,

More information

The goal of the Wiener filter is to filter out noise that has corrupted a signal. It is based on a statistical approach.

The goal of the Wiener filter is to filter out noise that has corrupted a signal. It is based on a statistical approach. Wiener filter From Wikipedia, the free encyclopedia In signal processing, the Wiener filter is a filter proposed by Norbert Wiener during the 1940s and published in 1949. [1] Its purpose is to reduce the

More information

Fourier Analysis of Signals Using the DFT

Fourier Analysis of Signals Using the DFT Fourier Analysis of Signals Using the DFT ECE 535 Lecture April 29, 23 Overview: Motivation Many applications require analyzing the frequency content of signals Speech processing study resonances of vocal

More information

Autoregressive tracking of vortex shedding. 2. Autoregression versus dual phase-locked loop

Autoregressive tracking of vortex shedding. 2. Autoregression versus dual phase-locked loop Autoregressive tracking of vortex shedding Dileepan Joseph, 3 September 2003 Invensys UTC, Oxford 1. Introduction The purpose of this report is to summarize the work I have done in terms of an AR algorithm

More information

Lesson 1. Optimal signalbehandling LTH. September Statistical Digital Signal Processing and Modeling, Hayes, M:

Lesson 1. Optimal signalbehandling LTH. September Statistical Digital Signal Processing and Modeling, Hayes, M: Lesson 1 Optimal Signal Processing Optimal signalbehandling LTH September 2013 Statistical Digital Signal Processing and Modeling, Hayes, M: John Wiley & Sons, 1996. ISBN 0471594318 Nedelko Grbic Mtrl

More information

Solutions for examination in TSRT78 Digital Signal Processing,

Solutions for examination in TSRT78 Digital Signal Processing, Solutions for examination in TSRT78 Digital Signal Processing, 2014-04-14 1. s(t) is generated by s(t) = 1 w(t), 1 + 0.3q 1 Var(w(t)) = σ 2 w = 2. It is measured as y(t) = s(t) + n(t) where n(t) is white

More information

The Z-Transform. For a phasor: X(k) = e jωk. We have previously derived: Y = H(z)X

The Z-Transform. For a phasor: X(k) = e jωk. We have previously derived: Y = H(z)X The Z-Transform For a phasor: X(k) = e jωk We have previously derived: Y = H(z)X That is, the output of the filter (Y(k)) is derived by multiplying the input signal (X(k)) by the transfer function (H(z)).

More information

Final Exam January 31, Solutions

Final Exam January 31, Solutions Final Exam January 31, 014 Signals & Systems (151-0575-01) Prof. R. D Andrea & P. Reist Solutions Exam Duration: Number of Problems: Total Points: Permitted aids: Important: 150 minutes 7 problems 50 points

More information

Higher-Order Σ Modulators and the Σ Toolbox

Higher-Order Σ Modulators and the Σ Toolbox ECE37 Advanced Analog Circuits Higher-Order Σ Modulators and the Σ Toolbox Richard Schreier richard.schreier@analog.com NLCOTD: Dynamic Flip-Flop Standard CMOS version D CK Q Q Can the circuit be simplified?

More information

Each problem is worth 25 points, and you may solve the problems in any order.

Each problem is worth 25 points, and you may solve the problems in any order. EE 120: Signals & Systems Department of Electrical Engineering and Computer Sciences University of California, Berkeley Midterm Exam #2 April 11, 2016, 2:10-4:00pm Instructions: There are four questions

More information

LAB 6: FIR Filter Design Summer 2011

LAB 6: FIR Filter Design Summer 2011 University of Illinois at Urbana-Champaign Department of Electrical and Computer Engineering ECE 311: Digital Signal Processing Lab Chandra Radhakrishnan Peter Kairouz LAB 6: FIR Filter Design Summer 011

More information

Lab 4: Quantization, Oversampling, and Noise Shaping

Lab 4: Quantization, Oversampling, and Noise Shaping Lab 4: Quantization, Oversampling, and Noise Shaping Due Friday 04/21/17 Overview: This assignment should be completed with your assigned lab partner(s). Each group must turn in a report composed using

More information

III.C - Linear Transformations: Optimal Filtering

III.C - Linear Transformations: Optimal Filtering 1 III.C - Linear Transformations: Optimal Filtering FIR Wiener Filter [p. 3] Mean square signal estimation principles [p. 4] Orthogonality principle [p. 7] FIR Wiener filtering concepts [p. 8] Filter coefficients

More information

Basics on 2-D 2 D Random Signal

Basics on 2-D 2 D Random Signal Basics on -D D Random Signal Spring 06 Instructor: K. J. Ray Liu ECE Department, Univ. of Maryland, College Park Overview Last Time: Fourier Analysis for -D signals Image enhancement via spatial filtering

More information

ECE 636: Systems identification

ECE 636: Systems identification ECE 636: Systems identification Lectures 3 4 Random variables/signals (continued) Random/stochastic vectors Random signals and linear systems Random signals in the frequency domain υ ε x S z + y Experimental

More information

Complement on Digital Spectral Analysis and Optimal Filtering: Theory and Exercises

Complement on Digital Spectral Analysis and Optimal Filtering: Theory and Exercises Complement on Digital Spectral Analysis and Optimal Filtering: Theory and Exercises Random Processes With Applications (MVE 135) Mats Viberg Department of Signals and Systems Chalmers University of Technology

More information

! Circular Convolution. " Linear convolution with circular convolution. ! Discrete Fourier Transform. " Linear convolution through circular

! Circular Convolution.  Linear convolution with circular convolution. ! Discrete Fourier Transform.  Linear convolution through circular Previously ESE 531: Digital Signal Processing Lec 22: April 18, 2017 Fast Fourier Transform (con t)! Circular Convolution " Linear convolution with circular convolution! Discrete Fourier Transform " Linear

More information

1. Calculation of the DFT

1. Calculation of the DFT ELE E4810: Digital Signal Processing Topic 10: The Fast Fourier Transform 1. Calculation of the DFT. The Fast Fourier Transform algorithm 3. Short-Time Fourier Transform 1 1. Calculation of the DFT! Filter

More information

EE482: Digital Signal Processing Applications

EE482: Digital Signal Processing Applications Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu EE482: Digital Signal Processing Applications Spring 2014 TTh 14:305:45 CBC C222 Lecture 8 Frequency Analysis 14/02/18 http://www.ee.unlv.edu/~b1morris/ee482/

More information

CHAPTER 2 RANDOM PROCESSES IN DISCRETE TIME

CHAPTER 2 RANDOM PROCESSES IN DISCRETE TIME CHAPTER 2 RANDOM PROCESSES IN DISCRETE TIME Shri Mata Vaishno Devi University, (SMVDU), 2013 Page 13 CHAPTER 2 RANDOM PROCESSES IN DISCRETE TIME When characterizing or modeling a random variable, estimates

More information

! Introduction. ! Discrete Time Signals & Systems. ! Z-Transform. ! Inverse Z-Transform. ! Sampling of Continuous Time Signals

! Introduction. ! Discrete Time Signals & Systems. ! Z-Transform. ! Inverse Z-Transform. ! Sampling of Continuous Time Signals ESE 531: Digital Signal Processing Lec 25: April 24, 2018 Review Course Content! Introduction! Discrete Time Signals & Systems! Discrete Time Fourier Transform! Z-Transform! Inverse Z-Transform! Sampling

More information

EE123 Digital Signal Processing

EE123 Digital Signal Processing EE123 Digital Signal Processing Lecture 1 Time-Dependent FT Announcements! Midterm: 2/22/216 Open everything... but cheat sheet recommended instead 1am-12pm How s the lab going? Frequency Analysis with

More information

! Spectral Analysis with DFT. ! Windowing. ! Effect of zero-padding. ! Time-dependent Fourier transform. " Aka short-time Fourier transform

! Spectral Analysis with DFT. ! Windowing. ! Effect of zero-padding. ! Time-dependent Fourier transform.  Aka short-time Fourier transform Lecture Outline ESE 531: Digital Signal Processing Spectral Analysis with DFT Windowing Lec 24: April 18, 2019 Spectral Analysis Effect of zero-padding Time-dependent Fourier transform " Aka short-time

More information

1. Determine if each of the following are valid autocorrelation matrices of WSS processes. (Correlation Matrix),R c =

1. Determine if each of the following are valid autocorrelation matrices of WSS processes. (Correlation Matrix),R c = ENEE630 ADSP Part II w/ solution. Determine if each of the following are valid autocorrelation matrices of WSS processes. (Correlation Matrix) R a = 4 4 4,R b = 0 0,R c = j 0 j 0 j 0 j 0 j,r d = 0 0 0

More information

SGN Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection

SGN Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection SG 21006 Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection Ioan Tabus Department of Signal Processing Tampere University of Technology Finland 1 / 28

More information

Summary of lecture 1. E x = E x =T. X T (e i!t ) which motivates us to define the energy spectrum Φ xx (!) = jx (i!)j 2 Z 1 Z =T. 2 d!

Summary of lecture 1. E x = E x =T. X T (e i!t ) which motivates us to define the energy spectrum Φ xx (!) = jx (i!)j 2 Z 1 Z =T. 2 d! Summary of lecture I Continuous time: FS X FS [n] for periodic signals, FT X (i!) for non-periodic signals. II Discrete time: DTFT X T (e i!t ) III Poisson s summation formula: describes the relationship

More information

Summary notes for EQ2300 Digital Signal Processing

Summary notes for EQ2300 Digital Signal Processing Summary notes for EQ3 Digital Signal Processing allowed aid for final exams during 6 Joakim Jaldén, 6-- Prerequisites The DFT and the FFT. The discrete Fourier transform The discrete Fourier transform

More information

EE538 Final Exam Fall 2007 Mon, Dec 10, 8-10 am RHPH 127 Dec. 10, Cover Sheet

EE538 Final Exam Fall 2007 Mon, Dec 10, 8-10 am RHPH 127 Dec. 10, Cover Sheet EE538 Final Exam Fall 2007 Mon, Dec 10, 8-10 am RHPH 127 Dec. 10, 2007 Cover Sheet Test Duration: 120 minutes. Open Book but Closed Notes. Calculators allowed!! This test contains five problems. Each of

More information

Fall 2011, EE123 Digital Signal Processing

Fall 2011, EE123 Digital Signal Processing Lecture 6 Miki Lustig, UCB September 11, 2012 Miki Lustig, UCB DFT and Sampling the DTFT X (e jω ) = e j4ω sin2 (5ω/2) sin 2 (ω/2) 5 x[n] 25 X(e jω ) 4 20 3 15 2 1 0 10 5 1 0 5 10 15 n 0 0 2 4 6 ω 5 reconstructed

More information

0 for all other times. Suppose further that we sample the signal with a sampling frequency f ;. We can write x # n =

0 for all other times. Suppose further that we sample the signal with a sampling frequency f ;. We can write x # n = PSD, Autocorrelation, and Noise in MATLAB Aaron Scher Energy and power of a signal Consider a continuous time deterministic signal x t. We are specifically interested in analyzing the characteristics of

More information

EE482: Digital Signal Processing Applications

EE482: Digital Signal Processing Applications Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu EE482: Digital Signal Processing Applications Spring 2014 TTh 14:30-15:45 CBC C222 Lecture 11 Adaptive Filtering 14/03/04 http://www.ee.unlv.edu/~b1morris/ee482/

More information

Problem Set 2 Solution Sketches Time Series Analysis Spring 2010

Problem Set 2 Solution Sketches Time Series Analysis Spring 2010 Problem Set 2 Solution Sketches Time Series Analysis Spring 2010 Forecasting 1. Let X and Y be two random variables such that E(X 2 ) < and E(Y 2 )

More information

Test 2 Electrical Engineering Bachelor Module 8 Signal Processing and Communications

Test 2 Electrical Engineering Bachelor Module 8 Signal Processing and Communications Test 2 Electrical Engineering Bachelor Module 8 Signal Processing and Communications (201400432) Tuesday May 26, 2015, 14:00-17:00h This test consists of three parts, corresponding to the three courses

More information

STAD57 Time Series Analysis. Lecture 23

STAD57 Time Series Analysis. Lecture 23 STAD57 Time Series Analysis Lecture 23 1 Spectral Representation Spectral representation of stationary {X t } is: 12 i2t Xt e du 12 1/2 1/2 for U( ) a stochastic process with independent increments du(ω)=

More information

Discrete-Time Signals and Systems. Frequency Domain Analysis of LTI Systems. The Frequency Response Function. The Frequency Response Function

Discrete-Time Signals and Systems. Frequency Domain Analysis of LTI Systems. The Frequency Response Function. The Frequency Response Function Discrete-Time Signals and s Frequency Domain Analysis of LTI s Dr. Deepa Kundur University of Toronto Reference: Sections 5., 5.2-5.5 of John G. Proakis and Dimitris G. Manolakis, Digital Signal Processing:

More information

Aspects of Continuous- and Discrete-Time Signals and Systems

Aspects of Continuous- and Discrete-Time Signals and Systems Aspects of Continuous- and Discrete-Time Signals and Systems C.S. Ramalingam Department of Electrical Engineering IIT Madras C.S. Ramalingam (EE Dept., IIT Madras) Networks and Systems 1 / 45 Scaling the

More information

LECTURE NOTES IN AUDIO ANALYSIS: PITCH ESTIMATION FOR DUMMIES

LECTURE NOTES IN AUDIO ANALYSIS: PITCH ESTIMATION FOR DUMMIES LECTURE NOTES IN AUDIO ANALYSIS: PITCH ESTIMATION FOR DUMMIES Abstract March, 3 Mads Græsbøll Christensen Audio Analysis Lab, AD:MT Aalborg University This document contains a brief introduction to pitch

More information

Lecture 4 - Spectral Estimation

Lecture 4 - Spectral Estimation Lecture 4 - Spectral Estimation The Discrete Fourier Transform The Discrete Fourier Transform (DFT) is the equivalent of the continuous Fourier Transform for signals known only at N instants separated

More information

representation of speech

representation of speech Digital Speech Processing Lectures 7-8 Time Domain Methods in Speech Processing 1 General Synthesis Model voiced sound amplitude Log Areas, Reflection Coefficients, Formants, Vocal Tract Polynomial, l

More information

EE 602 TERM PAPER PRESENTATION Richa Tripathi Mounika Boppudi FOURIER SERIES BASED MODEL FOR STATISTICAL SIGNAL PROCESSING - CHONG YUNG CHI

EE 602 TERM PAPER PRESENTATION Richa Tripathi Mounika Boppudi FOURIER SERIES BASED MODEL FOR STATISTICAL SIGNAL PROCESSING - CHONG YUNG CHI EE 602 TERM PAPER PRESENTATION Richa Tripathi Mounika Boppudi FOURIER SERIES BASED MODEL FOR STATISTICAL SIGNAL PROCESSING - CHONG YUNG CHI ABSTRACT The goal of the paper is to present a parametric Fourier

More information

Gaussian, Markov and stationary processes

Gaussian, Markov and stationary processes Gaussian, Markov and stationary processes Gonzalo Mateos Dept. of ECE and Goergen Institute for Data Science University of Rochester gmateosb@ece.rochester.edu http://www.ece.rochester.edu/~gmateosb/ November

More information

EE482: Digital Signal Processing Applications

EE482: Digital Signal Processing Applications Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu EE482: Digital Signal Processing Applications Spring 2014 TTh 14:30-15:45 CBC C222 Lecture 11 Adaptive Filtering 14/03/04 http://www.ee.unlv.edu/~b1morris/ee482/

More information

2. Typical Discrete-Time Systems All-Pass Systems (5.5) 2.2. Minimum-Phase Systems (5.6) 2.3. Generalized Linear-Phase Systems (5.

2. Typical Discrete-Time Systems All-Pass Systems (5.5) 2.2. Minimum-Phase Systems (5.6) 2.3. Generalized Linear-Phase Systems (5. . Typical Discrete-Time Systems.1. All-Pass Systems (5.5).. Minimum-Phase Systems (5.6).3. Generalized Linear-Phase Systems (5.7) .1. All-Pass Systems An all-pass system is defined as a system which has

More information

x[n] = x a (nt ) x a (t)e jωt dt while the discrete time signal x[n] has the discrete-time Fourier transform x[n]e jωn

x[n] = x a (nt ) x a (t)e jωt dt while the discrete time signal x[n] has the discrete-time Fourier transform x[n]e jωn Sampling Let x a (t) be a continuous time signal. The signal is sampled by taking the signal value at intervals of time T to get The signal x(t) has a Fourier transform x[n] = x a (nt ) X a (Ω) = x a (t)e

More information

Magnitude F y. F x. Magnitude

Magnitude F y. F x. Magnitude Design of Optimum Multi-Dimensional Energy Compaction Filters N. Damera-Venkata J. Tuqan B. L. Evans Imaging Technology Dept. Dept. of ECE Dept. of ECE Hewlett-Packard Labs Univ. of California Univ. of

More information

EL1820 Modeling of Dynamical Systems

EL1820 Modeling of Dynamical Systems EL1820 Modeling of Dynamical Systems Lecture 10 - System identification as a model building tool Experiment design Examination and prefiltering of data Model structure selection Model validation Lecture

More information

THE PROCESSING of random signals became a useful

THE PROCESSING of random signals became a useful IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 58, NO. 11, NOVEMBER 009 3867 The Quality of Lagged Products and Autoregressive Yule Walker Models as Autocorrelation Estimates Piet M. T. Broersen

More information

EEM 409. Random Signals. Problem Set-2: (Power Spectral Density, LTI Systems with Random Inputs) Problem 1: Problem 2:

EEM 409. Random Signals. Problem Set-2: (Power Spectral Density, LTI Systems with Random Inputs) Problem 1: Problem 2: EEM 409 Random Signals Problem Set-2: (Power Spectral Density, LTI Systems with Random Inputs) Problem 1: Consider a random process of the form = + Problem 2: X(t) = b cos(2π t + ), where b is a constant,

More information

MAT 3379 (Winter 2016) FINAL EXAM (PRACTICE)

MAT 3379 (Winter 2016) FINAL EXAM (PRACTICE) MAT 3379 (Winter 2016) FINAL EXAM (PRACTICE) 15 April 2016 (180 minutes) Professor: R. Kulik Student Number: Name: This is closed book exam. You are allowed to use one double-sided A4 sheet of notes. Only

More information

Overview of Bode Plots Transfer function review Piece-wise linear approximations First-order terms Second-order terms (complex poles & zeros)

Overview of Bode Plots Transfer function review Piece-wise linear approximations First-order terms Second-order terms (complex poles & zeros) Overview of Bode Plots Transfer function review Piece-wise linear approximations First-order terms Second-order terms (complex poles & zeros) J. McNames Portland State University ECE 222 Bode Plots Ver.

More information

Autoregressive (AR) Modelling

Autoregressive (AR) Modelling Autoregressive (AR) Modelling A. Uses of AR Modelling () Alications (a) Seech recognition and coding (storage) (b) System identification (c) Modelling and recognition of sonar, radar, geohysical signals

More information

Discrete Fourier Transform

Discrete Fourier Transform Discrete Fourier Transform Virtually all practical signals have finite length (e.g., sensor data, audio records, digital images, stock values, etc). Rather than considering such signals to be zero-padded

More information

Definition of Discrete-Time Fourier Transform (DTFT)

Definition of Discrete-Time Fourier Transform (DTFT) Definition of Discrete-Time ourier Transform (DTT) {x[n]} = X(e jω ) + n= {X(e jω )} = x[n] x[n]e jωn Why use the above awkward notation for the transform? X(e jω )e jωn dω Answer: It is consistent with

More information

From Fourier Series to Analysis of Non-stationary Signals - X

From Fourier Series to Analysis of Non-stationary Signals - X From Fourier Series to Analysis of Non-stationary Signals - X prof. Miroslav Vlcek December 14, 21 Contents Stationary and non-stationary 1 Stationary and non-stationary 2 3 Contents Stationary and non-stationary

More information

Computer Engineering 4TL4: Digital Signal Processing

Computer Engineering 4TL4: Digital Signal Processing Computer Engineering 4TL4: Digital Signal Processing Day Class Instructor: Dr. I. C. BRUCE Duration of Examination: 3 Hours McMaster University Final Examination December, 2003 This examination paper includes

More information

Voiced Speech. Unvoiced Speech

Voiced Speech. Unvoiced Speech Digital Speech Processing Lecture 2 Homomorphic Speech Processing General Discrete-Time Model of Speech Production p [ n] = p[ n] h [ n] Voiced Speech L h [ n] = A g[ n] v[ n] r[ n] V V V p [ n ] = u [

More information

L6: Short-time Fourier analysis and synthesis

L6: Short-time Fourier analysis and synthesis L6: Short-time Fourier analysis and synthesis Overview Analysis: Fourier-transform view Analysis: filtering view Synthesis: filter bank summation (FBS) method Synthesis: overlap-add (OLA) method STFT magnitude

More information

6.435, System Identification

6.435, System Identification System Identification 6.435 SET 3 Nonparametric Identification Munther A. Dahleh 1 Nonparametric Methods for System ID Time domain methods Impulse response Step response Correlation analysis / time Frequency

More information

SNR Calculation and Spectral Estimation [S&T Appendix A]

SNR Calculation and Spectral Estimation [S&T Appendix A] SR Calculation and Spectral Estimation [S&T Appendix A] or, How not to make a mess of an FFT Make sure the input is located in an FFT bin 1 Window the data! A Hann window works well. Compute the FFT 3

More information

STAT Financial Time Series

STAT Financial Time Series STAT 6104 - Financial Time Series Chapter 4 - Estimation in the time Domain Chun Yip Yau (CUHK) STAT 6104:Financial Time Series 1 / 46 Agenda 1 Introduction 2 Moment Estimates 3 Autoregressive Models (AR

More information

How to manipulate Frequencies in Discrete-time Domain? Two Main Approaches

How to manipulate Frequencies in Discrete-time Domain? Two Main Approaches How to manipulate Frequencies in Discrete-time Domain? Two Main Approaches Difference Equations (an LTI system) x[n]: input, y[n]: output That is, building a system that maes use of the current and previous

More information

Automatic Autocorrelation and Spectral Analysis

Automatic Autocorrelation and Spectral Analysis Piet M.T. Broersen Automatic Autocorrelation and Spectral Analysis With 104 Figures Sprin ger 1 Introduction 1 1.1 Time Series Problems 1 2 Basic Concepts 11 2.1 Random Variables 11 2.2 Normal Distribution

More information

Lecture 17: variance in a band = log(s xx (f)) df (2) If we want to plot something that is more directly representative of variance, we can try this:

Lecture 17: variance in a band = log(s xx (f)) df (2) If we want to plot something that is more directly representative of variance, we can try this: UCSD SIOC 221A: (Gille) 1 Lecture 17: Recap We ve now spent some time looking closely at coherence and how to assign uncertainties to coherence. Can we think about coherence in a different way? There are

More information