Lecture 4 - Spectral Estimation


The Discrete Fourier Transform

The Discrete Fourier Transform (DFT) is the equivalent of the continuous Fourier Transform for signals known only at N instants separated by sample times T (i.e. a finite sequence of data). Let f(t) be the continuous signal which is the source of the data. Let the N samples be denoted f[0], f[1], f[2], ..., f[k], ..., f[N-1]. The Fourier Transform of the original signal, f(t), would be

    F(jω) = ∫_{-∞}^{+∞} f(t) e^{-jωt} dt

We could regard each sample f[k] as an impulse having area f[k]. Then, since

the integrand exists only at the sample points:

    F(jω) = ∫_{0}^{(N-1)T} f(t) e^{-jωt} dt
          = f[0] e^{-j0} + f[1] e^{-jωT} + ... + f[k] e^{-jωkT} + ... + f[N-1] e^{-jω(N-1)T}

i.e.

    F(jω) = Σ_{k=0}^{N-1} f[k] e^{-jωkT}

We could in principle evaluate this for any ω, but with only N data points to start with, only N final outputs will be significant. You may remember that the continuous Fourier transform could be evaluated over a finite interval (usually the fundamental period T_0) rather than from -∞ to +∞ if the waveform was periodic. Similarly, since there are only a finite number of input data points, the DFT treats the data as if it were periodic (i.e. f(N) to f(2N-1) is the same as f(0) to f(N-1)). Hence the sequence shown below in Fig. 4.1(a) is considered to be one period of the periodic sequence in plot (b).

Figure 4.1: (a) Sequence of N = 10 samples. (b) Implicit periodicity in the DFT.

Since the operation treats the data as if it were periodic, we evaluate the DFT equation for the fundamental frequency (one cycle per sequence, 1/(NT) Hz, 2π/(NT) rad/sec) and its harmonics (not forgetting the d.c. component (or average) at ω = 0), i.e. set

    ω = 0, 2π/(NT), 2·2π/(NT), ..., n·2π/(NT), ..., (N-1)·2π/(NT)

or, in general

    F[n] = Σ_{k=0}^{N-1} f[k] e^{-j(2π/N)nk}     (n = 0 : N-1)

F[n] is the Discrete Fourier Transform of the sequence f[k]. We may write this equation in matrix form as:

    [ F[0]   ]   [ 1   1        1        1        ...  1        ] [ f[0]   ]
    [ F[1]   ]   [ 1   W        W^2      W^3      ...  W^{N-1}  ] [ f[1]   ]
    [ F[2]   ] = [ 1   W^2      W^4      W^6      ...  W^{N-2}  ] [ f[2]   ]
    [  ...   ]   [ ...                                          ] [  ...   ]
    [ F[N-1] ]   [ 1   W^{N-1}  W^{N-2}  W^{N-3}  ...  W        ] [ f[N-1] ]

where W = exp(-j2π/N) and W^N = W^{2N} = ... = 1.
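The matrix form above can be checked numerically. The following sketch (not from the notes) builds the W-matrix directly and confirms it agrees with the FFT, which computes the same transform efficiently:

```python
import numpy as np

def dft_matrix(N):
    """Build the N x N DFT matrix with entries W^(nk), where W = exp(-j*2*pi/N)."""
    W = np.exp(-2j * np.pi / N)
    n, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return W ** (n * k)

rng = np.random.default_rng(0)
f = rng.standard_normal(8)           # any N-point sequence
F = dft_matrix(len(f)) @ f           # F[n] = sum_k f[k] W^(nk)

# The matrix product agrees with the FFT (same transform, O(N log N) algorithm)
assert np.allclose(F, np.fft.fft(f))
```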

DFT example

Let the continuous signal be

    f(t) = 5 + 2 cos(2πt - 90°) + 3 cos(4πt)
           (d.c.)  (1 Hz)          (2 Hz)

Figure 4.2: Example signal for DFT.

Let us sample f(t) at 4 times per second (i.e. f_s = 4 Hz) from t = 0 to t = 3/4.

The values of the discrete samples are given by:

    f[k] = 5 + 2 cos((π/2)k - 90°) + 3 cos(πk)     (by putting t = kT_s = k/4)

i.e. f[0] = 8, f[1] = 4, f[2] = 8, f[3] = 0,     (N = 4)

Therefore

    F[n] = Σ_{k=0}^{3} f[k] e^{-j(π/2)nk} = Σ_{k=0}^{3} f[k] (-j)^{nk}

    [ F[0] ]   [ 1   1   1   1 ] [ 8 ]   [ 20  ]
    [ F[1] ]   [ 1  -j  -1   j ] [ 4 ]   [ -j4 ]
    [ F[2] ] = [ 1  -1   1  -1 ] [ 8 ] = [ 12  ]
    [ F[3] ]   [ 1   j  -1  -j ] [ 0 ]   [  j4 ]

The magnitude of the DFT coefficients is shown below in Fig. 4.3.

Figure 4.3: DFT of four point sequence.
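The worked example can be reproduced in a few lines; this sketch samples the same signal and computes its DFT with the FFT:

```python
import numpy as np

# Samples of f(t) = 5 + 2cos(2*pi*t - 90deg) + 3cos(4*pi*t) at fs = 4 Hz
k = np.arange(4)
f = 5 + 2 * np.cos(np.pi / 2 * k - np.pi / 2) + 3 * np.cos(np.pi * k)

F = np.fft.fft(f)
# f = [8, 4, 8, 0] and F = [20, -j4, 12, j4], matching the worked example
```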

Inverse Discrete Fourier Transform

The inverse transform is obtained from the original (symmetric) matrix: the inverse matrix is 1/N times its complex conjugate.

    F[n] = Σ_{k=0}^{N-1} f[k] e^{-j(2π/N)nk}

    f[k] = (1/N) Σ_{n=0}^{N-1} F[n] e^{+j(2π/N)nk}

Note that the F[n] coefficients are complex. We can assume that the f[k] values are real (this is the simplest case; there are situations (e.g. radar) in which two inputs, at each k, are treated as a complex pair, since they are the outputs from 0° and 90° demodulators).

In the process of taking the inverse transform the terms F[n] and F[N-n] (remember that the spectrum is symmetrical about N/2) combine to produce 2 frequency components, only one of which is considered to be valid (the one at the lower of the two frequencies, n/(NT) Hz where n ≤ N/2; the higher-frequency component is at an aliasing frequency (n > N/2)).

From the inverse transform formula, the contribution to f[k] of F[n] and F[N-n] is:

    f_n[k] = (1/N) { F[n] e^{j(2π/N)nk} + F[N-n] e^{j(2π/N)(N-n)k} }     (4.2)

For all f[k] real,

    F[N-n] = Σ_{k=0}^{N-1} f[k] e^{-j(2π/N)(N-n)k}

But e^{-j2πk} = 1 for all k, so

    e^{-j(2π/N)(N-n)k} = e^{+j(2π/N)nk}

i.e. F[N-n] = F*(n) (the complex conjugate). Substituting into the equation for f_n[k] above gives, since e^{j2πk} = 1,

    f_n[k] = (1/N) { F[n] e^{j(2π/N)nk} + F*(n) e^{-j(2π/N)nk} }
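The statement that the inverse matrix is 1/N times the conjugate, and that F[N-n] = F*(n) for real data, can both be checked directly; a small sketch (not from the notes):

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(N)                     # real input samples

W = np.exp(-2j * np.pi / N)
n, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
D = W ** (n * k)                               # forward DFT matrix

F = D @ f
f_back = (np.conj(D) @ F) / N                  # inverse = (1/N) * conjugate matrix

assert np.allclose(f_back, f)                  # round trip recovers the samples
assert np.allclose(F[1:], np.conj(F[:0:-1]))   # F[N-n] = F*(n) for real f
```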

i.e.

    f_n[k] = (2/N) { Re{F[n]} cos((2π/N)nk) - Im{F[n]} sin((2π/N)nk) }

or

    f_n[k] = (2/N) |F[n]| cos( (2πn/(NT)) kT + arg(F[n]) )

i.e. a sampled sinewave at n/(NT) Hz, of magnitude (2/N)|F[n]|.

For the special case of n = 0, F[0] = Σ f[k] (i.e. the sum of all samples) and the contribution of F[0] to f[k] is f_0[k] = (1/N) F[0] = average of f[k] = d.c. component.
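These contribution formulas can be verified by summing them back up: d.c. term, the paired (2/N)|F[n]| cosine terms, and the cosine-only n = N/2 term should reconstruct the original samples exactly. A sketch (not from the notes):

```python
import numpy as np

N = 8
rng = np.random.default_rng(2)
f = rng.standard_normal(N)
F = np.fft.fft(f)
k = np.arange(N)

recon = np.real(F[0]) / N * np.ones(N)               # d.c. term f_0[k]
for n in range(1, N // 2):                           # paired F[n], F[N-n] terms
    recon += 2 / N * np.abs(F[n]) * np.cos(2 * np.pi * n * k / N + np.angle(F[n]))
recon += np.real(F[N // 2]) / N * np.cos(np.pi * k)  # cosine-only n = N/2 term

assert np.allclose(recon, f)                         # components sum back to f[k]
```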

Interpretation of example

1. F[0] = 20 implies a d.c. value of (1/N) F[0] = 20/4 = 5 (as expected).

2. F[1] = -j4 = F*[3] implies a fundamental component of peak amplitude (2/N)|F[1]| = (2/4)·4 = 2 with phase given by arg F[1] = -90°, i.e.

       2 cos((2π/(NT)) kT - 90°) = 2 cos((π/2)k - 90°)     (as expected)

3. F[2] = 12 (the n = N/2 component; there is no other N-n component here) and this implies

       f_2[k] = (1/N) F[2] e^{j(2π/N)2k} = (1/4) F[2] e^{jπk} = 3 cos(πk)     (as expected)

   since sin(πk) = 0 for all k.

Thus, the conventional way of displaying a spectrum is not as shown in Fig. 4.3 but as shown in Fig. 4.4 (obviously, the information content is the same):

Figure 4.4: DFT of four point signal.

In typical applications, N is much greater than 4; for example, for N = 1024, F[n] has 1024 components, but components 513 to 1023 are the complex conjugates of components 511 to 1, leaving (1/N) F[0] as the d.c. component, (2/N)|F[1]| to (2/N)|F[511]| as complete a.c. components, and (1/N) F[512] as the cosine-only component at the highest distinguishable frequency (n = N/2).

Most computer programmes evaluate |F[n]|/N (or |F[n]|²/N for the power spectral density), which gives the correct shape for the spectrum, except for the values at n = 0 and N/2.
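The one-sided scaling convention just described (double every bin except the d.c. and n = N/2 bins) can be sketched on the four-point example, where it recovers the amplitudes 5, 2 and 3 of the original components:

```python
import numpy as np

fs, N = 4.0, 4
f = np.array([8.0, 4.0, 8.0, 0.0])        # the four-point example sequence
F = np.fft.fft(f)

amp = np.abs(F[:N // 2 + 1]) / N          # |F[n]|/N for n = 0..N/2
amp[1:N // 2] *= 2                        # double only the "complete" a.c. bins
freqs = np.arange(N // 2 + 1) * fs / N    # 0, 1, 2 Hz

# amp recovers the component amplitudes: 5 (d.c.), 2 (1 Hz), 3 (2 Hz)
```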

4.1 Discrete Fourier Transform Errors

To what degree does the DFT approximate the Fourier transform of the function underlying the data? Clearly the DFT is only an approximation, since it provides values only at a finite set of frequencies. But how correct are these discrete values themselves? There are two main types of DFT error: aliasing and leakage.

4.1.1 Aliasing

This is another manifestation of the phenomenon which we have now encountered several times. If the initial samples are not sufficiently closely spaced to represent high-frequency components present in the underlying function, then the DFT values will be corrupted by aliasing. As before, the solution is either to increase the sampling rate (if possible) or to pre-filter the signal in order to minimise its high-frequency spectral content.

4.1.2 Leakage

Recall that the continuous Fourier transform of a periodic waveform requires the integration to be performed over the interval -∞ to +∞ or over an integer

number of cycles of the waveform. If we attempt to complete the DFT over a non-integer number of cycles of the input signal, then we might expect the transform to be corrupted in some way. This is indeed the case, as will now be shown.

Consider the case of an input signal which is a sinusoid with a fractional number of cycles in the N data samples. The DFT for this case (for n = 0 to n = N/2) is shown below in Fig. 4.5.

Figure 4.5: Leakage.

We might have expected the DFT to give an output at just the quantised frequencies either side of the true frequency. This certainly does happen, but we also find non-zero outputs at all other frequencies. This smearing effect, which is known as leakage, arises because we are effectively calculating the Fourier series for the waveform in Fig. 4.6, which has major discontinuities, hence other frequency components.

Figure 4.6: Leakage. The repeating waveform has discontinuities.

Most sequences of real data are much more complicated than the sinusoidal sequences that we have so far considered and so it will not be possible to avoid introducing discontinuities when using a finite number of points from the sequence in order to calculate the DFT.

The solution is to use one of the window functions which we encountered in the

design of FIR filters (e.g. the Hamming or Hanning windows). These window functions taper the samples towards zero values at both endpoints, and so there is no discontinuity (or very little, in the case of the Hanning window) with a hypothetical next period. Hence the leakage of spectral content away from its correct location is much reduced, as in Fig. 4.7.

Figure 4.7: Leakage is reduced using a Hanning window.
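The effect in Fig. 4.7 is easy to reproduce. This sketch (illustrative signal, not from the notes) takes a sinusoid with a non-integer number of cycles and compares the magnitude spectrum with and without a Hanning window; the energy far from the true frequency drops sharply when the window is applied:

```python
import numpy as np

fs, N = 64.0, 64
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 10.25 * t)    # 10.25 cycles in the record: leakage

spec_rect = np.abs(np.fft.rfft(x))                   # rectangular (no) window
spec_hann = np.abs(np.fft.rfft(x * np.hanning(N)))   # Hanning-windowed

# Bins well away from the 10 Hz peak carry far less energy after windowing
far = np.r_[0:5, 16:N // 2 + 1]
# spec_hann[far].max() is much smaller than spec_rect[far].max()
```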

Stochastic Models

4.2 Introduction

We now discuss autocorrelation and autoregressive processes; that is, the correlation between successive values of a time series and the linear relations between them. We also show how these models can be used for spectral estimation.

4.3 Autocorrelation

Given a time series x_t we can produce a lagged version of the time series x_{t-T} which lags the original by T samples. We can then calculate the covariance between the two signals:

    σ_xx(T) = (1/(N-1)) Σ_{t=1}^{N} (x_{t-T} - µ_x)(x_t - µ_x)     (4.3)

where µ_x is the signal mean and there are N samples. We can then plot σ_xx(T) as a function of T. This is known as the autocovariance function. The autocorrelation function is a normalised version of the autocovariance:

    r_xx(T) = σ_xx(T) / σ_xx(0)     (4.4)

Note that σ_xx(0) = σ_x². We also have r_xx(0) = 1. Also, because σ_xy = σ_yx we have r_xx(T) = r_xx(-T); the autocorrelation (and autocovariance) are symmetric, or even, functions. Figure 4.8 shows a signal and a lagged version of it and Figure 4.9 shows the autocorrelation function.

Figure 4.8: Signal x_t (top) and x_{t+5} (bottom). The bottom trace leads the top trace by 5 samples. Or we may say it lags the top by -5 samples.

Figure 4.9: Autocorrelation function for x_t. Notice the negative correlation at lag 2 and positive correlation at lag 4. Can you see from Figure 4.8 why these should occur?
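Equations (4.3) and (4.4) translate directly into code. A sketch (illustrative signal, not the data set of the figures): for a noisy oscillation, the autocorrelation dips negative at half the period and recovers at the full period, just as in Fig. 4.9:

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation r_xx(T) = sigma_xx(T)/sigma_xx(0), T = 0..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    N = len(x)
    cov = np.array([np.sum(x[T:] * x[:N - T]) / (N - 1)
                    for T in range(max_lag + 1)])   # autocovariance, eq. (4.3)
    return cov / cov[0]                             # normalise, eq. (4.4)

# Noisy oscillation with a period of 40 samples
t = np.arange(200)
x = np.sin(2 * np.pi * t / 40) + 0.1 * np.random.default_rng(1).standard_normal(200)
r = autocorr(x, 40)
# r[0] = 1; r[20] is strongly negative; r[40] is strongly positive
```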

4.4 Autoregressive models

An autoregressive (AR) model predicts the value of a time series from previous values. A pth-order AR model is defined as

    x_t = Σ_{i=1}^{p} x_{t-i} a_i + e_t     (4.5)

where a_i are the AR coefficients and e_t is the prediction error. These errors are assumed to be Gaussian with zero mean and variance σ_e². It is also possible to include an extra parameter a_0 to soak up the mean value of the time series. Alternatively, we can first subtract the mean from the data and then apply the zero-mean AR model described above. We would also subtract any trend from the data (such as a linear or exponential increase), as the AR model assumes stationarity.

The above expression shows the relation for a single time step. To show the relation for all time steps we can use matrix notation. We can write the AR model in matrix form by making use of the embedding matrix, M, and by writing the signal and AR coefficients as vectors. We now

illustrate this for p = 4. This gives

    M = [ x_4      x_3      x_2      x_1
          x_5      x_4      x_3      x_2
          ...
          x_{N-1}  x_{N-2}  x_{N-3}  x_{N-4} ]     (4.6)

We can also write the AR coefficients as a vector a = [a_1, a_2, a_3, a_4]^T, the errors as a vector e = [e_5, e_6, ..., e_N]^T and the signal itself as a vector X = [x_5, x_6, ..., x_N]^T. This gives

    [ x_5 ]   [ x_4      x_3      x_2      x_1     ] [ a_1 ]   [ e_5 ]
    [ x_6 ] = [ x_5      x_4      x_3      x_2     ] [ a_2 ] + [ e_6 ]     (4.7)
    [ ... ]   [ ...                                ] [ a_3 ]   [ ... ]
    [ x_N ]   [ x_{N-1}  x_{N-2}  x_{N-3}  x_{N-4} ] [ a_4 ]   [ e_N ]

which can be compactly written as

    X = Ma + e     (4.8)

The AR model is therefore a special case of the multivariate regression model. The AR coefficients can therefore be computed from the equation

    â = (M^T M)^{-1} M^T X     (4.9)

The AR predictions can then be computed as the vector

    X̂ = Mâ     (4.10)

and the error vector is then e = X - X̂. The variance of the noise is then calculated as the variance of the error vector.

To illustrate this process we analyse our data set using an AR(4) model. The AR coefficients were estimated to be

    â = [1.46, 1.8, 0.6, 0.186]^T     (4.11)

and the AR predictions are shown in Figure 4.10. The noise variance was estimated to be σ_e² = 0.0790, which corresponds to a standard deviation of 0.281. The variance of the original time series was 0.3882, giving a signal to noise ratio of (0.3882 - 0.0790)/0.0790 = 3.91.
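The fitting procedure of equations (4.6)-(4.10) can be sketched directly (illustrative AR(2) process and helper name, not the notes' data set): build the embedding matrix, solve the least-squares problem, and take the error variance as the noise variance:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit via the embedding matrix: a_hat = (M'M)^-1 M'X."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Row for time t holds [x_{t-1}, ..., x_{t-p}]
    M = np.column_stack([x[p - i:N - i] for i in range(1, p + 1)])
    X = x[p:]
    a_hat, *_ = np.linalg.lstsq(M, X, rcond=None)   # solves X = M a in L2
    e = X - M @ a_hat                               # prediction errors
    return a_hat, np.var(e)

# Recover the coefficients of a known AR(2) process
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + 0.1 * rng.standard_normal()

a_hat, noise_var = fit_ar(x, 2)
# a_hat is close to [0.75, -0.5]; noise_var is close to 0.1**2 = 0.01
```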

Figure 4.10: (a) Original signal (solid line), X, and predictions (dotted line), X̂, from an AR(4) model and (b) the prediction errors, e. Notice that the variance of the errors is much less than that of the original signal.

4.4.1 Relation to autocorrelation

The autoregressive model can be written as

    x_t = a_1 x_{t-1} + a_2 x_{t-2} + ... + a_p x_{t-p} + e_t     (4.12)

If we multiply both sides by x_{t-k} we get

    x_t x_{t-k} = a_1 x_{t-1} x_{t-k} + a_2 x_{t-2} x_{t-k} + ... + a_p x_{t-p} x_{t-k} + e_t x_{t-k}     (4.13)

If we now sum over t and divide by N-1, and assume that the signal is zero mean (if it isn't we can easily make it so, just by subtracting the mean value from every sample), the above equation can be rewritten in terms of covariances at different lags:

    σ_xx(k) = a_1 σ_xx(k-1) + a_2 σ_xx(k-2) + ... + a_p σ_xx(k-p) + σ_{e,x}     (4.14)

where the last term σ_{e,x} is the covariance between the noise and the signal. But as the noise is assumed to be independent of the signal, σ_{e,x} = 0. If we now divide every term by the signal variance we get a relation between the correlations at different lags:

    r_xx(k) = a_1 r_xx(k-1) + a_2 r_xx(k-2) + ... + a_p r_xx(k-p)     (4.15)
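Relation (4.15) can be checked numerically: simulate an AR(2) process, estimate the sample autocorrelations, and confirm each r_xx(k) is (approximately) the stated combination of the two previous lags. A sketch with illustrative coefficients:

```python
import numpy as np

# Simulate x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + e_t
rng = np.random.default_rng(3)
a1, a2 = 0.75, -0.5
x = np.zeros(50000)
for t in range(2, len(x)):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()

# Sample autocorrelations r_xx(0..5)
x = x - x.mean()
N = len(x)
r = np.array([np.sum(x[k:] * x[:N - k]) for k in range(6)]) / np.sum(x * x)

# Each r[k] is approximately a1*r[k-1] + a2*r[k-2], as in eq. (4.15)
```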

This holds for all lags. For an AR(p) model we can write this relation out for the first p lags. For p = 4:

    [ r_xx(1) ]   [ r_xx(0)  r_xx(-1)  r_xx(-2)  r_xx(-3) ] [ a_1 ]
    [ r_xx(2) ] = [ r_xx(1)  r_xx(0)   r_xx(-1)  r_xx(-2) ] [ a_2 ]     (4.16)
    [ r_xx(3) ]   [ r_xx(2)  r_xx(1)   r_xx(0)   r_xx(-1) ] [ a_3 ]
    [ r_xx(4) ]   [ r_xx(3)  r_xx(2)   r_xx(1)   r_xx(0)  ] [ a_4 ]

which can be compactly written as

    r = Ra     (4.17)

where r is the autocorrelation vector and R is the autocorrelation matrix. The above equations are known, after their discoverers, as the Yule-Walker relations. They provide another way to estimate AR coefficients:

    a = R^{-1} r     (4.18)

This leads to a more efficient algorithm than the general method for multivariate linear regression (equation 4.9), because we can exploit the structure in the autocorrelation matrix. By noting that r_xx(k) = r_xx(-k) we can rewrite the

correlation matrix as

    R = [ 1        r_xx(1)  r_xx(2)  r_xx(3)
          r_xx(1)  1        r_xx(1)  r_xx(2)
          r_xx(2)  r_xx(1)  1        r_xx(1)
          r_xx(3)  r_xx(2)  r_xx(1)  1       ]     (4.19)

Because this matrix is both symmetric and a Toeplitz matrix (the terms along any diagonal are the same) we can use a recursive estimation technique known as the Levinson-Durbin algorithm.

4.5 Moving Average Models

A Moving Average (MA) model of order q is defined as

    x_t = Σ_{i=0}^{q} b_i e_{t-i}     (4.20)

where e_t is Gaussian random noise with zero mean and variance σ_e². They are a type of FIR filter. These can be combined with AR models to get Autoregressive

Moving Average (ARMA) models:

    x_t = Σ_{i=1}^{p} a_i x_{t-i} + Σ_{i=0}^{q} b_i e_{t-i}     (4.21)

which can be described as an ARMA(p,q) model. They are a type of IIR filter. Usually, however, FIR and IIR filters have a set of fixed coefficients which have been chosen to give the filter particular frequency characteristics. In MA or ARMA modelling the coefficients are tuned to a particular time series so as to capture the spectral characteristics of the underlying process.

4.6 Spectral Estimation using AR models

Autoregressive models can also be used for spectral estimation. An AR(p) model predicts the next value in a time series as a linear combination of the p previous values:

    x_t = - Σ_{k=1}^{p} a_k x_{t-k} + e_t     (4.22)

where a_k are the AR coefficients and e_t is IID Gaussian noise with zero mean and variance σ². Note the sudden -ve sign; this is arbitrary and only added to make the terms in the spectral equations (as below) +ve.

The above equation can be solved by using the z-transform. This allows the equation to be written as

    X(z) ( 1 + Σ_{k=1}^{p} a_k z^{-k} ) = E(z)     (4.23)

It can then be rewritten for X(z) as

    X(z) = E(z) / ( 1 + Σ_{k=1}^{p} a_k z^{-k} )     (4.24)

Taking z = exp(iωT_s), where ω is frequency and T_s is the sampling period, and noting that the power in the noise process is σ_e² T_s, we can see that the frequency-domain characteristics of an AR model are given by

    P(ω) = σ_e² T_s / | 1 + Σ_{k=1}^{p} a_k exp(-ikωT_s) |²     (4.25)

An AR(p) model can provide spectral estimates with p/2 peaks; therefore if you know how many peaks you're looking for in the spectrum you can define the AR model order. Alternatively, AR model order estimation methods should automatically provide the appropriate level of smoothing of the estimated spectrum.

AR spectral estimation has two distinct advantages over methods based on the Fourier transform: (i) power can be estimated over a continuous range of frequencies (not just at fixed intervals) and (ii) the power estimates have less variance.

Figure 4.11: Power spectral estimates of two sine waves in additive noise using (a) Discrete Fourier transform method and (b) autoregressive spectral estimation.
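The whole AR route to a spectrum can be sketched end to end, in the spirit of Figure 4.11 (illustrative signal, frequencies and helper names, not the notes' data): fit AR coefficients by least squares, then evaluate equation (4.25). The fit uses the prediction form x_t = Σ b_i x_{t-i} + e_t, so the coefficients are negated to match the sign convention of equation (4.22):

```python
import numpy as np

def ar_spectrum(a, noise_var, Ts, w):
    """P(w) = noise_var*Ts / |1 + sum_k a_k exp(-i k w Ts)|^2, eq. (4.25)."""
    k = np.arange(1, len(a) + 1)
    denom = 1 + np.exp(-1j * np.outer(w * Ts, k)) @ a
    return noise_var * Ts / np.abs(denom) ** 2

# Two sine waves (0.1 and 0.3 cycles/sample) in additive noise
rng = np.random.default_rng(4)
Ts, n = 1.0, np.arange(4096)
x = (np.sin(2 * np.pi * 0.1 * n) + np.sin(2 * np.pi * 0.3 * n)
     + 0.5 * rng.standard_normal(n.size))

# Least-squares AR(8) fit, then negate for the eq. (4.22) convention
p = 8
M = np.column_stack([x[p - i:-i] for i in range(1, p + 1)])
b, *_ = np.linalg.lstsq(M, x[p:], rcond=None)
e = x[p:] - M @ b

w = np.linspace(0.01, np.pi, 2000)          # continuous frequency range
P = ar_spectrum(-b, np.var(e), Ts, w)
# P has sharp peaks near w = 2*pi*0.1 and w = 2*pi*0.3
```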


Parametric Signal Modeling and Linear Prediction Theory 1. Discrete-time Stochastic Processes

Parametric Signal Modeling and Linear Prediction Theory 1. Discrete-time Stochastic Processes Parametric Signal Modeling and Linear Prediction Theory 1. Discrete-time Stochastic Processes Electrical & Computer Engineering North Carolina State University Acknowledgment: ECE792-41 slides were adapted

More information

Time Series Examples Sheet

Time Series Examples Sheet Lent Term 2001 Richard Weber Time Series Examples Sheet This is the examples sheet for the M. Phil. course in Time Series. A copy can be found at: http://www.statslab.cam.ac.uk/~rrw1/timeseries/ Throughout,

More information

Final Exam January 31, Solutions

Final Exam January 31, Solutions Final Exam January 31, 014 Signals & Systems (151-0575-01) Prof. R. D Andrea & P. Reist Solutions Exam Duration: Number of Problems: Total Points: Permitted aids: Important: 150 minutes 7 problems 50 points

More information

Discrete Fourier Transform

Discrete Fourier Transform Discrete Fourier Transform Virtually all practical signals have finite length (e.g., sensor data, audio records, digital images, stock values, etc). Rather than considering such signals to be zero-padded

More information

3 Theory of stationary random processes

3 Theory of stationary random processes 3 Theory of stationary random processes 3.1 Linear filters and the General linear process A filter is a transformation of one random sequence {U t } into another, {Y t }. A linear filter is a transformation

More information

1. Determine if each of the following are valid autocorrelation matrices of WSS processes. (Correlation Matrix),R c =

1. Determine if each of the following are valid autocorrelation matrices of WSS processes. (Correlation Matrix),R c = ENEE630 ADSP Part II w/ solution. Determine if each of the following are valid autocorrelation matrices of WSS processes. (Correlation Matrix) R a = 4 4 4,R b = 0 0,R c = j 0 j 0 j 0 j 0 j,r d = 0 0 0

More information

Contents. Digital Signal Processing, Part II: Power Spectrum Estimation

Contents. Digital Signal Processing, Part II: Power Spectrum Estimation Contents Digital Signal Processing, Part II: Power Spectrum Estimation 5. Application of the FFT for 7. Parametric Spectrum Est. Filtering and Spectrum Estimation 7.1 ARMA-Models 5.1 Fast Convolution 7.2

More information

Lesson 1. Optimal signalbehandling LTH. September Statistical Digital Signal Processing and Modeling, Hayes, M:

Lesson 1. Optimal signalbehandling LTH. September Statistical Digital Signal Processing and Modeling, Hayes, M: Lesson 1 Optimal Signal Processing Optimal signalbehandling LTH September 2013 Statistical Digital Signal Processing and Modeling, Hayes, M: John Wiley & Sons, 1996. ISBN 0471594318 Nedelko Grbic Mtrl

More information

Vibration Testing. an excitation source a device to measure the response a digital signal processor to analyze the system response

Vibration Testing. an excitation source a device to measure the response a digital signal processor to analyze the system response Vibration Testing For vibration testing, you need an excitation source a device to measure the response a digital signal processor to analyze the system response i) Excitation sources Typically either

More information

Linear Stochastic Models. Special Types of Random Processes: AR, MA, and ARMA. Digital Signal Processing

Linear Stochastic Models. Special Types of Random Processes: AR, MA, and ARMA. Digital Signal Processing Linear Stochastic Models Special Types of Random Processes: AR, MA, and ARMA Digital Signal Processing Department of Electrical and Electronic Engineering, Imperial College d.mandic@imperial.ac.uk c Danilo

More information

Lecture 27 Frequency Response 2

Lecture 27 Frequency Response 2 Lecture 27 Frequency Response 2 Fundamentals of Digital Signal Processing Spring, 2012 Wei-Ta Chu 2012/6/12 1 Application of Ideal Filters Suppose we can generate a square wave with a fundamental period

More information

Part III Spectrum Estimation

Part III Spectrum Estimation ECE79-4 Part III Part III Spectrum Estimation 3. Parametric Methods for Spectral Estimation Electrical & Computer Engineering North Carolina State University Acnowledgment: ECE79-4 slides were adapted

More information

Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science : Discrete-Time Signal Processing

Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science : Discrete-Time Signal Processing Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.34: Discrete-Time Signal Processing OpenCourseWare 006 ecture 8 Periodogram Reading: Sections 0.6 and 0.7

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY & Contents PREFACE xiii 1 1.1. 1.2. Difference Equations First-Order Difference Equations 1 /?th-order Difference

More information

ECE 301 Fall 2010 Division 2 Homework 10 Solutions. { 1, if 2n t < 2n + 1, for any integer n, x(t) = 0, if 2n 1 t < 2n, for any integer n.

ECE 301 Fall 2010 Division 2 Homework 10 Solutions. { 1, if 2n t < 2n + 1, for any integer n, x(t) = 0, if 2n 1 t < 2n, for any integer n. ECE 3 Fall Division Homework Solutions Problem. Reconstruction of a continuous-time signal from its samples. Consider the following periodic signal, depicted below: {, if n t < n +, for any integer n,

More information

EE482: Digital Signal Processing Applications

EE482: Digital Signal Processing Applications Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu EE482: Digital Signal Processing Applications Spring 2014 TTh 14:30-15:45 CBC C222 Lecture 11 Adaptive Filtering 14/03/04 http://www.ee.unlv.edu/~b1morris/ee482/

More information

DISCRETE FOURIER TRANSFORM

DISCRETE FOURIER TRANSFORM DISCRETE FOURIER TRANSFORM 1. Introduction The sampled discrete-time fourier transform (DTFT) of a finite length, discrete-time signal is known as the discrete Fourier transform (DFT). The DFT contains

More information

STAT 443 Final Exam Review. 1 Basic Definitions. 2 Statistical Tests. L A TEXer: W. Kong

STAT 443 Final Exam Review. 1 Basic Definitions. 2 Statistical Tests. L A TEXer: W. Kong STAT 443 Final Exam Review L A TEXer: W Kong 1 Basic Definitions Definition 11 The time series {X t } with E[X 2 t ] < is said to be weakly stationary if: 1 µ X (t) = E[X t ] is independent of t 2 γ X

More information

Covariances of ARMA Processes

Covariances of ARMA Processes Statistics 910, #10 1 Overview Covariances of ARMA Processes 1. Review ARMA models: causality and invertibility 2. AR covariance functions 3. MA and ARMA covariance functions 4. Partial autocorrelation

More information

E 4101/5101 Lecture 6: Spectral analysis

E 4101/5101 Lecture 6: Spectral analysis E 4101/5101 Lecture 6: Spectral analysis Ragnar Nymoen 3 March 2011 References to this lecture Hamilton Ch 6 Lecture note (on web page) For stationary variables/processes there is a close correspondence

More information

Definition of a Stochastic Process

Definition of a Stochastic Process Definition of a Stochastic Process Balu Santhanam Dept. of E.C.E., University of New Mexico Fax: 505 277 8298 bsanthan@unm.edu August 26, 2018 Balu Santhanam (UNM) August 26, 2018 1 / 20 Overview 1 Stochastic

More information

Marcel Dettling. Applied Time Series Analysis SS 2013 Week 05. ETH Zürich, March 18, Institute for Data Analysis and Process Design

Marcel Dettling. Applied Time Series Analysis SS 2013 Week 05. ETH Zürich, March 18, Institute for Data Analysis and Process Design Marcel Dettling Institute for Data Analysis and Process Design Zurich University of Applied Sciences marcel.dettling@zhaw.ch http://stat.ethz.ch/~dettling ETH Zürich, March 18, 2013 1 Basics of Modeling

More information

Filter Analysis and Design

Filter Analysis and Design Filter Analysis and Design Butterworth Filters Butterworth filters have a transfer function whose squared magnitude has the form H a ( jω ) 2 = 1 ( ) 2n. 1+ ω / ω c * M. J. Roberts - All Rights Reserved

More information

6.12 System Identification The Easy Case

6.12 System Identification The Easy Case 252 SYSTEMS 612 System Identification The Easy Case Assume that someone brings you a signal processing system enclosed in a black box The box has two connectors, one marked input and the other output Other

More information

EAS 305 Random Processes Viewgraph 1 of 10. Random Processes

EAS 305 Random Processes Viewgraph 1 of 10. Random Processes EAS 305 Random Processes Viewgraph 1 of 10 Definitions: Random Processes A random process is a family of random variables indexed by a parameter t T, where T is called the index set λ i Experiment outcome

More information

Probability Space. J. McNames Portland State University ECE 538/638 Stochastic Signals Ver

Probability Space. J. McNames Portland State University ECE 538/638 Stochastic Signals Ver Stochastic Signals Overview Definitions Second order statistics Stationarity and ergodicity Random signal variability Power spectral density Linear systems with stationary inputs Random signal memory Correlation

More information

IV. Covariance Analysis

IV. Covariance Analysis IV. Covariance Analysis Autocovariance Remember that when a stochastic process has time values that are interdependent, then we can characterize that interdependency by computing the autocovariance function.

More information

We will only present the general ideas on how to obtain. follow closely the AR(1) and AR(2) cases presented before.

We will only present the general ideas on how to obtain. follow closely the AR(1) and AR(2) cases presented before. ACF and PACF of an AR(p) We will only present the general ideas on how to obtain the ACF and PACF of an AR(p) model since the details follow closely the AR(1) and AR(2) cases presented before. Recall that

More information

EE538 Final Exam Fall :20 pm -5:20 pm PHYS 223 Dec. 17, Cover Sheet

EE538 Final Exam Fall :20 pm -5:20 pm PHYS 223 Dec. 17, Cover Sheet EE538 Final Exam Fall 005 3:0 pm -5:0 pm PHYS 3 Dec. 17, 005 Cover Sheet Test Duration: 10 minutes. Open Book but Closed Notes. Calculators ARE allowed!! This test contains five problems. Each of the five

More information

Discrete time processes

Discrete time processes Discrete time processes Predictions are difficult. Especially about the future Mark Twain. Florian Herzog 2013 Modeling observed data When we model observed (realized) data, we encounter usually the following

More information

Time and Spatial Series and Transforms

Time and Spatial Series and Transforms Time and Spatial Series and Transforms Z- and Fourier transforms Gibbs' phenomenon Transforms and linear algebra Wavelet transforms Reading: Sheriff and Geldart, Chapter 15 Z-Transform Consider a digitized

More information

SNR Calculation and Spectral Estimation [S&T Appendix A]

SNR Calculation and Spectral Estimation [S&T Appendix A] SR Calculation and Spectral Estimation [S&T Appendix A] or, How not to make a mess of an FFT Make sure the input is located in an FFT bin 1 Window the data! A Hann window works well. Compute the FFT 3

More information

Nonparametric and Parametric Defined This text distinguishes between systems and the sequences (processes) that result when a WN input is applied

Nonparametric and Parametric Defined This text distinguishes between systems and the sequences (processes) that result when a WN input is applied Linear Signal Models Overview Introduction Linear nonparametric vs. parametric models Equivalent representations Spectral flatness measure PZ vs. ARMA models Wold decomposition Introduction Many researchers

More information

The Discrete-Time Fourier

The Discrete-Time Fourier Chapter 3 The Discrete-Time Fourier Transform 清大電機系林嘉文 cwlin@ee.nthu.edu.tw 03-5731152 Original PowerPoint slides prepared by S. K. Mitra 3-1-1 Continuous-Time Fourier Transform Definition The CTFT of

More information

Elec4621 Advanced Digital Signal Processing Chapter 11: Time-Frequency Analysis

Elec4621 Advanced Digital Signal Processing Chapter 11: Time-Frequency Analysis Elec461 Advanced Digital Signal Processing Chapter 11: Time-Frequency Analysis Dr. D. S. Taubman May 3, 011 In this last chapter of your notes, we are interested in the problem of nding the instantaneous

More information

National Sun Yat-Sen University CSE Course: Information Theory. Maximum Entropy and Spectral Estimation

National Sun Yat-Sen University CSE Course: Information Theory. Maximum Entropy and Spectral Estimation Maximum Entropy and Spectral Estimation 1 Introduction What is the distribution of velocities in the gas at a given temperature? It is the Maxwell-Boltzmann distribution. The maximum entropy distribution

More information

Thesis Proposal The Short Time Fourier Transform and Local Signals

Thesis Proposal The Short Time Fourier Transform and Local Signals Thesis Proposal The Short Fourier Transform and Local Signals Shuhei Oumura June, 010 1 Introduction/Summary In this thesis, I am going to investigate the properties of the short time Fourier transform

More information

CONTENTS NOTATIONAL CONVENTIONS GLOSSARY OF KEY SYMBOLS 1 INTRODUCTION 1

CONTENTS NOTATIONAL CONVENTIONS GLOSSARY OF KEY SYMBOLS 1 INTRODUCTION 1 DIGITAL SPECTRAL ANALYSIS WITH APPLICATIONS S.LAWRENCE MARPLE, JR. SUMMARY This new book provides a broad perspective of spectral estimation techniques and their implementation. It concerned with spectral

More information

Statistical and Adaptive Signal Processing

Statistical and Adaptive Signal Processing r Statistical and Adaptive Signal Processing Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing Dimitris G. Manolakis Massachusetts Institute of Technology Lincoln Laboratory

More information

(Refer Slide Time: 01:28 03:51 min)

(Refer Slide Time: 01:28 03:51 min) Digital Signal Processing Prof. S. C. Dutta Roy Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture 40 FIR Design by Windowing This is the 40 th lecture and our topic for

More information

2. An Introduction to Moving Average Models and ARMA Models

2. An Introduction to Moving Average Models and ARMA Models . An Introduction to Moving Average Models and ARMA Models.1 White Noise. The MA(1) model.3 The MA(q) model..4 Estimation and forecasting of MA models..5 ARMA(p,q) models. The Moving Average (MA) models

More information

3. ARMA Modeling. Now: Important class of stationary processes

3. ARMA Modeling. Now: Important class of stationary processes 3. ARMA Modeling Now: Important class of stationary processes Definition 3.1: (ARMA(p, q) process) Let {ɛ t } t Z WN(0, σ 2 ) be a white noise process. The process {X t } t Z is called AutoRegressive-Moving-Average

More information

Time Series 2. Robert Almgren. Sept. 21, 2009

Time Series 2. Robert Almgren. Sept. 21, 2009 Time Series 2 Robert Almgren Sept. 21, 2009 This week we will talk about linear time series models: AR, MA, ARMA, ARIMA, etc. First we will talk about theory and after we will talk about fitting the models

More information

Lecture 2: Univariate Time Series

Lecture 2: Univariate Time Series Lecture 2: Univariate Time Series Analysis: Conditional and Unconditional Densities, Stationarity, ARMA Processes Prof. Massimo Guidolin 20192 Financial Econometrics Spring/Winter 2017 Overview Motivation:

More information

Chapter 2 Random Processes

Chapter 2 Random Processes Chapter 2 Random Processes 21 Introduction We saw in Section 111 on page 10 that many systems are best studied using the concept of random variables where the outcome of a random experiment was associated

More information

Parametric Method Based PSD Estimation using Gaussian Window

Parametric Method Based PSD Estimation using Gaussian Window International Journal of Engineering Trends and Technology (IJETT) Volume 29 Number 1 - November 215 Parametric Method Based PSD Estimation using Gaussian Window Pragati Sheel 1, Dr. Rajesh Mehra 2, Preeti

More information

Digital Signal Processing Lecture 10 - Discrete Fourier Transform

Digital Signal Processing Lecture 10 - Discrete Fourier Transform Digital Signal Processing - Discrete Fourier Transform Electrical Engineering and Computer Science University of Tennessee, Knoxville November 12, 2015 Overview 1 2 3 4 Review - 1 Introduction Discrete-time

More information

BME 50500: Image and Signal Processing in Biomedicine. Lecture 5: Correlation and Power-Spectrum CCNY

BME 50500: Image and Signal Processing in Biomedicine. Lecture 5: Correlation and Power-Spectrum CCNY 1 BME 50500: Image and Signal Processing in Biomedicine Lecture 5: Correlation and Power-Spectrum Lucas C. Parra Biomedical Engineering Department CCNY http://bme.ccny.cuny.edu/faculty/parra/teaching/signal-and-image/

More information

Linear models. Chapter Overview. Linear process: A process {X n } is a linear process if it has the representation.

Linear models. Chapter Overview. Linear process: A process {X n } is a linear process if it has the representation. Chapter 2 Linear models 2.1 Overview Linear process: A process {X n } is a linear process if it has the representation X n = b j ɛ n j j=0 for all n, where ɛ n N(0, σ 2 ) (Gaussian distributed with zero

More information