Advanced Signal Processing Linear Stochastic Processes


1 Advanced Signal Processing: Linear Stochastic Processes
Danilo Mandic, room 83, ext: 4627
Department of Electrical and Electronic Engineering
Imperial College London, UK
URL: mandic
© D. P. Mandic, Advanced Signal Processing

2 Aims of this lecture
- To introduce linear stochastic models for real world data
- Learn how stochastic models shape the spectrum of white noise
- Understand how stochastic processes are created, and become familiar with the autocorrelation, variance and power spectrum of such processes
- Learn how to derive the parameters of linear stochastic ARMA models
- Introduce special cases: autoregressive (AR) and moving average (MA) models
- Stability conditions and model order selection (partial correlations)
- Optimal model order selection criteria (MDL, AIC, ...)
- Apply stochastic modelling to real world data (speech, environmental, finance), and address the issues of under- and over-modelling
This material is a first fundamental step for the modelling of real world data

3 Example from Lecture: re-interpretation (stochastic vs deterministic disturbance)
[Figure: the function sin(2x), the noisy version sin(2x) + 2*randn, and the ACFs of both signals]
Which disturbance is more detrimental: deterministic DC or stochastic noise?

4 Justification: Wold decomposition theorem (existence theorem, also mentioned in your coursework)
Wold's decomposition theorem plays a central role in time series analysis, and explicitly proves that any covariance stationary time series can be decomposed into two different parts: deterministic (such as a sinewave) and stochastic (filtered WGN). Therefore, a general process can be written as a sum of two processes
x[n] = x_p[n] + x_r[n] = x_p[n] + Σ_{j=0}^{q} b_j w[n-j],   w: a white process
x_r[n]: regular random process; x_p[n]: predictable process, with x_r[n] ⊥ x_p[n], that is, E{x_r[m] x_p[n]} = 0
We can therefore treat separately the predictable part (e.g. a deterministic sinusoidal signal) and the random signal. Our focus will be on the modelling of the random component.
NB: Recall the difference between shift invariance and time invariance

5 Towards linear stochastic processes
Wold's theorem implies that any purely non-deterministic covariance stationary process can be arbitrarily well approximated by an ARMA process. Therefore, the general form for the power spectrum of a WSS process is
P_x(e^{jω}) = Σ_{k=1}^{N} α_k δ(ω - ω_k) + P_{x_r}(e^{jω})
We are interested in processes generated by filtering white noise with a linear shift invariant filter that has a rational system function. This class of digital filters includes the following system functions:
- Autoregressive (AR), all-pole system: H(z) = 1/A(z)
- Moving average (MA), all-zero system: H(z) = B(z)
- Autoregressive moving average (ARMA), poles and zeros: H(z) = B(z)/A(z)
Definition: A covariance-stationary process x[n] is called (linearly) deterministic if p(x[n] | x[n-1], x[n-2], ...) = x[n]. A stationary deterministic process, x_p[n], can be predicted exactly (with zero error) from its entire past, x_p[n-1], x_p[n-2], x_p[n-3], ...

6 How can we categorise real world measurements?
Where would you place a DC level in WGN, x[n] = A + w[n], w ~ N(0, σ²_w)?
(a) Noisy oscillations, (b) Nonlinearity and noisy oscillations, (c) Random nonlinear process
[Diagram: signal classes arranged along the axes Linearity-Nonlinearity and Determinism-Stochasticity, with ARMA (linear stochastic) and NARMA models, the route to chaos, stochastic chaos, and mixtures of sources as intermediate cases]
Our lecture is about ARMA models (linear stochastic). How about observing the signal through a nonlinear sensor?

7 Wold Decomposition (Representation) Theorem
Example 1: More on deterministic vs. stochastic behaviour
Consider a stochastic process given by x[n] = A cos(n) + B sin(n), where A, B ~ N(0, σ²) and A is independent of B (A and B are independent normal random variables). This process is deterministic because it can be written as
x[n] = (sin(2)/sin(1)) x[n-1] - x[n-2]
that is, based on the history of x[n]. Therefore
p(x[n] | x[n-1], x[n-2], ...) = (sin(2)/sin(1)) x[n-1] - x[n-2] = x[n]
Remember: Deterministic does not mean that x[n] is non-random

8 Example 2: Sinewave revisited, is it deterministic or stochastic?
Is a sinewave best described as nonlinear deterministic or linear stochastic?
[Figure: pole-zero plot of the generating filter (poles on the unit circle), a white noise realisation, and the filtered noise, which is nearly sinusoidal]
Matlab code:
z=[]; p=[i, -i]; [num,den]=zp2tf(z,p,1); zplane(num,den);
s=randn(1,1000); s=filter(num,den,s);
figure; subplot(311), plot(s), subplot(313), plot(s), subplot(312), zplane(num,den)
The AR model of a sinewave: x(k) = a1*x(k-1) + a2*x(k-2) + w(k), with a2 = -.98 and w ~ N(0,1)

9 Example 3: Spectra of real world data
[Figure: the sunspot number series, its ACF and partial ACF, and the Burg power spectral density estimate]
Recorded from about 1700 onwards. This signal is random, as sunspots originate from explosions of helium on the Sun. Still, the number of sunspots obeys a relatively simple model, and is predictable, as shown later in the Lecture.

10 Second-order all-pole transfer functions
p1 = .999 exp(jπ/4), p2 = .9 exp(jπ/4), p3 = .9 exp(j7π/2)
[Figure: the pole positions in the z-plane and the corresponding magnitude responses; the closer the poles to the unit circle, the sharper the spectral peak]
We have two conjugate complex poles, e.g. p1 and p1*, therefore
H(z) = 1 / ((z - p1)(z - p1*)) = z^{-2} / ((1 - p1 z^{-1})(1 - p1* z^{-1}))
Transfer function for p1 = ρe^{jθ} (ignoring the z^{-2} in the numerator on the RHS):
H(z) = 1 / ((1 - ρe^{jθ} z^{-1})(1 - ρe^{-jθ} z^{-1})) = 1 / (1 - 2ρ cos(θ) z^{-1} + ρ² z^{-2})
For the sinewave, ρ = 1:
H(z) = 1 / (1 - 2cos(θ) z^{-1} + z^{-2}) = 1 / (1 + a_1 z^{-1} + a_2 z^{-2})
Indeed, a sinewave can be modelled as an autoregressive process

11 How do we model a real world signal?
Suppose the measured real world signal has e.g. a bandpass (or any other) power spectrum. We desire to describe the whole long signal with very few parameters.
1. Can we model the first and second order statistics of a real world signal by shaping the white noise spectrum using some transfer function?
2. Does this produce the same second order properties (mean, variance, ACF, spectrum) for any white noise input?
[Diagram: white noise W(ω) with a flat spectrum passes through H(z) = B(z)/A(z) to give X(ω) with a bandpass spectrum, where P_x(ω) = |H(ω)|² P_w(ω)]
Can we use this linear stochastic model for prediction?

12 Spectrum of ARMA models (look also at the Recap slides)
Recall that two conjugate complex poles of A(z) give one peak in the spectrum, and that the ACF and the PSD carry the same information.
In ARMA modelling we filter white noise w[n] (the so-called driving input) with a causal linear shift invariant filter with transfer function H(z), a rational system function with p poles and q zeros given by
X(z) = H(z)W(z),   H(z) = B_q(z)/A_p(z) = (Σ_{k=0}^{q} b_k z^{-k}) / (1 + Σ_{k=1}^{p} a_k z^{-k})
For a stable H(z), the ARMA(p,q) stochastic process x[n] will be wide sense stationary. For the driving noise power P_w = σ²_w, the power of the stochastic process x[n] is (recall P_y = |H(z)|² P_x = H(z)H*(z)P_x)
P_x(z) = σ²_w B_q(z)B_q(z^{-1}) / (A_p(z)A_p(z^{-1}))   ⇒   P_x(e^{jθ}) = σ²_w |B_q(e^{jθ})|² / |A_p(e^{jθ})|² = σ²_w |B_q(ω)|² / |A_p(ω)|²
Notice that conjugation (*) in analogue frequency corresponds to z^{-1} in digital frequency.

13 Example 4: Can the shape of the power spectrum tell us about the order of the polynomials B(z) and A(z)?
Plot the power spectrum of an ARMA(2,2) process for which the zeros of H(z) are at z = .95e^{±jπ/2} and the poles are at z = .9e^{±j2π/5}.
Solution: The system function is (poles and zeros: resonance and sink)
H(z) = (1 + .9025 z^{-2}) / (1 - .5562 z^{-1} + .81 z^{-2})
[Figure: the power spectrum in dB, with a peak near ω = 2π/5 due to the poles and a dip near ω = π/2 due to the zeros]
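The solution above can be checked numerically. The sketch below (Python with NumPy, standing in for the Matlab used elsewhere in these slides; the grid size is an arbitrary choice) evaluates |H(e^{jω})|² on a dense frequency grid and confirms that the spectral peak sits near the pole angle 2π/5 while the dip sits near the zero angle π/2.

```python
import numpy as np

# ARMA(2,2) from Example 4: zeros at .95exp(+/-j*pi/2), poles at .9exp(+/-j*2*pi/5)
b = np.array([1.0, 0.0, 0.9025])                              # B(z) = 1 + .9025 z^-2
a = np.array([1.0, -2 * 0.9 * np.cos(2 * np.pi / 5), 0.81])   # A(z) = 1 - .5562 z^-1 + .81 z^-2

w = np.linspace(0, np.pi, 4096)          # frequency grid on [0, pi]
zinv = np.exp(-1j * w)                   # z^-1 on the unit circle

# H(e^{jw}) = B(e^{jw}) / A(e^{jw}), polynomials evaluated in z^-1
H = np.polyval(b[::-1], zinv) / np.polyval(a[::-1], zinv)
psd = np.abs(H) ** 2                     # power spectrum for unit-variance driving noise

w_peak = w[np.argmax(psd)]               # should be near the pole angle 2*pi/5
w_dip = w[np.argmin(psd)]                # should be near the zero angle pi/2
```

The pole radius (.9) sets the sharpness of the resonance, and the zero radius (.95) the depth of the sink; moving either closer to the unit circle exaggerates the corresponding feature.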

14 Difference equation representation: the ACF follows the data model!
Random processes x[n] and w[n] are related by a linear difference equation with constant coefficients, given by
ARMA(p,q):   x[n] = Σ_{l=1}^{p} a_l x[n-l]  +  Σ_{l=0}^{q} b_l w[n-l],   H(z) = X(z)/W(z) = B(z)/A(z)
             (autoregressive part)    (moving average part)
Notice that the autocorrelation function of x[n] and the crosscorrelation between the stochastic process x[n] and the driving input w[n] follow the same difference equation: if we multiply both sides of the above equation by x[n-k] and take the statistical expectation, we have
r_xx(k) = Σ_{l=1}^{p} a_l r_xx(k-l)  +  Σ_{l=0}^{q} b_l r_xw(k-l)
          (easy to calculate)           (can be complicated)
Since x[n] is WSS, it follows that x[n] and w[n] are jointly WSS.

15 General linear processes: Stationarity and invertibility
Consider a linear stochastic process at the output of a linear filter, driven by WGN w[n]:
x[n] = w[n] + b_1 w[n-1] + b_2 w[n-2] + ··· = w[n] + Σ_{j=1}^{∞} b_j w[n-j]
that is, a weighted sum of past samples of the driving white noise w[n]. For this process to be a valid stationary process, the coefficients must be absolutely summable, that is, Σ_{j=1}^{∞} |b_j| < ∞.
This also implies that, under the stationarity conditions, x[n] is also a weighted sum of its own past values, plus an added shock w[n], that is
x[n] = a_1 x[n-1] + a_2 x[n-2] + ··· + w[n]
- A linear process is stationary if Σ_{j=1}^{∞} |b_j| < ∞
- A linear process is invertible if Σ_{j=1}^{∞} |a_j| < ∞
Recall that H(ω) = Σ_{n=0}^{∞} h(n)e^{-jωn}, so for ω = 0, H(0) = Σ_{n=0}^{∞} h(n)

16 Autoregressive processes (pole only)
A general AR(p) process (autoregressive of order p) is given by
x[n] = a_1 x[n-1] + ··· + a_p x[n-p] + w[n] = Σ_{i=1}^{p} a_i x[n-i] + w[n] = aᵀx[n-1] + w[n]
where a = [a_1, ..., a_p]ᵀ and x[n-1] = [x[n-1], ..., x[n-p]]ᵀ.
Observe the auto-regression in {x[n]}: the past of x is used to generate the future.
Duality between AR and MA processes: For example, the first order autoregressive process, AR(1),
x[n] = a_1 x[n-1] + w[n] = Σ_{j=0}^{∞} b_j w[n-j],   with b_j = a_1^j
has an MA(∞) representation, too (right hand side above). This follows from the duality between IIR and FIR filters.

17 Example 5: Statistical properties of AR processes
Drive the AR(4) model from Example 6 with two different WGN realisations, w ~ N(0,1).
[Figure: the two white noise realisations, the corresponding AR(4) signals, their ACFs and PSDs]
r = wgn(2048,1,0);
a = [2.237, -2.943, 2.697, -.966]; a = [1 -a];
x = filter(1,a,r);
xacf = xcorr(x);
xpsd = abs(fftshift(fft(xacf)));
- The time domain random AR(4) processes look different
- The ACFs and PSDs are exactly the same (2nd-order stats)!
- This signifies the importance of our statistical approach

18 ACF and normalised ACF of AR processes
To obtain the autocorrelation function of an AR process, multiply the AR(p) equation by x[n-k] to obtain (recall that r(-m) = r(m))
x[n-k]x[n] = a_1 x[n-k]x[n-1] + a_2 x[n-k]x[n-2] + ··· + a_p x[n-k]x[n-p] + x[n-k]w[n]
Notice that E{x[n-k]w[n]} vanishes when k > 0. Therefore we have
r_xx(0) = a_1 r_xx(1) + a_2 r_xx(2) + ··· + a_p r_xx(p) + σ²_w,   k = 0
r_xx(k) = a_1 r_xx(k-1) + a_2 r_xx(k-2) + ··· + a_p r_xx(k-p),   k > 0
On dividing throughout by r_xx(0) we obtain
ρ(k) = a_1 ρ(k-1) + a_2 ρ(k-2) + ··· + a_p ρ(k-p),   k > 0
The quantities ρ(k) are called normalised correlation coefficients.
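The recursion for ρ(k) can be exercised directly. The sketch below (Python rather than the Matlab used in these slides; the AR(2) coefficients are illustrative, not from the lecture) builds the theoretical normalised ACF of an AR(2) process from the Yule-Walker initial conditions and then propagates the recursion.

```python
import numpy as np

# Illustrative AR(2): x[n] = a1 x[n-1] + a2 x[n-2] + w[n]
a1, a2 = 0.75, -0.5

# Initial conditions from the p = 2 Yule-Walker equations:
#   rho(1) = a1 / (1 - a2),  rho(2) = a1*rho(1) + a2
rho = np.zeros(10)
rho[0] = 1.0
rho[1] = a1 / (1 - a2)

# For k > 1 the normalised ACF follows the model recursion
for k in range(2, 10):
    rho[k] = a1 * rho[k - 1] + a2 * rho[k - 2]
```

Note that only the initialisation needs the Yule-Walker equations; every later coefficient is generated by the same difference equation as the data themselves.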

19 Variance and spectrum of AR processes
Variance: When k = 0, the contribution from the term E{x[n-k]w[n]} is σ²_w, and
r_xx(0) = a_1 r_xx(-1) + a_2 r_xx(-2) + ··· + a_p r_xx(-p) + σ²_w
Divide by r_xx(0) = σ²_x to obtain
σ²_x = σ²_w / (1 - ρ_1 a_1 - ρ_2 a_2 - ··· - ρ_p a_p)
Power spectrum: (recall that P_xx = |H(z)|² P_ww = H(z)H*(z)P_ww, the expression for the output power of a linear system, see the Recap slides)
P_xx(f) = 2σ²_w / |1 - a_1 e^{-j2πf} - ··· - a_p e^{-j2πpf}|²,   |f| ≤ 1/2
Look at "Spectrum of Linear Systems" from the Background lecture

20 Example 6a: AR(p) signal generation
The AR(4) coefficients are a_i = [2.237, -2.943, 2.697, -.966]
- Generate the input signal x by filtering white noise through the AR filter
- Estimate the PSD of x based on a fourth-order AR model
Careful! The Matlab routines require the AR coefficients a in the format a = [1, a_1, ..., a_p]
Solution:
randn('state',1);
x = filter(1,a,randn(256,1));   % AR system output
pyulear(x,4)                    % Fourth-order estimate
[Figure: Yule-Walker power spectral density estimates from 256 and 1024 data points; notice the dependence on data length]

21 Example 6b: Alternative AR power spectrum calculation
Consider the AR(4) system given by
y[n] = 2.237y[n-1] - 2.943y[n-2] + 2.697y[n-3] - .966y[n-4] + w[n]
a = [1, -2.237, 2.943, -2.697, .966];   % AR filter coefficients
freqz(1,a)                              % AR filter frequency response
title('AR System Frequency Response')
[Figure: magnitude and phase of the AR system frequency response]

22 Key: Finding AR coefficients, the Yule-Walker equations
(there are several similar forms; we follow the most concise one)
For k = 1, 2, ..., p, the general autocorrelation relation gives a set of equations:
r_xx(1) = a_1 r_xx(0) + a_2 r_xx(1) + ··· + a_p r_xx(p-1)
r_xx(2) = a_1 r_xx(1) + a_2 r_xx(0) + ··· + a_p r_xx(p-2)
...
r_xx(p) = a_1 r_xx(p-1) + a_2 r_xx(p-2) + ··· + a_p r_xx(0)
These equations are called the Yule-Walker or normal equations. Their solution gives us the set of autoregressive parameters a = [a_1, ..., a_p]ᵀ. The above can be expressed in vector-matrix form as
r_xx = R_xx a   ⇒   a = R_xx^{-1} r_xx
The ACF matrix R_xx is positive definite and Toeplitz, which guarantees that the matrix inversion is possible.
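The whole data-to-model pipeline above fits in a few lines. The sketch below (Python with NumPy/SciPy, in place of the Matlab used in these slides; the AR(2) coefficients, seed, and data length are illustrative assumptions) simulates an AR(2) process, estimates its ACF, forms the Toeplitz system r_xx = R_xx a, and solves for the coefficients.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
a_true = np.array([0.75, -0.5])        # illustrative AR(2) coefficients
N = 100_000

# x[n] = 0.75 x[n-1] - 0.5 x[n-2] + w[n], w ~ N(0,1)
w = rng.standard_normal(N)
x = lfilter([1.0], np.concatenate(([1.0], -a_true)), w)

# Biased ACF estimates r(0), r(1), r(2)
r = np.array([x[: N - k] @ x[k:] / N for k in range(3)])

# Yule-Walker system: [r0 r1; r1 r0] [a1, a2]^T = [r1, r2]^T
R = np.array([[r[0], r[1]], [r[1], r[0]]])
a_hat = np.linalg.solve(R, r[1:])
```

For larger model orders, the same system would be built with a p x p Toeplitz matrix; the positive definiteness noted above is what makes the solve well conditioned.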

23 Example 7: Find the parameters of an AR(2) process
The process x[n] is generated by x[n] = -.2x[n-1] - .8x[n-2] + w[n], that is, x = filter([1],[1,.2,.8],w)
Coursework: comment on the shape of the ACF for large lags.
[Figure: the AR(2) signal and its ACF]
Matlab: for i=1:6; [a,e]=aryule(x,i); display(a); end
a(1) = [.6689]
a(2) = [.246, .88]
a(3) = [.759, .7576, .358]
a(4) = [.762, .753, .456, .83]
a(5) = [.763, .752, .562, .248, .4]
a(6) = [.762, .758, .565, .98, .62, .67]

24 Example 8: Advantages of model-based analysis
Consider the PSDs for different realisations of the AR(4) process from Example 5.
[Figure: empirical PSDs from different realisations (thin black) against the theoretical model PSD (thick red)]
- The different realisations lead to different empirical PSDs
- The theoretical PSD from the model is consistent regardless of the data
N = 1024; w = wgn(N,1,0);
a = [2.237, -2.943, 2.697, -.966]; a = [1 -a];   % Coefficients of AR(4) process
x = filter(1,a,w);
xacf = xcorr(x);                                 % Autocorrelation of AR(4) process
dft = fft(xacf);
EmpPSD = abs(dft/length(dft)).^2;                % Empirical PSD obtained from data
ThePSD = abs(freqz(1,a,N,1)).^2;                 % Theoretical PSD obtained from model

25 Normal equations for the autocorrelation coefficients
For the autocorrelation coefficients ρ_k = r_xx(k)/r_xx(0) we have
ρ_1 = a_1 + a_2 ρ_1 + ··· + a_p ρ_{p-1}
ρ_2 = a_1 ρ_1 + a_2 + ··· + a_p ρ_{p-2}
...
ρ_p = a_1 ρ_{p-1} + a_2 ρ_{p-2} + ··· + a_p
When does the sequence {ρ_0, ρ_1, ρ_2, ...} vanish?
Homework: Try the command xcorr in Matlab

26 Yule-Walker modelling in Matlab
In Matlab, power spectral density using the Y-W method: pyulear
Pxx = pyulear(x,p)
[Pxx,w] = pyulear(x,p,nfft)
[Pxx,f] = pyulear(x,p,nfft,fs)
[Pxx,f] = pyulear(x,p,nfft,fs,'range')
[Pxx,w] = pyulear(x,p,nfft,'range')
Description: Pxx = pyulear(x,p) implements the Yule-Walker algorithm, and returns Pxx, an estimate of the power spectral density (PSD) of the vector x.
To remember for later: this estimate is also a maximum entropy estimate.
See also: aryule, lpc, pburg, pcov, peig, periodogram

27 From data to an AR(p) model
So far, we assumed the model (AR, MA, or ARMA) and analysed the ACF and PSD based on known model coefficients. In practice: DATA → MODEL.
The procedure is as follows:
* record data x(k)
* find the autocorrelation of the data, ACF(x)
* divide by r_xx(0) to obtain the correlation coefficients ρ(k)
* write down the Yule-Walker equations
* solve for the vector of AR parameters
The problem is that we do not know the model order p beforehand. An illustration of the importance of the correct model order is given in the following example.

28 Example 9: Sunspot number estimation
The sunspot numbers are consistent with the properties of a second order AR process.
[Figure: the sunspot time series, its ACF, and a zoomed ACF]
a1 = [.9295]
a2 = [1.474, -.5857]
a3 = [.5492, .775, .284]
a4 = [.567, -.5788, -.2638, .2532]
a5 = [.4773, -.5377, -.739, .74, .555]
a6 = [.4373, .5422, -.29, .558, .2248, .2574]
The sunspot model is x[n] = 1.474 x[n-1] - .5857 x[n-2] + w[n]

29 Special case #1: AR(1) process (Markov)
For Markov processes, instead of the iid condition, we have the first order conditional dependence, that is
p(x[n] | x[n-1], x[n-2], ..., x[0]) = p(x[n] | x[n-1])
Then
x[n] = a_1 x[n-1] + w[n] = w[n] + a_1 w[n-1] + a_1² w[n-2] + ···
Therefore: first order dependence, order-1 memory.
i) For the AR(1) process to be stationary, -1 < a_1 < 1.
ii) Autocorrelation function of AR(1): from the Yule-Walker equations,
r_xx(k) = a_1 r_xx(k-1),   k > 0
In terms of the correlation coefficients, ρ(k) = r(k)/r(0), with ρ_0 = 1:
ρ_k = a_1^k,   k ≥ 0
Notice the difference in the behaviour of the ACF for a_1 positive and negative.

30 Variance and power spectrum of AR(1) process
Both can be calculated directly from the general expressions for the variance and spectrum of AR(p) processes.
Variance: (also from the general expression for the variance of linear processes)
σ²_x = σ²_w / (1 - ρ_1 a_1) = σ²_w / (1 - a_1²)
Power spectrum: Notice how the flat PSD of WGN is shaped according to the position of the pole of the AR(1) model (LP or HP)
P_xx(f) = 2σ²_w / |1 - a_1 e^{-j2πf}|² = 2σ²_w / (1 + a_1² - 2a_1 cos(2πf))
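The closed forms above are easy to verify by simulation. The sketch below (Python, standing in for Matlab; the pole value, seed, and sample count are illustrative assumptions) generates a long AR(1) realisation and compares the sample variance and sample correlation coefficients with σ²_w/(1 - a₁²) and a₁^k.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
a1 = 0.8                                 # positive pole: lowpass shaping of white noise
N = 200_000

w = rng.standard_normal(N)               # sigma_w^2 = 1
x = lfilter([1.0], [1.0, -a1], w)        # x[n] = 0.8 x[n-1] + w[n]

var_theory = 1.0 / (1.0 - a1 ** 2)       # sigma_x^2 = sigma_w^2 / (1 - a1^2)
var_emp = x.var()

rho1 = (x[:-1] @ x[1:]) / (N * var_emp)  # empirical rho(1), should approach a1
rho2 = (x[:-2] @ x[2:]) / (N * var_emp)  # empirical rho(2), should approach a1^2
```

Repeating the experiment with a₁ = -0.8 flips the sign of every odd-lag correlation, which is the time-domain signature of the highpass case.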

31 Example 10: ACF and spectrum of AR(1) for a_1 = ±.8
a_1 < 0: high pass;   a_1 > 0: low pass
[Figure: realisations of x[n] = ±.8x[n-1] + w[n], their ACFs, and Burg power spectral density estimates; for a_1 = .8 the ACF decays monotonically and the PSD is lowpass, while for a_1 = -.8 the ACF alternates in sign and the PSD is highpass]

32 Special case #2: Second order autoregressive processes, p = 2, q = 0, hence the notation AR(2)
The input-output functional relationship is given by (w[n] any white noise)
x[n] = a_1 x[n-1] + a_2 x[n-2] + w[n]   ⇔   X(z) = (a_1 z^{-1} + a_2 z^{-2})X(z) + W(z)
H(z) = X(z)/W(z) = 1 / (1 - a_1 z^{-1} - a_2 z^{-2}),   H(ω) = H(e^{jω}) = 1 / (1 - a_1 e^{-jω} - a_2 e^{-2jω})
Y-W equations for p = 2:
ρ_1 = a_1 + a_2 ρ_1,   ρ_2 = a_1 ρ_1 + a_2
When solved for a_1 and a_2, we have
a_1 = ρ_1(1 - ρ_2)/(1 - ρ_1²),   a_2 = (ρ_2 - ρ_1²)/(1 - ρ_1²)
Connecting the a's and ρ's:
ρ_1 = a_1/(1 - a_2),   ρ_2 = a_2 + a_1²/(1 - a_2)
Since |ρ_k| ≤ ρ(0) = 1, this imposes stability conditions on a_1 and a_2.

33 Variance and power spectrum
Both are readily obtained from the general AR(2) process equation!
Variance:
σ²_x = σ²_w / (1 - ρ_1 a_1 - ρ_2 a_2) = [(1 - a_2)/(1 + a_2)] · σ²_w / [(1 - a_2)² - a_1²]
Power spectrum:
P_xx(f) = 2σ²_w / |1 - a_1 e^{-j2πf} - a_2 e^{-j4πf}|² = 2σ²_w / (1 + a_1² + a_2² - 2a_1(1 - a_2)cos(2πf) - 2a_2 cos(4πf)),   |f| ≤ 1/2
Stability conditions (Condition 1 can be obtained from the denominator of the variance, Condition 2 from the expression for ρ_1, etc.):
Condition 1: a_1 + a_2 < 1
Condition 2: a_2 - a_1 < 1
Condition 3: -1 < a_2 < 1
These can be visualised within the so-called stability triangle.
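The three conditions can be cross-checked against the magnitudes of the characteristic roots. A small sketch (Python; the coefficient pairs are illustrative, not from the lecture) compares the triangle test with a direct root-magnitude test for z² - a₁z - a₂ = 0.

```python
import numpy as np

def stable_by_triangle(a1: float, a2: float) -> bool:
    """AR(2) stability via the three stability-triangle conditions."""
    return (a1 + a2 < 1) and (a2 - a1 < 1) and (-1 < a2 < 1)

def stable_by_roots(a1: float, a2: float) -> bool:
    """AR(2) stability via the roots of the characteristic equation z^2 - a1 z - a2 = 0."""
    roots = np.roots([1.0, -a1, -a2])
    return bool(np.all(np.abs(roots) < 1))

# Illustrative coefficient pairs: the first three are stable, the last two are not
pairs = [(0.75, -0.5), (1.2, -0.8), (-0.7, 0.2), (0.5, 0.6), (2.0, -0.5)]
agree = [stable_by_triangle(a1, a2) == stable_by_roots(a1, a2) for a1, a2 in pairs]
```

Note that the pair (1.2, -0.8) is stable even though a₁ > 1: it is the triangle, not the size of a₁ alone, that decides stability.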

34 Stability triangle
[Figure: the stability triangle in the (a_1, a_2) plane, split into four regions; real roots above the parabola a_1² + 4a_2 = 0 (Regions 1 and 2), complex roots below it (Regions 3 and 4), with a typical ACF sketched for each region]
i) Real roots, Region 1: monotonically decaying ACF
ii) Real roots, Region 2: decaying oscillating ACF
iii) Complex roots, Region 3: oscillating pseudo-periodic ACF
iv) Complex roots, Region 4: pseudo-periodic ACF

35 Example 11: Stability triangle and ACFs of AR(2) signals
Left: a = [.7, .2] (Region 2).   Right: a = [1.474, -.586] (Region 4, the sunspot series)
[Figure: an AR(2) realisation and its ACF for each case; the sunspot series exhibits a pseudo-periodic ACF]

36 Determining regions in the stability triangle
Let us examine the autocorrelation function of AR(2) processes. The ACF obeys
ρ_k = a_1 ρ_{k-1} + a_2 ρ_{k-2},   k > 0
Real roots (a_1² + 4a_2 > 0): the ACF is a mixture of damped exponentials.
Complex roots (a_1² + 4a_2 < 0): the ACF exhibits a pseudo-periodic behaviour,
ρ_k = D^k sin(2πf_0 k + Φ) / sin(Φ)
that of a sinewave with frequency f_0 and phase Φ, damped by the factor D, where
D = √(-a_2),   cos(2πf_0) = a_1 / (2√(-a_2)),   tan(Φ) = [(1 + D²)/(1 - D²)] tan(2πf_0)

37 Example 12: AR(2) with a_1 > 0, a_2 < 0, Region 4
Consider: x[n] = .75x[n-1] - .5x[n-2] + w[n]
[Figure: a realisation of the process, its ACF, and the Burg power spectral density estimate]
The damping factor is D = √.5 ≈ .707, and the frequency is f_0 = (1/2π) cos⁻¹(.5303) ≈ .161.
The fundamental period of the autocorrelation function is therefore T_0 = 1/f_0 ≈ 6.2.

38 Model order selection: Partial autocorrelation function
Consider the earlier example (the AR(2) signal x = filter([1],[1,.2,.8],w)), using a slightly different notation for the AR coefficients.
[Figure: the AR(2) signal and its ACF]
To find p, first re-write the AR coefficients of order p as [a_p1, ..., a_pp]:
p = 1: [.6689] = a_11
p = 2: [.246, .88] = [a_21, a_22]
p = 3: [.759, .7576, .358] = [a_31, a_32, a_33]
p = 4: [.762, .753, .456, .83] = [a_41, a_42, a_43, a_44]
p = 5: [.763, .752, .562, .248, .4] = [a_51, ..., a_55]
p = 6: [.762, .758, .565, .98, .62, .67]

39 Partial autocorrelation function: Motivation
Notice: the ACF of an AR(p) process is infinite in duration, but it can be described in terms of p nonzero functions of the ACF.
Denote by a_kj the jth coefficient in an autoregressive representation of order k, so that a_kk is the last coefficient. Then
ρ_j = a_k1 ρ_{j-1} + ··· + a_k(k-1) ρ_{j-k+1} + a_kk ρ_{j-k},   j = 1, 2, ..., k
leading to the Yule-Walker equations, which can be written as
[ 1        ρ_1      ρ_2      ···  ρ_{k-1} ] [ a_k1 ]   [ ρ_1 ]
[ ρ_1      1        ρ_1      ···  ρ_{k-2} ] [ a_k2 ] = [ ρ_2 ]
[ ···                                     ] [ ···  ]   [ ··· ]
[ ρ_{k-1}  ρ_{k-2}  ρ_{k-3}  ···  1       ] [ a_kk ]   [ ρ_k ]
The only difference from the standard Y-W equations is the use of the symbols a_ki to denote the AR coefficient a_i while indicating the model order.

40 Finding partial ACF coefficients
Solving these equations for k = 1, 2, ... successively, we obtain
a_11 = ρ_1
a_22 = det[ 1  ρ_1 ; ρ_1  ρ_2 ] / det[ 1  ρ_1 ; ρ_1  1 ] = (ρ_2 - ρ_1²)/(1 - ρ_1²)
a_33 = det[ 1  ρ_1  ρ_1 ; ρ_1  1  ρ_2 ; ρ_2  ρ_1  ρ_3 ] / det[ 1  ρ_1  ρ_2 ; ρ_1  1  ρ_1 ; ρ_2  ρ_1  1 ],   etc.
The quantity a_kk, regarded as a function of the model order k, is called the partial autocorrelation function (PAC).
For an AR(p) process, the PAC a_kk will be nonzero for k ≤ p and zero for k > p: this indicates the order of an AR(p) process.
In practice, we introduce a small threshold, as for real world data it is difficult to guarantee that a_kk = 0 for k > p. (see your coursework)
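The successive solves can be sketched compactly (Python; the AR(2) coefficients are illustrative): for the theoretical ACF of an AR(2) process, the partial autocorrelation a_kk should equal a₂ at k = 2 and vanish for every k > 2.

```python
import numpy as np
from scipy.linalg import toeplitz

# Theoretical normalised ACF of an illustrative AR(2): x[n] = .75x[n-1] - .5x[n-2] + w[n]
a1, a2 = 0.75, -0.5
rho = np.zeros(8)
rho[0] = 1.0
rho[1] = a1 / (1 - a2)
for k in range(2, 8):
    rho[k] = a1 * rho[k - 1] + a2 * rho[k - 2]

def pacf(rho: np.ndarray, kmax: int) -> np.ndarray:
    """Partial ACF a_kk: the last coefficient of the order-k Yule-Walker solution."""
    out = np.zeros(kmax)
    for k in range(1, kmax + 1):
        R = toeplitz(rho[:k])                    # k x k correlation matrix
        a_k = np.linalg.solve(R, rho[1 : k + 1]) # order-k Yule-Walker solve
        out[k - 1] = a_k[-1]                     # keep only a_kk
    return out

pac = pacf(rho, 4)                               # a_11, a_22, a_33, a_44
```

With estimated rather than theoretical correlations, the values for k > p are only approximately zero, which is exactly why the confidence-interval threshold of the next slides is needed.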

41 Example 13: Work by Yule, a model of sunspot numbers
Recorded for > 300 years. To study them, in 1927 Yule invented the AR(2) model.
[Figure: the sunspot series and its ACF]
We first center the data, as we do not wish to model the DC offset (deterministic component), but the stochastic component (an AR model driven by white noise)!
Using the Y-W equations we obtain:
a1 = [.9295]
a2 = [1.474, -.5857]
a3 = [.5492, .775, .284]
a4 = [.567, -.5788, -.2638, .2532]
a5 = [.4773, -.5377, -.739, .74, .555]
a6 = [.4373, .5422, -.29, .558, .2248, .2574]

42 Example 13 (contd.): Model order for sunspot numbers
After k = 2 the partial correlation function (PAC) is very small, indicating p = 2.
[Figure: the sunspot series, its ACF, partial ACF, and Burg power spectral density estimate]
The broken red lines denote the 95% confidence interval, which has the value ±1.96/√N.

43 Example 14: Model order for an AR(3) process
An AR(3) process realisation, its ACF, and its partial autocorrelation (PAC).
[Figure: the AR(3) signal, its ACF, partial ACF, and Burg power spectral density estimate]
After lag k = 3, the PAC becomes very small (broken lines: confidence interval)

44 Example 15: Model order selection for a financial time series (the correct and the time-reversed series)
[Figure: an exchange rate series and its time-reversed version, together with their autocorrelation and autoconvolution functions]
Partial correlations:
AR(1): a = [.9994]
AR(2): a = [.9994, .354]
AR(3): a = [.9994, .354, .24]
AR(4): a = [.9994, .354, .24, .29]
AR(5): a = [.9994, .354, .24, .29, .29]
AR(6): a = [.9994, .354, .24, .29, .29, .72]

45 AR model based prediction: Importance of model order
For a zero mean process x[n], the best linear predictor, in the mean square error sense, of x[n] based on x[n-1], x[n-2], ... is
x̂[n] = a_k1 x[n-1] + a_k2 x[n-2] + ··· + a_kk x[n-k]
(apply the E{·} operator to the general AR(p) model expression, and recall that E{w[n]} = 0)
(Hint: x̂[n] = E{a_k1 x[n-1] + ··· + a_kk x[n-k] + w[n] | x[n-1], ...} = a_k1 x[n-1] + ··· + a_kk x[n-k])
This holds whether the process is an AR process or not.
In Matlab, check the function ARYULE, and the functions PYULEAR, ARMCOV, ARBURG, ARCOV, LPC, PRONY.
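A minimal sketch of this predictor (Python; the AR(2) coefficients, seed, and data length are illustrative assumptions): when the true coefficients are used, the one-step-ahead prediction error is just the driving noise, so its variance should be close to σ²_w.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
a = np.array([0.75, -0.5])               # illustrative AR(2) coefficients
N = 100_000

w = rng.standard_normal(N)               # sigma_w^2 = 1
x = lfilter([1.0], np.concatenate(([1.0], -a)), w)

# One-step-ahead linear predictor: xhat[n] = a1 x[n-1] + a2 x[n-2]
xhat = a[0] * x[1:-1] + a[1] * x[:-2]
err = x[2:] - xhat                       # prediction error, ideally ~ w[n]

mse = err.var()                          # should be close to sigma_w^2 = 1
gain = x.var() / mse                     # prediction gain: signal power / error power
```

Under- or over-modelling shows up directly here: too low an order leaves structure in err (mse above σ²_w), while extra coefficients buy essentially nothing.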

46 Example 16: Under- vs over-fitting a model
Estimation of the parameters of an AR(2) process. The original AR(2) process
x[n] = 1.2x[n-1] - .9x[n-2] + w[n],   w[n] ~ N(0,1)
is estimated using AR(1), AR(2) and AR(20) models.
[Figure: the original and estimated signals, and the estimated coefficients for each model order]
The higher order coefficients of the AR(20) model are close to zero and therefore do not contribute significantly to the estimate, while the AR(1) model does not have sufficient degrees of freedom. (see also Appendix 3)

47 Effects of over-modelling on autoregressive spectra: Spectral line splitting
Consider an AR(2) signal x[n] = -.9x[n-2] + w[n] with w ~ N(0,1), N = 64 data samples, and model orders p = 4 (solid blue) and p = 12 (broken green).
[Figure: the AR spectral estimates for the two model orders]
Notice that this is an AR(2) model! Although the true spectrum has a single spectral peak at ω ≈ π/2 (blue), when over-modelling with p = 12 this peak is split into two peaks (green).

48 Model order selection: practical issues
In practice: the greater the model order, the greater the accuracy, but also the complexity.
Q: When do we stop? What is the optimal model order?
Solution: To establish a trade-off between computational complexity and model accuracy, we introduce a penalty for a high model order. Such criteria for model order selection are:
MDL, the minimum description length criterion (by Rissanen):
p_opt = min_p [ log(E) + p log(N)/N ]
AIC, the Akaike information criterion:
p_opt = min_p [ log(E) + 2p/N ]
where E is the loss function (typically the cumulative squared error), p the number of estimated parameters (model order), and N the number of available data points.
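Both criteria can be sketched numerically (Python; the AR(2) coefficients, seed, and data length are illustrative assumptions): fit AR models of increasing order via Yule-Walker, take E as the order-p prediction error power, and pick the order that minimises the penalised score.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(3)
N = 10_000
a_true = np.array([0.75, -0.5])                 # data generated by an AR(2) model
x = lfilter([1.0], np.concatenate(([1.0], -a_true)), rng.standard_normal(N))

r = np.array([x[: N - k] @ x[k:] / N for k in range(8)])   # biased ACF estimates

mdl, aic = [], []
for p in range(1, 7):
    a_p = np.linalg.solve(toeplitz(r[:p]), r[1 : p + 1])   # order-p Yule-Walker fit
    E = r[0] - a_p @ r[1 : p + 1]               # prediction error power at order p
    mdl.append(np.log(E) + p * np.log(N) / N)   # MDL = log(E) + p log(N)/N
    aic.append(np.log(E) + 2 * p / N)           # AIC = log(E) + 2p/N

p_mdl = int(np.argmin(mdl)) + 1                 # selected model order (MDL)
p_aic = int(np.argmin(aic)) + 1                 # selected model order (AIC)
```

The error term alone decreases monotonically with p; it is the penalty term that turns the curve convex and creates the minimum at the true order.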

49 Example 17: Model order selection, MDL vs AIC
MDL and AIC criteria for an AR(2) model with a_1 = .5
[Figure: the cumulative squared error, and the MDL and AIC criteria, versus the model order p]
The graphs show the squared prediction error (vertical axis) versus the model order p (horizontal axis). Notice that p_opt = 2.
The curves are convex, i.e. a monotonically decreasing squared error combined with an increasing penalty term (the MDL or AIC correction). Hence, we have a unique minimum at p = 2, reflecting the correct model order (no over-modelling).

50 Moving average processes, MA(q)
A general MA(q) process is given by
x[n] = w[n] + b_1 w[n-1] + ··· + b_q w[n-q]
Autocorrelation function: the autocovariance function of MA(q) is
c_k = E{(w[n] + b_1 w[n-1] + ··· + b_q w[n-q])(w[n-k] + b_1 w[n-k-1] + ··· + b_q w[n-k-q])}
Hence the variance of the process is
c_0 = (1 + b_1² + ··· + b_q²) σ²_w
The ACF of an MA process has a cutoff after lag q.
Spectrum: an all-zero transfer function struggles to model peaky PSDs,
P(f) = 2σ²_w |1 + b_1 e^{-j2πf} + b_2 e^{-j4πf} + ··· + b_q e^{-j2πqf}|²
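The lag-q cutoff follows from c_k = σ²_w Σ_j b_j b_{j+k}, which is zero as soon as k > q. A short sketch (Python; the MA(3) coefficients are illustrative, not from the lecture) computes the theoretical autocovariance as a correlation of the coefficient vector with itself.

```python
import numpy as np

# Illustrative MA(3): x[n] = w[n] + .5w[n-1] + .3w[n-2] + .2w[n-3], sigma_w^2 = 1
b = np.array([1.0, 0.5, 0.3, 0.2])

# Theoretical autocovariance c_k = sigma_w^2 * sum_j b_j b_{j+k}; zero beyond lag q
full = np.correlate(b, b, mode="full")   # lags -q ... +q
c = full[len(b) - 1 :]                   # keep non-negative lags 0 ... q

c0 = c[0]                                # variance: (1 + b1^2 + b2^2 + b3^2) * sigma_w^2
```

There simply are no nonzero lags beyond q = 3: the vector c has exactly q + 1 entries, which is the cutoff the slide describes.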

51 Example 18: A third order moving average, MA(3), process
An MA(3) process and its autocorrelation (ACF) and partial autocorrelation (PAC) functions.
[Figure: the MA(3) signal, its ACF, partial ACF, and Burg power spectral density estimate]
After lag k = 3, the ACF becomes very small (broken lines: confidence interval), consistent with the MA(3) cutoff; the PAC, in contrast, tails off gradually

52 Analysis of nonstationary signals
[Figure: a speech signal split into three windows W1, W2, W3, with the partial ACF and the MDL criterion computed for each window; the calculated model order differs from window to window]
- Consider a real world speech signal, and three different segments with different statistical properties
- Different AR model orders are required for different segments of speech: an opportunity for content analysis!
- To deal with nonstationarity we need short sliding data windows

53 Summary: AR and MA processes
i) A stationary finite AR(p) process can be represented as an infinite order MA process. A finite MA process can be represented as an infinite order AR process.
ii) The finite MA(q) process has an ACF that is zero beyond lag q. For an AR process, the ACF is infinite in length and consists of a mixture of damped exponentials and/or damped sine waves.
iii) Finite MA processes are always stable, and there is no requirement on the coefficients of MA processes for stationarity. However, for invertibility, the roots of the characteristic equation must lie inside the unit circle.
iv) AR processes produce spectra with sharp peaks (two poles of A(z) per peak), whereas MA processes cannot produce peaky spectra.
ARMA modelling is a classic technique which has found a tremendous number of applications.

54 Summary: Wold's Decomposition Theorem and ARMA Every stationary time series can be represented as a sum of a perfectly predictable process and a feasible moving average process. Two time series with the same Wold representation are the same, as the Wold representation is unique. Since any MA process also has an ARMA representation, working with ARMA models is not an arbitrary choice but is physically justified. The causality and stationarity of ARMA processes depend entirely on the AR parameters and not on the MA parameters. An MA process is not uniquely determined by its ACF. An AR(p) process is always invertible, even if it is not stationary. An MA(q) process is always stationary, even if it is non-invertible c D. P. Mandic Advanced Signal Processing 54

55 Recap: Linear systems [Diagram: known input x[n], X(z) -> transfer function h(n), H(z) (known/unknown) -> unknown/known output y[n], Y(z)] H(z) = Y(z)/X(z) Described by their impulse response h(n) or the transfer function H(z). In the frequency domain (remember that z = e^{j theta}) the transfer function is H(theta) = sum_{n=-inf}^{inf} h(n) e^{-j n theta}, that is, y[n] = sum_{r=-inf}^{inf} h(r) x[n-r] = h * x. The next two slides show how to calculate the power of the output, y[n]. c D. P. Mandic Advanced Signal Processing 55

56 Recap: Linear systems statistical properties mean and variance i) Mean: E{y[n]} = E{ sum_{r=-inf}^{inf} h(r) x[n-r] } = sum_{r=-inf}^{inf} h(r) E{x[n-r]}, so that mu_y = mu_x sum_{r=-inf}^{inf} h(r) = mu_x H(0) [NB: H(theta) = sum_{r=-inf}^{inf} h(r) e^{-j r theta}. For theta = 0, then H(0) = sum_{r=-inf}^{inf} h(r)] ii) Cross-correlation: r_yx(m) = E{y[n]x[n+m]} = sum_{r=-inf}^{inf} h(r) E{x[n-r]x[n+m]} = sum_{r=-inf}^{inf} h(r) r_xx(m+r), a convolution of the input ACF and {h}. Cross-power spectrum: S_yx(f) = F{r_yx} = S_xx(f)H(f) c D. P. Mandic Advanced Signal Processing 56
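The cross-correlation property has a well-known practical use: for white input, r_xx(m) = sigma^2 delta(m), so r_xy(m) = sigma^2 h(m), and cross-correlating a white input with the output recovers the impulse response (system identification). A minimal numerical sketch, with an arbitrary example impulse response:

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.5, 1.0, -0.3, 0.2])      # "unknown" system (example values)
N = 200_000
x = rng.standard_normal(N)               # white input, sigma^2 = 1
y = np.convolve(x, h, mode="full")[:N]   # causal output y[n] = sum_r h(r)x[n-r]

# r_xy(m) = E{x[n] y[n+m]} = sigma^2 h(m) for white input
h_hat = np.array([np.dot(x[:N - m], y[m:]) / (N - m) for m in range(len(h))])
# h_hat recovers h to within O(1/sqrt(N)) estimation error
```

This is exactly the relation r_yx(m) = sum_r h(r) r_xx(m+r) above, collapsed by the delta-shaped ACF of the white input.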

57 Recap: Linear systems statistical properties output These are key properties used in AR spectrum estimation. From r_xy(m) = r_yx(-m) we have r_xy(m) = sum_{r=-inf}^{inf} h(r) r_xx(m-r). Now we write r_yy(m) = E{y[n]y[n+m]} = sum_{r=-inf}^{inf} h(r) E{x[n-r]y[n+m]} = sum_{r=-inf}^{inf} h(r) r_xy(m+r) = sum_{r=-inf}^{inf} h(-r) r_xy(m-r). By taking Fourier transforms we have S_xy(f) = S_xx(f)H(f) and S_yy(f) = S_xy(f)H(-f), so that, as a function of r_xx, S_yy(f) = H(f)H(-f)S_xx(f) = |H(f)|^2 S_xx(f). Output power spectrum = input power spectrum x squared magnitude of the transfer function c D. P. Mandic Advanced Signal Processing 57
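The output-spectrum relation S_yy(f) = |H(f)|^2 S_xx(f) can be checked by filtering white noise and averaging periodograms of the output. A minimal sketch; the FIR coefficients are an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, -0.9, 0.4])                  # example filter
Nfft, navg = 256, 2000

S_est = np.zeros(Nfft)
for _ in range(navg):
    x = rng.standard_normal(Nfft + len(h) - 1)  # unit-variance white input
    y = np.convolve(x, h, mode="valid")         # stationary output, length Nfft
    S_est += np.abs(np.fft.fft(y))**2 / Nfft    # periodogram of this realisation
S_est /= navg                                   # averaged periodogram ~ S_yy(f)

H = np.fft.fft(h, Nfft)
S_theory = np.abs(H)**2                         # |H(f)|^2 * sigma_x^2 (= 1 here)
```

With 2000 averaged realisations the estimate matches |H(f)|^2 to within a few percent per frequency bin, which is how white noise "shaped" by a linear filter acquires the filter's spectrum.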

58 Appendix 1: More on sunspot numbers (recorded since 1874) [Figure: Top: original sunspot series (NASA, average number of sunspots); Middle and Bottom: two AR(2) representations (filtered Gaussian processes, 2nd order Yule-Walker). Left: time series; Middle: Burg power spectral density; Right: partial autocorrelations] c D. P. Mandic Advanced Signal Processing 58

59 Appendix 2: Model order for an AR(2) process An AR(2) signal, its ACF, and its partial autocorrelations (PAC) [Figure: AR(2) signal; its ACF; its partial ACF; Burg power spectral density estimate] After lag k = 2, the PAC becomes very small (broken line: confidence interval) c D. P. Mandic Advanced Signal Processing 59

60 Appendix 3: More on over parametrisation Consider the linear stochastic process given by x[n] = x[n-1] - 0.16x[n-2] + w[n] - 0.8w[n-1] It clearly has an ARMA(2,1) form. Consider now its coefficient vectors written as polynomials in the z domain: a(z) = 1 - z + 0.16z^2 = (1 - 0.8z)(1 - 0.2z) b(z) = 1 - 0.8z These polynomials clearly have a common factor (1 - 0.8z), and therefore after cancelling these terms, we have the resulting lower order polynomials: a(z) = 1 - 0.2z b(z) = 1 The above process is therefore an AR(1) process, given by x[n] = 0.2x[n-1] + w[n] and the original ARMA(2,1) version was over parametrised. c D. P. Mandic Advanced Signal Processing 60
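The common-factor cancellation above can be confirmed with NumPy's polynomial routines (coefficients in ascending powers of z, matching a(z) and b(z)):

```python
import numpy as np
from numpy.polynomial import polynomial as P

a = np.array([1.0, -1.0, 0.16])   # a(z) = 1 - z + 0.16 z^2  (ascending powers)
b = np.array([1.0, -0.8])         # b(z) = 1 - 0.8 z

q, r = P.polydiv(a, b)            # a(z) = q(z) b(z) + r(z)
# q = [1, -0.2] and r ~ 0, i.e. a(z) = (1 - 0.8z)(1 - 0.2z): the factor (1 - 0.8z)
# cancels with b(z), and the ARMA(2,1) model reduces to the AR(1) model
# x[n] = 0.2 x[n-1] + w[n]
```

A zero remainder is the numerical signature of over-parametrisation: the pole at the root of (1 - 0.8z) is exactly cancelled by a zero at the same location.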

61 Something to think about... What would be the properties of a multivariate (MV) autoregressive, say MVAR(1), process, where the quantities w, x, a now become matrices, that is X(n) = AX(n-1) + W(n) Would the inverse of the multichannel correlation matrix depend on how similar the data channels are? Explain also in terms of eigenvalues and collinearity. Threshold autoregressive (TAR) models allow for the mean of a time series to change along the blocks of data. What would be the advantages of such a model? Try to express an AR(p) process as a state-space model. What kind of transition matrix between the states do you observe? c D. P. Mandic Advanced Signal Processing 61

62 Consider also: Fourier transform as a filtering operation We can see the FT as a convolution of a complex exponential and the data (under a mild assumption of a one-sided h sequence, ranging from 0 to inf) 1) Continuous FT. For a continuous FT F(omega) = int x(t) e^{-j omega t} dt Let us now swap the variables t and tau, and multiply by e^{j omega t}, to give e^{j omega t} int x(tau) e^{-j omega tau} dtau = int x(tau) e^{j omega (t - tau)} dtau = x(t) * e^{j omega t} that is, a convolution with h(t - tau) = e^{j omega (t - tau)} (= x(t) * h(t)) 2) Discrete Fourier transform. For the DFT, we have a filtering operation X(k) = sum_{n=0}^{N-1} x(n) e^{-j (2 pi / N) n k} = x(0) + W[ x(1) + W[ x(2) + W[ ... ]]], where W = e^{-j (2 pi / N) k} a cumulative add and multiply, that is, a recursion with the transfer function (for large N) H(z) = 1/(1 - W z^{-1}) [Diagram: continuous time case: x(t) convolved with e^{j omega t}; discrete time case: the DFT bin computed by the recursive filter 1/(1 - W z^{-1}), or equivalently the second-order form (1 - W z^{-1})/(1 - 2 cos theta_k z^{-1} + z^{-2})] c D. P. Mandic Advanced Signal Processing 62
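This single-bin recursive view of the DFT is the basis of the Goertzel algorithm, whose real-coefficient recursion corresponds to the second-order transfer function above. A minimal sketch (one common formulation, checked against a full FFT):

```python
import numpy as np

def goertzel(x, k):
    """k-th DFT bin of x via the Goertzel recursion
    s[n] = x[n] + 2cos(theta_k) s[n-1] - s[n-2],  theta_k = 2*pi*k/N,
    followed by the complex pick-off X[k] = e^{j theta_k} s[N-1] - s[N-2]."""
    N = len(x)
    theta = 2.0 * np.pi * k / N
    coeff = 2.0 * np.cos(theta)
    s1 = s2 = 0.0
    for xn in x:                                  # one multiply-add per sample
        s1, s2 = xn + coeff * s1 - s2, s1
    return np.exp(1j * theta) * s1 - s2

rng = np.random.default_rng(3)
x = rng.standard_normal(64)
# goertzel(x, 5) agrees with the k = 5 bin of np.fft.fft(x)
```

Goertzel is attractive precisely when only a few bins are needed (e.g. tone detection): it filters the data through one cheap real recursion per bin instead of computing the whole transform.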


Responses of Digital Filters Chapter Intended Learning Outcomes: Responses of Digital Filters Chapter Intended Learning Outcomes: (i) Understanding the relationships between impulse response, frequency response, difference equation and transfer function in characterizing

More information

Damped Oscillators (revisited)

Damped Oscillators (revisited) Damped Oscillators (revisited) We saw that damped oscillators can be modeled using a recursive filter with two coefficients and no feedforward components: Y(k) = - a(1)*y(k-1) - a(2)*y(k-2) We derived

More information

UNIVERSITY OF OSLO. Please make sure that your copy of the problem set is complete before you attempt to answer anything.

UNIVERSITY OF OSLO. Please make sure that your copy of the problem set is complete before you attempt to answer anything. UNIVERSITY OF OSLO Faculty of mathematics and natural sciences Examination in INF3470/4470 Digital signal processing Day of examination: December 9th, 011 Examination hours: 14.30 18.30 This problem set

More information

3. ARMA Modeling. Now: Important class of stationary processes

3. ARMA Modeling. Now: Important class of stationary processes 3. ARMA Modeling Now: Important class of stationary processes Definition 3.1: (ARMA(p, q) process) Let {ɛ t } t Z WN(0, σ 2 ) be a white noise process. The process {X t } t Z is called AutoRegressive-Moving-Average

More information

ECON/FIN 250: Forecasting in Finance and Economics: Section 6: Standard Univariate Models

ECON/FIN 250: Forecasting in Finance and Economics: Section 6: Standard Univariate Models ECON/FIN 250: Forecasting in Finance and Economics: Section 6: Standard Univariate Models Patrick Herb Brandeis University Spring 2016 Patrick Herb (Brandeis University) Standard Univariate Models ECON/FIN

More information

Advanced Digital Signal Processing -Introduction

Advanced Digital Signal Processing -Introduction Advanced Digital Signal Processing -Introduction LECTURE-2 1 AP9211- ADVANCED DIGITAL SIGNAL PROCESSING UNIT I DISCRETE RANDOM SIGNAL PROCESSING Discrete Random Processes- Ensemble Averages, Stationary

More information

Chap 2. Discrete-Time Signals and Systems

Chap 2. Discrete-Time Signals and Systems Digital Signal Processing Chap 2. Discrete-Time Signals and Systems Chang-Su Kim Discrete-Time Signals CT Signal DT Signal Representation 0 4 1 1 1 2 3 Functional representation 1, n 1,3 x[ n] 4, n 2 0,

More information

Econometría 2: Análisis de series de Tiempo

Econometría 2: Análisis de series de Tiempo Econometría 2: Análisis de series de Tiempo Karoll GOMEZ kgomezp@unal.edu.co http://karollgomez.wordpress.com Segundo semestre 2016 III. Stationary models 1 Purely random process 2 Random walk (non-stationary)

More information

1 Linear Difference Equations

1 Linear Difference Equations ARMA Handout Jialin Yu 1 Linear Difference Equations First order systems Let {ε t } t=1 denote an input sequence and {y t} t=1 sequence generated by denote an output y t = φy t 1 + ε t t = 1, 2,... with

More information

x[n] = x a (nt ) x a (t)e jωt dt while the discrete time signal x[n] has the discrete-time Fourier transform x[n]e jωn

x[n] = x a (nt ) x a (t)e jωt dt while the discrete time signal x[n] has the discrete-time Fourier transform x[n]e jωn Sampling Let x a (t) be a continuous time signal. The signal is sampled by taking the signal value at intervals of time T to get The signal x(t) has a Fourier transform x[n] = x a (nt ) X a (Ω) = x a (t)e

More information

Need for transformation?

Need for transformation? Z-TRANSFORM In today s class Z-transform Unilateral Z-transform Bilateral Z-transform Region of Convergence Inverse Z-transform Power Series method Partial Fraction method Solution of difference equations

More information

Biomedical Signal Processing and Signal Modeling

Biomedical Signal Processing and Signal Modeling Biomedical Signal Processing and Signal Modeling Eugene N. Bruce University of Kentucky A Wiley-lnterscience Publication JOHN WILEY & SONS, INC. New York Chichester Weinheim Brisbane Singapore Toronto

More information

E : Lecture 1 Introduction

E : Lecture 1 Introduction E85.2607: Lecture 1 Introduction 1 Administrivia 2 DSP review 3 Fun with Matlab E85.2607: Lecture 1 Introduction 2010-01-21 1 / 24 Course overview Advanced Digital Signal Theory Design, analysis, and implementation

More information

INTRODUCTION Noise is present in many situations of daily life for ex: Microphones will record noise and speech. Goal: Reconstruct original signal Wie

INTRODUCTION Noise is present in many situations of daily life for ex: Microphones will record noise and speech. Goal: Reconstruct original signal Wie WIENER FILTERING Presented by N.Srikanth(Y8104060), M.Manikanta PhaniKumar(Y8104031). INDIAN INSTITUTE OF TECHNOLOGY KANPUR Electrical Engineering dept. INTRODUCTION Noise is present in many situations

More information

DSP. Chapter-3 : Filter Design. Marc Moonen. Dept. E.E./ESAT-STADIUS, KU Leuven

DSP. Chapter-3 : Filter Design. Marc Moonen. Dept. E.E./ESAT-STADIUS, KU Leuven DSP Chapter-3 : Filter Design Marc Moonen Dept. E.E./ESAT-STADIUS, KU Leuven marc.moonen@esat.kuleuven.be www.esat.kuleuven.be/stadius/ Filter Design Process Step-1 : Define filter specs Pass-band, stop-band,

More information

Lab 4: Quantization, Oversampling, and Noise Shaping

Lab 4: Quantization, Oversampling, and Noise Shaping Lab 4: Quantization, Oversampling, and Noise Shaping Due Friday 04/21/17 Overview: This assignment should be completed with your assigned lab partner(s). Each group must turn in a report composed using

More information

Problem Set 1 Solution Sketches Time Series Analysis Spring 2010

Problem Set 1 Solution Sketches Time Series Analysis Spring 2010 Problem Set 1 Solution Sketches Time Series Analysis Spring 2010 1. Construct a martingale difference process that is not weakly stationary. Simplest e.g.: Let Y t be a sequence of independent, non-identically

More information

6. Methods for Rational Spectra It is assumed that signals have rational spectra m k= m

6. Methods for Rational Spectra It is assumed that signals have rational spectra m k= m 6. Methods for Rational Spectra It is assumed that signals have rational spectra m k= m φ(ω) = γ ke jωk n k= n ρ, (23) jωk ke where γ k = γ k and ρ k = ρ k. Any continuous PSD can be approximated arbitrary

More information

Some Time-Series Models

Some Time-Series Models Some Time-Series Models Outline 1. Stochastic processes and their properties 2. Stationary processes 3. Some properties of the autocorrelation function 4. Some useful models Purely random processes, random

More information

Discrete-Time Signals and Systems. Frequency Domain Analysis of LTI Systems. The Frequency Response Function. The Frequency Response Function

Discrete-Time Signals and Systems. Frequency Domain Analysis of LTI Systems. The Frequency Response Function. The Frequency Response Function Discrete-Time Signals and s Frequency Domain Analysis of LTI s Dr. Deepa Kundur University of Toronto Reference: Sections 5., 5.2-5.5 of John G. Proakis and Dimitris G. Manolakis, Digital Signal Processing:

More information