1 Spectral Estimation

Spectral Analysis - determine, from a finite set of samples, the power in each harmonic of a signal. For non-periodic signals, use the Power Spectral Density.

Bank of Band-Pass Filters.

Spectral Resolution - a measure of how close in frequency two sinusoids can be before they merge.

Spectral Quality - a measure of how good a spectral estimate is in terms of its mean and variance:

    Q = (E[S_xx(f)])^2 / var[S_xx(f)]

It can be improved by increasing the number of samples, M. Resolution and quality cannot both be improved at the same time for a given M: if resolution is improved by reducing the BPF bandwidth, the BPF impulse response gets longer, so fewer unbiased samples are available for estimation.
2 Spectral Estimators: Parametric or Non-parametric

Non-parametric: require no assumptions about the data except (quasi-)stationarity.

Parametric: based on a model of the data, so their application is more restrictive. Advantage: when applicable, they give a more accurate spectral estimate and, for fixed M, can reduce bias and variance thanks to the a-priori knowledge in the model.

Power Spectral Density: S_xx(ω) is defined as the Fourier transform of the auto-correlation R_xx(m) = E[x(k)x(k+m)]:

    S_xx(ω) = \sum_{m=-\infty}^{\infty} R_xx(m) e^{-jωmT}    ...(1)    (Wiener-Khinchine theorem)

R_xx can be recovered from S_xx by finding its Fourier coefficients:

    R_xx(m) = (T/2π) \int_0^{2π/T} S_xx(ω) e^{jωmT} dω
3 Classic Spectral Analysis

Eq (1) requires the infinite set R_xx(m) to calculate S_xx(ω), but only a set of M samples is available. One approach is to form the M-point DFT:

    X_M(ω) = \sum_{k=0}^{M-1} x(k) e^{-jωkT}

An estimate of the PSD is then formed by dividing the energy at each frequency by the number of data points - the Periodogram:

    S_xx(ω) = |X_M(ω)|^2 / M
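As an illustrative numpy sketch (function name is an assumption, not from the notes), the periodogram of an M-point record is one FFT and a scaling:

```python
import numpy as np

def periodogram(x):
    """Periodogram estimate S_xx(f) = |X_M(f)|^2 / M on the DFT grid."""
    M = len(x)
    return np.abs(np.fft.rfft(x)) ** 2 / M

# A sinusoid sitting exactly on DFT bin 32 of a 256-point record
M = 256
k = np.arange(M)
x = np.cos(2 * np.pi * 32 * k / M)
S = periodogram(x)
print(np.argmax(S))   # peak at frequency index 32
```

For an on-bin sinusoid the peak value is (M/2)^2 / M = M/4; off-bin frequencies leak into neighbouring bins, which is the subject of the next slide.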
4 Leakage and Windowing

In practice only x(k), 0 ≤ k ≤ N-1, is available, so effectively

    x̃(k) = x(k) w(k),    w(k) = 1 for 0 ≤ k ≤ N-1, = 0 otherwise

    X̃(f) = X(f) * W(f) = \int_{-0.5}^{0.5} X(α) W(f-α) dα

Sidelobes of W(f) cause leakage.

E.g.: a signal with X(f) = 1 for |f| < 0.1, = 0 otherwise, is windowed by a rectangular window of length N = 61:

    W(f) = \sum_{k=0}^{N-1} e^{-j2πfk} = (1 - e^{-jωN}) / (1 - e^{-jω}) = e^{-jω(N-1)/2} sin(ωN/2) / sin(ω/2)

Width of the main lobe: Δω = 4π/61, or Δf = 2/61 << width of X(f).
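The main-lobe figures above can be checked numerically; this illustrative numpy snippet evaluates |W(f)| = |sin(πfN)/sin(πf)| for N = 61:

```python
import numpy as np

N = 61

def W_mag(f):
    """Magnitude of the rectangular-window spectrum, |sin(pi f N) / sin(pi f)|."""
    return abs(np.sin(np.pi * f * N) / np.sin(np.pi * f))

peak = W_mag(1e-9)            # -> N as f -> 0
first_zero = W_mag(1.0 / N)   # 0 at f = 1/N, so the main lobe spans 2/N = 2/61
print(peak, first_zero)
```

The first spectral zero at f = 1/N confirms the full main-lobe width Δf = 2/N quoted above.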
5 Power Spectral Density

For a signal x(t), the average power is:

    P_x = lim_{T→∞} (1/2T) \int_{-T}^{T} x^2(t) dt = \int_{-\infty}^{\infty} S_xx(f) df

    S_xx(f) = F[R_xx(τ)] = \int_{-\infty}^{\infty} R_xx(τ) e^{-j2πfτ} dτ

Properties of the P.S.D.:
1. S_xx is real and non-negative.
2. The average power in x(t) is R_xx(0) = \int S_xx(f) df.
3. For x(t) real, S_xx(-f) = S_xx(f).

P.S.D. of random sequences:

    S_xx(f) = \sum_k R_xx(k) e^{-j2πfk},    -1/2 < f < 1/2 if T_s = 1

E.g. a random binary waveform with R_xx(τ) = 1 - |τ|/T for |τ| < T, = 0 elsewhere:

    S_xx(f) = F[R_xx(τ)] = T [sin(πfT) / (πfT)]^2
6 M.F. Spectral Estimation

    Ŝ_xx(f) = \sum_{k=-(N-1)}^{N-1} R̂_xx(k) e^{-j2πfk},    -1/2 < f < 1/2 for T_s = 1

where

    R̂_xx(k) = (1/(N-k)) \sum_{i=0}^{N-k-1} x(i) x(i+k),    k = 0, 1, ... N-1

1. With N data points we can estimate R_xx only for |k| < N.
2. As k → N, fewer points are used to obtain R̂_xx, so the variance increases as k → N.

Mean: E[R̂_xx(k)] = R_xx(k) for |k| < N, so the estimator is unbiased for k < N.

    var[R̂_xx(k)] ≈ σ_x^4 / (N-k)

for a Gaussian process whose variance is σ_x^2. As k → N, var[R̂_xx] increases.

The estimate Ŝ_xx has 2 main disadvantages:
1. Large computation is needed for R̂_xx.
2. var[Ŝ_xx] can be large.
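A direct numpy sketch of the unbiased estimator (names illustrative); for unit-variance white noise the true R_xx(k) is δ(k), which the estimate reproduces:

```python
import numpy as np

def acf_unbiased(x, maxlag):
    """R_hat(k) = (1/(N-k)) * sum_{i=0}^{N-k-1} x(i) x(i+k)."""
    N = len(x)
    return np.array([x[: N - k] @ x[k:] / (N - k) for k in range(maxlag + 1)])

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)      # white noise, sigma_x^2 = 1
R = acf_unbiased(x, 20)
print(R[0])                        # ≈ 1, while R[k] ≈ 0 for k > 0
```

Note the divisor N-k: it makes the estimate unbiased, at the price of growing variance as k approaches N, exactly as stated above.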
7 Periodogram

Another estimate for R_xx is

    R̄_xx(k) = (1/N) \sum_{i=0}^{N-k-1} x(i) x(i+k),    k = 0, 1, ... N-1
             = ((N-k)/N) R̂_xx(k)

Define d(n) = 1 for n = 0 ... N-1, = 0 otherwise. Then

    R̄_xx(k) = (1/N) \sum_{n=-\infty}^{\infty} [d(n)x(n)] [d(n+k)x(n+k)]

    Ŝ_xx(f) = F[R̄_xx(k)] = \sum_k R̄_xx(k) e^{-j2πkf}
            = (1/N) \sum_k [ \sum_n [d(n)x(n)][d(n+k)x(n+k)] ] e^{-j2πkf}
            = (1/N) [ \sum_n d(n)x(n) e^{+j2πnf} ] [ \sum_m d(m)x(m) e^{-j2πmf} ]
            = (1/N) X*(f) X(f) = |X(f)|^2 / N    -  the Periodogram
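The identity Ŝ_xx(f) = |X(f)|^2 / N can be verified numerically; this illustrative numpy snippet transforms the biased ACF on a 2N-point grid and compares it with the zero-padded periodogram:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
x = rng.standard_normal(N)

# Biased estimate R_bar(k) = (1/N) sum_i x(i) x(i+k), k = 0..N-1 (even in k)
R = np.array([x[: N - k] @ x[k:] / N for k in range(N)])

# Evaluate sum_{k=-(N-1)}^{N-1} R_bar(k) e^{-j2pi f k} on a 2N-point frequency grid
L = 2 * N
f = np.arange(L) / L
S_acf = np.array([R[0] + 2 * (R[1:] * np.cos(2 * np.pi * fi * np.arange(1, N))).sum()
                  for fi in f])

# ... and compare with the periodogram |X(f)|^2 / N on the same grid
S_per = np.abs(np.fft.fft(x, L)) ** 2 / N
print(np.max(np.abs(S_acf - S_per)))   # agrees to round-off
```

The two estimates coincide at every frequency, confirming that the periodogram is exactly the Fourier transform of the biased autocorrelation estimate.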
8 Bias of the Periodogram

    E[Ŝ_xx(f)] = \sum_k E[R̄_xx(k)] e^{-j2πkf}
               = \sum_k R_xx(k) [ \sum_n d(n) d(n+k) / N ] e^{-j2πkf}
               = \sum_k w_N(k) R_xx(k) e^{-j2πkf}

where w_N(k) = \sum_n d(n)d(n+k)/N = 1 - |k|/N for |k| < N. So

    E[Ŝ_xx(f)] = F[w_N(k) R_xx(k)] = \int_{-1/2}^{1/2} S_xx(α) W_N(f-α) dα

where

    W_N(f) = F[w_N(k)] = (1/N) [ sin(πfN) / sin(πf) ]^2

Variance of the periodogram:

    var[Ŝ_xx(f)] ≈ S_xx^2(f) [ 1 + ( sin(2πfN) / (N sin 2πf) )^2 ]  →  S_xx^2(f) as N → ∞

Quality factor of the periodogram:

    Q_P = (E[Ŝ_xx(f)])^2 / var[Ŝ_xx(f)] = S_xx^2 / S_xx^2 = 1
9 Variance of the Periodogram (contd.)

For a zero-mean, white Gaussian sequence with variance σ^2:

    var[Ŝ_xx(f)] = σ^4

The normalised standard error is

    ε = sqrt(var[Ŝ_xx(f)]) / S_xx(f) = σ^2 / σ^2 = 1

i.e. var[Ŝ_xx(f)] does NOT → 0 as N → ∞.

Trade-off between bias and variance: Ŝ_xx(f) = F[R̄_xx(k)], so examine R̄_xx(k), which estimates R_xx(k) for k = 0, 1, ... M. When M << N we get good estimates of R_xx(k) for k = 0, 1, ... M, but the bias of Ŝ_xx(f) is larger since R_xx is truncated for k > M. As M → N, the bias of Ŝ_xx(f) decreases (less truncation of R̄_xx(k)) but var[R̄_xx(k)] increases for k → N, as fewer points are used in the estimator.
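That the normalised standard error ε stays near 1 however large N grows can be seen by simulation (illustrative numpy sketch, names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def norm_std_err(N, trials=2000):
    """Normalised standard error of the periodogram at one interior bin."""
    vals = np.empty(trials)
    for t in range(trials):
        x = rng.standard_normal(N)                 # white noise, sigma^2 = 1
        vals[t] = np.abs(np.fft.fft(x)[N // 4]) ** 2 / N
    return vals.std() / vals.mean()

e_small, e_large = norm_std_err(64), norm_std_err(1024)
print(e_small, e_large)    # both stay near 1: the variance does not shrink with N
```

The periodogram value at an interior bin is (approximately) exponentially distributed with mean S_xx(f), so its standard deviation equals its mean regardless of N; averaging across frequency or across segments, as in the next slides, is what actually reduces the variance.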
10 Smoothing of Spectral Estimates

1. Bartlett Method - Averaging Periodograms: take N (>> 1) samples x(0) ... x(N-1), divide them into n sections of N/n points each, form the n periodograms Ŝ_xx(f)_k, and average them to form the averaged spectral estimator

    S̄_xx(f) = (1/n) \sum_{k=1}^{n} Ŝ_xx(f)_k

If the Ŝ_xx(f)_k are independent, then var[S̄_xx(f)] is reduced by a factor n:

    var[S̄_xx(f)] = (1/n^2) \sum_{k=1}^{n} var[Ŝ_xx(f)_k] = (1/n) var[Ŝ_xx(f)]

But since fewer points are used for each Ŝ_xx(f)_k, W_{N/n}(f) will be n times wider than W_N(f), so the bias will be larger.

    Q_B = (E[S̄_xx(f)])^2 / var[S̄_xx(f)] = n,    N → ∞

Frequency resolution of the Bartlett estimate: Δf = 0.9/M with M = N/n, so

    Q_B = n = N/M = N Δf / 0.9 = 1.1 N Δf
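A minimal numpy sketch of the Bartlett method (names illustrative), using white noise so the variance reduction is easy to see:

```python
import numpy as np

def bartlett_psd(x, n):
    """Average the periodograms of n non-overlapping segments."""
    M = len(x) // n
    segs = x[: n * M].reshape(n, M)
    return np.mean(np.abs(np.fft.fft(segs, axis=1)) ** 2 / M, axis=0)

rng = np.random.default_rng(3)
x = rng.standard_normal(4096)                       # white noise: true S_xx(f) = 1
S_single = np.abs(np.fft.fft(x[:256])) ** 2 / 256   # one 256-point periodogram
S_avg = bartlett_psd(x, 16)                         # sixteen averaged 256-point periodograms
print(S_single.var(), S_avg.var())                  # averaging cuts the variance by about n
```

Both estimates have mean 1 across the bins, but the bin-to-bin scatter of the averaged estimate is roughly 1/16 of the single periodogram's, as the (1/n) var[Ŝ_xx(f)] formula predicts.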
11 2. Welch Method: Average Modified Periodograms

Two modifications to the Bartlett method:
1. Overlap the data segments: x_i(k) = x(k + iD), k = 0, 1 ... M-1; i = 0, 1 ... L-1.
2. Window the data segments prior to computing the periodogram, giving the modified periodogram

    S̃_xx(f)_i = (1/MU) | \sum_{k=0}^{M-1} x_i(k) w(k) e^{-j2πkf} |^2

where U is a normalisation factor for the power in the window:

    U = (1/M) \sum_{k=0}^{M-1} w^2(k)

The Welch estimate is the average of these periodograms:

    S_xx^W(f) = (1/L) \sum_{i=0}^{L-1} S̃_xx(f)_i

Mean:

    E[S_xx^W(f)] = (1/L) \sum_{i=0}^{L-1} E[S̃_xx(f)_i] = E[S̃_xx(f)_i] = \int_{-1/2}^{1/2} S_xx(α) W(f-α) dα

where

    W(f) = (1/MU) | \sum_{k=0}^{M-1} w(k) e^{-j2πkf} |^2

As N → ∞ and M → ∞, E[S_xx^W(f)] → S_xx(f).
12 Variance of the Welch Estimate

    var[S_xx^W(f)] = (1/L^2) \sum_{i=0}^{L-1} \sum_{j=0}^{L-1} E[S̃_xx(f)_i S̃_xx(f)_j] - {E[S_xx^W(f)]}^2

For no overlap (D = M, L = n):

    var[S_xx^W(f)] = (1/L) var[S̃_xx(f)_i] ≈ (1/L) S_xx^2(f),    Q_W = L = N/M

For 50% overlap (D = M/2, L = 2n) with a triangular window:

    var[S_xx^W(f)] = (9/8L) S_xx^2(f),    Q_W = 8L/9 = 16N/9M

The spectral width of the triangular window at the 3 dB points is Δf = 1.28/M, so

    Q_W = 0.78 N Δf    for no overlap
        = 1.39 N Δf    for 50% overlap + triangular window
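A minimal numpy sketch of the Welch estimate with a triangular window and 50% overlap (names illustrative; note the window-power normalisation U from the previous slide):

```python
import numpy as np

def welch_psd(x, M, D):
    """Welch estimate: triangular-windowed, overlapped, modified periodograms."""
    w = np.bartlett(M)                      # triangular window
    U = np.mean(w ** 2)                     # power normalisation factor
    starts = range(0, len(x) - M + 1, D)
    P = [np.abs(np.fft.fft(x[s:s + M] * w)) ** 2 / (M * U) for s in starts]
    return np.mean(P, axis=0)

rng = np.random.default_rng(4)
x = rng.standard_normal(8192)               # white noise: true S_xx(f) = 1
S = welch_psd(x, M=256, D=128)              # 50% overlap
print(S.mean())                             # ≈ 1, as the normalisation intends
```

Without the 1/(MU) scaling the estimate would be biased down by the window's power loss; with it, the mean level matches the true PSD.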
13 Blackman-Tukey Estimate

Window the autocorrelation sequence, then take the Fourier transform:

    S_xx^B(f) = \sum_{m=-(M-1)}^{M-1} r_xx(m) w(m) e^{-j2πmf} = \int_{-1/2}^{1/2} Ŝ_xx(α) W_M(f-α) dα

The effect of windowing r_xx(m) is to smooth S_xx, thus decreasing the variance but reducing the resolution.

    E[S_xx^B(f)] = \int_{-1/2}^{1/2} E[Ŝ_xx(α)] W_M(f-α) dα
                 = \int_{-1/2}^{1/2} \int_{-1/2}^{1/2} S_xx(θ) W_B(α-θ) W_M(f-α) dα dθ
                 ≈ \int_{-1/2}^{1/2} S_xx(θ) W_M(f-θ) dθ    if M << N

    var[S_xx^B(f)] ≈ (1/N) \int_{-1/2}^{1/2} S_xx^2(α) W_M^2(f-α) dα
                   ≈ S_xx^2(f) [ (1/N) \int_{-1/2}^{1/2} W_M^2(f-θ) dθ ]    if W_M(f) is narrow c.f. S_xx(f)
14 Lag Windows

Multiplying the autocorrelation by w(k) and taking the Fourier transform:

    E[S̄_xx(f)] = \int_{-1/2}^{1/2} S_xx(α) W_M(f-α) dα,    W_M = F[w(k)]

Rectangular: w(k) = 1 for |k| ≤ M, = 0 for |k| > M

    W_M(f) = sin[(2M+1)πf] / sin(πf)

Bartlett: w(k) = 1 - |k|/M for |k| ≤ M, = 0 for |k| > M

    W_M(f) = (1/M) [sin(Mπf) / sin(πf)]^2

Blackman-Tukey: w(k) = 0.5 (1 + cos(πk/M)) for |k| ≤ M, = 0 for |k| > M

    W_M(f) = 0.25 [D_M(2πf - π/M) + D_M(2πf + π/M)] + 0.5 D_M(2πf)
15 Quality of the B-T Estimate

Mean:

    E[S_xx^BT(f)] ≈ \int_{-1/2}^{1/2} S_xx(θ) W_M(f-θ) dθ → S_xx(f) as N → ∞

i.e. asymptotically unbiased. Variance:

    var[S_xx^B(f)] ≈ S_xx^2(f) [ (1/N) \sum_{m=-(M-1)}^{M-1} w^2(m) ]
                   = S_xx^2(f) (2M/N)     for a rectangular window
                   = S_xx^2(f) (2M/3N)    for a triangular window

    Q_BT = 1.5 N/M for a triangular window

As the window length is 2M-1, Δf = 1.28/(2M) = 0.64/M, so

    Q_BT = 1.5 N Δf / 0.64 = 2.34 N Δf

    Estimate:        Bartlett     Welch (50% overlap)   Blackman-Tukey
    Quality factor:  1.11 N Δf    1.39 N Δf             2.34 N Δf
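A minimal numpy sketch of the Blackman-Tukey procedure with a triangular lag window (names illustrative), checked on white noise where the true PSD is flat:

```python
import numpy as np

def blackman_tukey(x, M):
    """Window the biased ACF to lags |m| < M with a triangular lag window,
    then Fourier-transform it on a 2M-point frequency grid."""
    N = len(x)
    r = np.array([x[: N - k] @ x[k:] / N for k in range(M)])
    w = 1.0 - np.arange(M) / M                 # one-sided triangular lag window
    rw = r * w
    f = np.arange(2 * M) / (2 * M)
    return np.array([rw[0] + 2 * (rw[1:] * np.cos(2 * np.pi * fi * np.arange(1, M))).sum()
                     for fi in f])

rng = np.random.default_rng(5)
x = rng.standard_normal(4096)                  # white noise: true S_xx(f) = 1
S = blackman_tukey(x, 32)
print(S.mean())                                # ≈ 1
```

With M = 32 << N = 4096 the estimate is smooth (low variance) at the cost of resolution 0.64/M, as the quality-factor table above quantifies.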
16 Parametric Spectral Estimation

Model-free methods: simple, efficiently computed via the FFT, but they need long data records for good frequency resolution and they suffer from spectral leakage. Basic limitation: they implicitly assume r_xx(m) = 0 for |m| ≥ N, from

    S_xx(f) = \sum_{m=-(N-1)}^{N-1} r_xx(m) e^{-j2πfm}

Model-based methods don't need this assumption, so they avoid spectral leakage and give better resolution with less data - if the model fits well. They are based on modelling the data x(k) as the output of

    H(z) = B(z)/A(z) = [ \sum_{i=0}^{q} b_i z^{-i} ] / [ 1 + \sum_{i=1}^{p} a_i z^{-i} ]    ...(1)

    x(k) = \sum_{i=0}^{q} b_i w(k-i) - \sum_{i=1}^{p} a_i x(k-i)    ...(2)

Then the PSD is S_xx(f) = |H(f)|^2 S_ww(f). Assume w(k) is white noise with r_ww(m) = σ_w^2 δ(m). Then

    S_xx(f) = σ_w^2 |H(f)|^2 = σ_w^2 |B(f)|^2 / |A(f)|^2    ...(3)

To find S_xx(f):
1. From x(k), estimate the {a_i} and {b_i} of the model.
2. From these estimates, compute S_xx(f) from (3).
17 Types of Model

x(k) generated by the pole-zero model (1) is called an ARMA process of order (p,q). If q = 0, H(z) = b_0/A(z) and x(k) is an AR process. If A(z) = 1, H(z) = B(z) and x(k) is an MA process. Any ARMA, AR, or MA process can be represented by an AR or MA model (in general of higher, possibly infinite, order).

AR Spectral Analysis

Passing x(k) through the inverse (whitening) filter gives white noise e(k):

    S_ee(f) = σ_e^2 = |H^{-1}(f)|^2 S_xx(f),    H^{-1}(z) = 1 + \sum_{i=1}^{p} a_i z^{-i},  z = e^{j2πfT}

The whitening filter H^{-1}(z) is a linear predictor:
18 The AR Normal Equations

The a_i are chosen to minimise the MSE E[e^2(k)], which leads to the ACN (autocorrelation normal) equations:

    Φ_xx a = φ_xx

where Φ_xx = E[x(k-1) x^T(k-1)] and φ_xx = E[x(k) x(k-1)].

Minimum MSE: σ_e^2 = E[x^2(k)] - a^T φ_xx.

To perform AR analysis directly on the data, use the sum-of-squared-errors cost function:

    ρ = \sum_k [x(k) - a^T x(k-1)]^2

The set of a which minimises ρ is provided by the deterministic form of the ACN equations:

    R_x a = r_x

Choosing the limits for k makes assumptions about x(k) before x(0) and after x(N-1), which affects the accuracy of a. E.g. assume k starts at 0. Then e(0) = x(0) - a^T x(-1) = x(0), which gives no information about a; yet e(0) is weighted equally with errors from the middle of x(k). Similarly e(1) = x(1) - a^T x(0) = x(1) - a_1 x(0), which gives information only about a_1. It is only when e(p) is considered that no assumptions about the data have to be made.
19 AR Windows

Errors at the ends of the data cause similar effects for the possible windowing variations, which lead to four possible estimates of a:

                        Lower limit    Upper limit
    autocorrelation     k = 0          N+p-1
    pre-windowed        k = 0          N-1
    post-windowed       k = p          N+p-1
    covariance          k = p          N-1

Despite its inaccuracy, the autocorrelation form is popular in real-time systems because R_x is Toeplitz.

The minimum squared error ρ is used to get an estimate σ̂_e^2 of the MSE by dividing ρ by the number of errors. For the covariance form:

    σ̂_e^2 = [ \sum_{k=p}^{N-1} x^2(k) - r_x^T a ] / (N-p)

The AR(p) spectral estimate is

    S_xx(f) = σ̂_e^2 / | 1 + \sum_{i=1}^{p} a_i e^{-jiωT} |^2
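A hedged numpy sketch of the autocorrelation form (names illustrative). Note the sign convention: the a here are predictor coefficients solving R a = r, so the spectrum denominator uses 1 - Σ a_i e^{-j2πfi} rather than the 1 + Σ form of the model polynomial:

```python
import numpy as np

def ar_fit_autocorr(x, p):
    """Autocorrelation-form AR(p) fit: solve the Toeplitz system R a = r."""
    N = len(x)
    r = np.array([x[: N - k] @ x[k:] / N for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:])      # predictor coefficients
    sigma2 = r[0] - a @ r[1:]          # minimum squared prediction error
    return a, sigma2

def ar_psd(a, sigma2, nf=512):
    """S_xx(f) = sigma2 / |1 - sum_i a_i e^{-j2pi f i}|^2 (predictor convention)."""
    f = np.arange(nf) / (2 * nf)       # 0 .. 1/2
    E = np.exp(-2j * np.pi * np.outer(f, np.arange(1, len(a) + 1)))
    return sigma2 / np.abs(1 - E @ a) ** 2

# AR(1) data x(k) = 0.9 x(k-1) + w(k): the fit should recover a_1 ≈ 0.9
rng = np.random.default_rng(6)
w = rng.standard_normal(20000)
x = np.empty_like(w)
x[0] = w[0]
for k in range(1, len(w)):
    x[k] = 0.9 * x[k - 1] + w[k]
a, s2 = ar_fit_autocorr(x, 1)
S = ar_psd(a, s2)
print(a[0], s2, np.argmax(S))   # ≈ 0.9, ≈ 1, peak at f = 0
```

The Toeplitz structure of R is what the slide refers to: in real-time systems it allows the Levinson recursion instead of a general solver.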
20 AR Model Order

If p is too low, the spectrum is over-smoothed and cannot separate close peaks. If p is too high, the spectrum may have spurious peaks and the model is inefficient.

σ_e^2 decreases as the order of the AR model increases. The rate of decrease can be monitored and p chosen when the rate of decrease becomes slow, but this is imprecise. Other methods for selecting p:

1. Final Prediction Error: the order is selected to minimise

    FPE(p) = σ̂_e^2 (N + p + 1) / (N - p - 1)

2. Akaike Information Criterion: the order is selected to minimise

    AIC(p) = ln σ̂_e^2 + 2p/N

3. Criterion Autoregressive Transfer: the order is selected to minimise

    CAT(p) = (1/N) [ \sum_{j=1}^{p} (N-j)/σ̂_{ej}^2 - (N-p)/σ̂_{ep}^2 ]
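An illustrative numpy sketch of order selection by AIC on synthetic AR(2) data (the helper names and test signal are assumptions, not from the notes):

```python
import numpy as np

def ar_error_power(x, p):
    """Prediction-error power of an autocorrelation-form AR(p) fit."""
    N = len(x)
    r = np.array([x[: N - k] @ x[k:] / N for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:])
    return r[0] - a @ r[1:]

def aic(x, p):
    return np.log(ar_error_power(x, p)) + 2 * p / len(x)

# AR(2) data: AIC falls sharply from p = 1 to p = 2, then flattens
rng = np.random.default_rng(7)
w = rng.standard_normal(10000)
x = np.empty_like(w)
x[0], x[1] = w[0], w[1]
for k in range(2, len(w)):
    x[k] = 1.2 * x[k - 1] - 0.6 * x[k - 2] + w[k]
scores = {p: aic(x, p) for p in range(1, 8)}
best = min(scores, key=scores.get)
print(best)    # at, or occasionally just above, the true order 2
```

The penalty term 2p/N is what stops the monotone decrease of ln σ̂_e^2 from always favouring the largest p.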
21 MA Model

    S_xx(z) = σ_w^2 H(z) H(z^{-1}) = σ_w^2 B(z) B(z^{-1}) / [A(z) A(z^{-1})] = σ_w^2 D(z)/C(z)

with D(z) = \sum_{m=-q}^{q} d_m z^{-m} and C(z) = \sum_{m=-p}^{p} c_m z^{-m}, where

    d_m = \sum_{k=0}^{q-|m|} b_k b_{k+|m|},    |m| ≤ q

When a_1 = a_2 = ... = a_p = 0 it can be shown that

    r_xx(m) = σ_w^2 d_m for |m| ≤ q,    = 0 for |m| > q

and the PSD for an MA(q) process is

    S_xx^MA(f) = \sum_{m=-q}^{q} r_xx(m) e^{-j2πfm}

i.e. a B-T modified periodogram. We don't need to solve for the b_i to get S_xx^MA(f), which is identical to the non-parametric PSD estimate. Since r_xx(m) = 0 for |m| > q, the order of the MA model may be determined by checking whether r_xx(m) ≈ 0 for large lags. If this is not the case, the MA model will give poor frequency resolution.
22 ARMA Model

Multiply Eq (2) by x*(k-m) and take expected values:

    E[x(k) x*(k-m)] = \sum_{n=0}^{q} b_n E[w(k-n) x*(k-m)] - \sum_{n=1}^{p} a_n E[x(k-n) x*(k-m)]

    r_xx(m) = - \sum_{n=1}^{p} a_n r_xx(m-n) + \sum_{n=0}^{q} b_n r_wx(m-n)

where

    r_wx(m) = σ_w^2 h*(-m) for m ≤ 0,    = 0 for m > 0

so

    r_xx(m) = - \sum_{n=1}^{p} a_n r_xx(m-n)                                           m > q
            = - \sum_{n=1}^{p} a_n r_xx(m-n) + σ_w^2 \sum_{n=0}^{q-m} b_{n+m} h*(n)    0 ≤ m ≤ q
            = r_xx*(-m)                                                                m < 0

The coefficients a_n for m > q are found as in the AR case. Then A(z) = 1 + \sum_{n=1}^{p} a_n z^{-n}, which can filter x(k) to give

    v(k) = x(k) + \sum_{n=1}^{p} a_n x(k-n)

Then apply the MA method to v(k) to obtain

    S_vv^MA(f) = \sum_{m=-q}^{q} r_vv(m) e^{-j2πfm}

    S_xx^ARMA(f) = S_vv^MA(f) / | 1 + \sum_{n=1}^{p} a_n e^{-j2πfn} |^2
23 Maximum Likelihood Spectral Estimation

Model the spectrum analyser as a tunable filter: to find the power at frequency ω, tune the filter to ω and measure the output power. The filter must meet 2 criteria:
1. If the signal consists of nothing but a sinusoid at frequency ω, output power = input power.
2. Any signal at other frequencies must make the minimum possible contribution to the output power (Minimum Variance).

The filter output is y(n) = \sum_{k=0}^{N-1} a_k x(n-k). Assume the input consists of a complex sinusoid at frequency ω plus other components:

    x(n) = A e^{jωn} + p(n)

The MV design criteria may be written:
1. If p(n) = 0, then y(n) = x(n).
2. Otherwise, the output power of the filter is to be a minimum.
24 Consider criterion 1. It means that

    \sum_{k=0}^{N-1} a_k A e^{jω(n-k)} = A e^{jωn}

Dividing by A e^{jωn}:

    \sum_{k=0}^{N-1} a_k e^{-jωk} = 1    ...(3)

Using vectors

    a = [a_0, a_1, ... a_{N-1}]^T,    e = [1, e^{jω}, e^{j2ω}, ... e^{j(N-1)ω}]^T

(3) becomes e^H a = 1 (^H denotes conjugate transpose).

Consider criterion 2. The variance is σ^2 = E[|y(n)|^2]

    = E[ \sum_{i=0}^{N-1} a_i^* x^*(n-i) \sum_{k=0}^{N-1} a_k x(n-k) ]

Interchanging the order of summation and expectation:

    σ^2 = \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} a_i^* a_k E[x^*(n-i) x(n-k)] = \sum_i \sum_k a_i^* a_k r_{i-k} = a^H R a
25 We need to minimise σ^2 = a^H R a subject to e^H a = 1.

Using Lagrange's method:

    F(a) = a^H R a - λ e^H a

    dF/da = [2 a^H R - λ e^H]^T = 0    →    a = (λ/2) R^{-1} e

Use the constraint to find λ:

    e^H a = (λ/2) e^H R^{-1} e = 1    →    λ = 2 / (e^H R^{-1} e)

    a = R^{-1} e / (e^H R^{-1} e)

For these a, σ^2 is the estimate of the spectral power at ω:

    σ^2 = a^H R a = (e^H R^{-1} R R^{-1} e) / (e^H R^{-1} e)^2 = 1 / (e^H R^{-1} e) = P_ML(ω)

e^H R^{-1} e can be evaluated by multiplying the elements along each diagonal of R^{-1} by the same factor e^{-jiω} e^{jkω} = e^{j(k-i)ω}. If n = k-i, then

    e^H R^{-1} e = \sum_{n=-(N-1)}^{N-1} ρ_n e^{jnω}

where ρ_n is the sum of all the elements of R^{-1} along diagonal n. This can be computed with a DFT, so we don't have to evaluate 1/(e^H R^{-1} e) separately for every frequency of interest.
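A hedged numpy sketch of the resulting estimator (names illustrative), building R from sample autocorrelations and evaluating P_ML(ω) = 1/(e^H R^{-1} e) on a frequency grid:

```python
import numpy as np

def capon_psd(x, N, freqs):
    """P_ML(w) = 1 / (e^H R^{-1} e) with an N x N sample autocorrelation matrix."""
    M = len(x)
    r = np.array([x[: M - k] @ x[k:] / M for k in range(N)])
    R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
    Rinv = np.linalg.inv(R)
    P = []
    for f in freqs:
        e = np.exp(2j * np.pi * f * np.arange(N))   # steering vector [1, e^{jw}, ...]
        P.append(1.0 / np.real(np.conj(e) @ Rinv @ e))
    return np.array(P)

# A sinusoid at f = 0.2 in white noise: P_ML peaks at 0.2
rng = np.random.default_rng(8)
n = np.arange(4096)
x = np.cos(2 * np.pi * 0.2 * n) + 0.5 * rng.standard_normal(len(n))
freqs = np.linspace(0.05, 0.45, 401)
P = capon_psd(x, 16, freqs)
print(freqs[np.argmax(P)])   # ≈ 0.2
```

For speed over a dense grid one would use the diagonal-sum/DFT evaluation described above instead of the per-frequency quadratic form.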
26 Maximum Entropy Spectral Estimation

Traditional methods are not suited to finite data: windowing assumes the data outside the window is 0 - "an affront to common sense" [Burg]. Burg's ME method starts with the known data and gets everything needed from that.

Entropy is a measure of uncertainty. Since we want to leave the signal free (unconstrained) outside the window, we maximise the entropy of the power spectrum: start with an expression for the entropy of the spectrum and find the spectrum that maximises it, subject to the condition that the spectrum be consistent with the known autocorrelations.
27 Entropy

Let x be a discrete random variable and p(x_i) the probability that x takes the value x_i. The entropy of x is

    H(x) = - \sum_{x_i} p(x_i) log p(x_i)

E.g. if x can take only 2 values, x_0 and x_1, then

    H(x) = -p log p - (1-p) log(1-p),    where p = p(x_0)

H(x) has the following properties:
1. H(x) is a continuous function of the p(x_i).
2. H(x) is non-negative, and is 0 only when p(x_i) = 0 for all values of x_i but one.
3. H(x) is maximum when all values are equally likely.
4. H(x) increases with N if x can take on N different values, all equally likely.
5. H(x) is additive in the following sense: suppose we partition the possible values of x into 2 sets A and B. Then finding the value of x can be divided into 2 steps: determine whether x is in A or B; then determine the actual value of x given that it is known to be in A (or B). The entropies of these 2 steps will sum to H(x).
28 Continuous Entropy

If x is a continuous random variable with pdf f(x), then the entropy is

    H(x) = - \int f(x) log f(x) dx    ...(2)

The base of the log gives the unit of entropy: for a discrete r.v., base 2 and the unit is the bit; for a continuous r.v., use natural logs and the unit is the nat.

For a continuous r.v. with standard deviation σ, the pdf with maximum entropy is Gaussian. For a zero-mean Gaussian pdf with standard deviation σ,

    f(x) = (1/\sqrt{2π} σ) e^{-x^2/2σ^2}

and from (2): H(x) = ln(\sqrt{2πe} σ) nats.

For an n-dimensional, zero-mean, multivariate Gaussian pdf with covariance matrix C:

    f(x) = [(2π)^n |C|]^{-1/2} e^{-x^T C^{-1} x / 2}

    H(x) = ln \sqrt{(2πe)^n |C|}  nats    ...(3)

Avoid the problem of non-convergence of H(x) as the signal length increases by using the entropy rate:

    H'(x) = lim_{n→∞} (1/n) ln \sqrt{(2πe)^n |C|} = ln \sqrt{2πe} + (1/2) lim_{n→∞} (ln|C|)/n

If the signal is stationary, C is Toeplitz and

    lim_{n→∞} (ln|C|)/n = (1/2W) \int_{-W}^{W} ln S(f) df

    H'(x) = ln \sqrt{2πe} + (1/4W) \int_{-W}^{W} ln S(f) df    ...(4)
29 Application of ME to Spectral Analysis

Assume we know r_n out to a maximum lag p, and nothing beyond p. If we knew all the r_n, we could find the PSD from

    S(f) = (1/2W) \sum_{n=-\infty}^{\infty} r_n e^{-j2πfnT},    T = 1/(2W)

But the few known r_n are not enough: there are an infinite number of spectra consistent with them, depending on the unknown r_n.

ME Method: find the S(f) which maximises H'(x) subject to

    r_n = \int_{-W}^{W} S(f) e^{j2πfnT} df,    |n| ≤ p    ...(5)

So (4) becomes

    H'(x) = ln \sqrt{2πe} + (1/4W) \int_{-W}^{W} ln[ (1/2W) \sum_n r_n e^{-j2πfnT} ] df

and, differentiating with respect to a given r_n,

    ∂H'(x)/∂r_n ∝ \int_{-W}^{W} (1/S(f)) e^{-j2πfnT} df

Constraint (5) means the derivatives are set to 0 only w.r.t. the r_n with |n| > p. But these derivatives are (proportional to) the Fourier coefficients of 1/S(f). So the ME criterion means that the Fourier expansion of 1/S(f) must have a finite number of terms, and S(f) must be of the form

    S(f) = [ \sum_{n=-p}^{p} q_n e^{-j2πfnT} ]^{-1}    ...(6)
30 Finding the Coefficients {q_n}

Write (6) as a z-transform:

    S(f) = S_D(z)|_{z=e^{j2πfT}},    where S_D(z) = 1/Q(z)

Since Q(z) can be spectrally factored,

    S_D(z) = P / [A(z) A^*(z^{-1})],    where P = 1/q_0 and A(z) = \sum_{i=0}^{p} a_i z^{-i}, a_0 = 1

If R(z) is the z-transform of r_n, then (5) becomes

    R(z) = P / [A(z) A^*(z^{-1})]    i.e.    R(z) A(z) = P / A^*(z^{-1})    ...(7)

As P/A(z) is the z-transform of the impulse response h(n) of a causal system, P/A^*(z^{-1}) is the z-transform of h^*(-n). So (7) reads

    r_n * a_n = h^*(-n)    i.e.    \sum_{i=0}^{p} a_i r_{n-i} = h^*(-n)    ...(8)

Equation (8) is to be satisfied for n = 0 to p (to match r_n out to r_p). But if h(n) is causal, h^*(-n) = 0 for positive n. Hence

    \sum_{i=0}^{p} a_i r_{n-i} = h^*(0)  for n = 0,    = 0  for 1 ≤ n ≤ p    ...(9)

These equations are similar to the ACN equations for a linear predictor. Once we have {a} we can get the power spectrum: use

    S(f) = P / |A(z)|^2 |_{z=e^{j2πfT}}    ...(9a)

and obtain S_ME(f) from the reciprocal of the absolute-value-squared Fourier transform of {a_0, a_1, ... a_p, 0, ... 0}.
31 Linear Prediction and Burg's Method

The similarity of equation (9) to the ACN equations for a linear predictor can be used to develop Burg's method. Recursive (Levinson) solution:

    a_0(0) = 1,    E_0 = r_0

    a_{n+1}(j) = a_n(j) + K_{n+1} a_n(n+1-j)    ...(10a)
    E_{n+1} = E_n (1 - K_{n+1}^2)               ...(10b)
    K_{n+1} = -(1/E_n) \sum_{i=0}^{n} a_n(i) r_{n+1-i}    ...(11)

So far we have assumed we know the r_n in order to find {a}. But the real r_n may not be known - how do we find {K} and {a}?

Burg's Method: suppose we need a way of minimising E_n that
1. is consistent with recursion (10),
2. gives us {K} directly,
3. does not rely on r_n.

There is such a way, based on the recurrence relations among the individual forward and backward prediction errors of an order-p linear predictor:

    f_p(n) = f_{p-1}(n) + K_p b_{p-1}(n)        ...(12a)
    b_p(n) = b_{p-1}(n-1) + K_p f_{p-1}(n-1)    ...(12b)

(Here the unit delay is built into the backward error: b_0(n) = y(n-1), as in the recursion below.)
32 On each recursion step n, use Eq (12) to find the value of K_n that minimises the sum of the mean-squared forward and backward prediction errors over the sequence of length N:

    E = (1/(2(N-n))) \sum_{j=n}^{N-1} [f_n^2(j) + b_n^2(j+1)]
      = (1/(2(N-n))) \sum_{j=n}^{N-1} [f_{n-1}(j) + K_n b_{n-1}(j)]^2 + [b_{n-1}(j) + K_n f_{n-1}(j)]^2

    ∂E/∂K_n = (1/(N-n)) \sum_{j=n}^{N-1} { K_n [f_{n-1}^2(j) + b_{n-1}^2(j)] + 2 f_{n-1}(j) b_{n-1}(j) }

Setting this derivative to 0 gives

    K_n = -2P/Q    ...(13)

with

    P = \sum_{j=n}^{N-1} f_{n-1}(j) b_{n-1}(j),    Q = \sum_{j=n}^{N-1} [f_{n-1}^2(j) + b_{n-1}^2(j)]

(13) is an expression for K_n that uses the errors known from the previous step and does not need r. Having found the new K_n, we can find the new {a} from (10) and the new prediction errors from (12). To start the process, observe that for n = 0 the predicted values are all 0, so the errors are simply the values of the sequence. The Burg recursion then proceeds as follows:
33 Burg's Recursion

1. For n = 0:

    f_0(j) = y(j),    b_0(j) = y(j-1),    j = 1 to N-1;    a_0(0) = 1

2. For n = 1 to p:
   (a) K_n = -2P/Q from (13)
   (b) a_n(n) = K_n and a_n(0) = 1
   (c) From (10a), for i = 1 to n-1:  a_n(i) = a_{n-1}(i) + K_n a_{n-1}(n-i)
   (d) From (12), for j = n to N-1:
       f_n(j) = f_{n-1}(j) + K_n b_{n-1}(j)
       b_n(j) = b_{n-1}(j-1) + K_n f_{n-1}(j-1)

3. Using {a}, obtain S(f) from (9a).

Step 3 is carried out by finding the reciprocal of the squared magnitude of the DFT of {a_0, a_1, ... a_p, 0 ... 0}.
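A compact, vectorised numpy sketch of the recursion (names illustrative). With the b_0(j) = y(j-1) initialisation the unit delay is built into the backward-error array, so step 2(d) becomes the index shifts below:

```python
import numpy as np

def burg(y, p):
    """Burg's recursion: reflection coefficients via forward/backward errors."""
    f = y[1:].astype(float).copy()     # f_0(j) = y(j),   j = 1..N-1
    b = y[:-1].astype(float).copy()    # b_0(j) = y(j-1), j = 1..N-1
    a = np.array([1.0])
    for _ in range(p):
        K = -2 * (f @ b) / (f @ f + b @ b)                # Eq (13)
        a = np.concatenate([a, [0.0]]) \
            + K * np.concatenate([[0.0], a[::-1]])        # Eq (10a) plus a_n(n) = K_n
        f, b = f[1:] + K * b[1:], b[:-1] + K * f[:-1]     # Eq (12)
    return a

# AR(1) data x(k) = 0.9 x(k-1) + w(k): Burg should give A(z) ≈ 1 - 0.9 z^{-1}
rng = np.random.default_rng(9)
w = rng.standard_normal(20000)
x = np.empty_like(w)
x[0] = w[0]
for k in range(1, len(w)):
    x[k] = 0.9 * x[k - 1] + w[k]
a = burg(x, 1)
print(a)   # ≈ [1.0, -0.9]
```

No autocorrelation sequence is ever formed: K comes straight from the error arrays, which is the whole point of the method.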
34 Computing {r} within the Recursion

The Burg method doesn't need the calculation of {r}, but if the autocorrelations are required they can be obtained as a by-product of the Burg recursion. To see how, rewrite Eq (11) as

    K_n = -(1/E_{n-1}) \sum_{i=0}^{n-1} a_{n-1}(i) r_{n-i}

so that

    r_n = -E_{n-1} K_n - \sum_{i=1}^{n-1} a_{n-1}(i) r_{n-i}
        = -K_n \sum_{i=1}^{n} a_{n-1}(n-i) r_{n-i} - \sum_{i=1}^{n-1} a_{n-1}(i) r_{n-i}
        = - \sum_{i=1}^{n} a_n(i) r_{n-i}

(using E_{n-1} = \sum_{j=0}^{n-1} a_{n-1}(j) r_j in the middle step). So we can include the computation of r in the Burg recursion by adding to step 1:

    r_0 = \sum_{j=1}^{N} y^2(j)

and to step 2:

    r_n = - \sum_{i=1}^{n} a_n(i) r_{n-i}
35 Performance of the ME Spectral Estimate

E.g. for 3 sinusoids at 764, 955 and 1019 Hz. [Figures comparing the spectral estimates are not reproduced in this transcription.]
Sinusoidal Modeling Yannis Stylianou University of Crete, Computer Science Dept., Greece, yannis@csd.uoc.gr SPCC 2016 1 Speech Production 2 Modulators 3 Sinusoidal Modeling Sinusoidal Models Voiced Speech
More informationDFT & Fast Fourier Transform PART-A. 7. Calculate the number of multiplications needed in the calculation of DFT and FFT with 64 point sequence.
SHRI ANGALAMMAN COLLEGE OF ENGINEERING & TECHNOLOGY (An ISO 9001:2008 Certified Institution) SIRUGANOOR,TRICHY-621105. DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING UNIT I DFT & Fast Fourier
More information6.435, System Identification
System Identification 6.435 SET 3 Nonparametric Identification Munther A. Dahleh 1 Nonparametric Methods for System ID Time domain methods Impulse response Step response Correlation analysis / time Frequency
More informationSignals and Spectra - Review
Signals and Spectra - Review SIGNALS DETERMINISTIC No uncertainty w.r.t. the value of a signal at any time Modeled by mathematical epressions RANDOM some degree of uncertainty before the signal occurs
More informationENSC327 Communications Systems 2: Fourier Representations. Jie Liang School of Engineering Science Simon Fraser University
ENSC327 Communications Systems 2: Fourier Representations Jie Liang School of Engineering Science Simon Fraser University 1 Outline Chap 2.1 2.5: Signal Classifications Fourier Transform Dirac Delta Function
More informationChapter 9. Linear Predictive Analysis of Speech Signals 语音信号的线性预测分析
Chapter 9 Linear Predictive Analysis of Speech Signals 语音信号的线性预测分析 1 LPC Methods LPC methods are the most widely used in speech coding, speech synthesis, speech recognition, speaker recognition and verification
More informationCh. 15 Wavelet-Based Compression
Ch. 15 Wavelet-Based Compression 1 Origins and Applications The Wavelet Transform (WT) is a signal processing tool that is replacing the Fourier Transform (FT) in many (but not all!) applications. WT theory
More informationNon-parametric identification
Non-parametric Non-parametric Transient Step-response using Spectral Transient Correlation Frequency function estimate Spectral System Identification, SSY230 Non-parametric 1 Non-parametric Transient Step-response
More informationDigital Signal Processing Course. Nathalie Thomas Master SATCOM
Digital Signal Processing Course Nathalie Thomas Master SATCOM 2018 2019 Chapitre 1 Introduction 1.1 Signal digitization : sampling and quantization 1.1.1 Signal sampling Let's call x(t) the original deterministic
More informationLike bilateral Laplace transforms, ROC must be used to determine a unique inverse z-transform.
Inversion of the z-transform Focus on rational z-transform of z 1. Apply partial fraction expansion. Like bilateral Laplace transforms, ROC must be used to determine a unique inverse z-transform. Let X(z)
More informationEE482: Digital Signal Processing Applications
Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu EE482: Digital Signal Processing Applications Spring 2014 TTh 14:305:45 CBC C222 Lecture 8 Frequency Analysis 14/02/18 http://www.ee.unlv.edu/~b1morris/ee482/
More informationGATE EE Topic wise Questions SIGNALS & SYSTEMS
www.gatehelp.com GATE EE Topic wise Questions YEAR 010 ONE MARK Question. 1 For the system /( s + 1), the approximate time taken for a step response to reach 98% of the final value is (A) 1 s (B) s (C)
More informationSTAD57 Time Series Analysis. Lecture 23
STAD57 Time Series Analysis Lecture 23 1 Spectral Representation Spectral representation of stationary {X t } is: 12 i2t Xt e du 12 1/2 1/2 for U( ) a stochastic process with independent increments du(ω)=
More informationLecture 4: FT Pairs, Random Signals and z-transform
EE518 Digital Signal Processing University of Washington Autumn 2001 Dept. of Electrical Engineering Lecture 4: T Pairs, Rom Signals z-transform Wed., Oct. 10, 2001 Prof: J. Bilmes
More informationTHE PROCESSING of random signals became a useful
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 58, NO. 11, NOVEMBER 009 3867 The Quality of Lagged Products and Autoregressive Yule Walker Models as Autocorrelation Estimates Piet M. T. Broersen
More informationDiscrete Time Fourier Transform (DTFT)
Discrete Time Fourier Transform (DTFT) 1 Discrete Time Fourier Transform (DTFT) The DTFT is the Fourier transform of choice for analyzing infinite-length signals and systems Useful for conceptual, pencil-and-paper
More informationLecture 14: Windowing
Lecture 14: Windowing ECE 401: Signal and Image Analysis University of Illinois 3/29/2017 1 DTFT Review 2 Windowing 3 Practical Windows Outline 1 DTFT Review 2 Windowing 3 Practical Windows DTFT Review
More informationWiener-Khintchine Theorem. White Noise. Wiener-Khintchine Theorem. Power Spectrum Estimation
Wiener-Khintchine Theorem Let x(n) be a WSS random process with autocorrelation sequence r xx (m) =E[x(n + m)x (n)] The power spectral density is defined as the Discrete Time Fourier Transform of the autocorrelation
More informationHST.582J / 6.555J / J Biomedical Signal and Image Processing Spring 2007
MIT OpenCourseWare http://ocw.mit.edu HST.58J / 6.555J / 16.456J Biomedical Signal and Image Processing Spring 007 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
More informationE : Lecture 1 Introduction
E85.2607: Lecture 1 Introduction 1 Administrivia 2 DSP review 3 Fun with Matlab E85.2607: Lecture 1 Introduction 2010-01-21 1 / 24 Course overview Advanced Digital Signal Theory Design, analysis, and implementation
More information16.584: Random (Stochastic) Processes
1 16.584: Random (Stochastic) Processes X(t): X : RV : Continuous function of the independent variable t (time, space etc.) Random process : Collection of X(t, ζ) : Indexed on another independent variable
More information6.3 Forecasting ARMA processes
6.3. FORECASTING ARMA PROCESSES 123 6.3 Forecasting ARMA processes The purpose of forecasting is to predict future values of a TS based on the data collected to the present. In this section we will discuss
More informationEE-210. Signals and Systems Homework 7 Solutions
EE-20. Signals and Systems Homework 7 Solutions Spring 200 Exercise Due Date th May. Problems Q Let H be the causal system described by the difference equation w[n] = 7 w[n ] 2 2 w[n 2] + x[n ] x[n 2]
More informationEE538 Final Exam Fall 2007 Mon, Dec 10, 8-10 am RHPH 127 Dec. 10, Cover Sheet
EE538 Final Exam Fall 2007 Mon, Dec 10, 8-10 am RHPH 127 Dec. 10, 2007 Cover Sheet Test Duration: 120 minutes. Open Book but Closed Notes. Calculators allowed!! This test contains five problems. Each of
More informationSNR Calculation and Spectral Estimation [S&T Appendix A]
SR Calculation and Spectral Estimation [S&T Appendix A] or, How not to make a mess of an FFT Make sure the input is located in an FFT bin 1 Window the data! A Hann window works well. Compute the FFT 3
More information7. Line Spectra Signal is assumed to consists of sinusoidals as
7. Line Spectra Signal is assumed to consists of sinusoidals as n y(t) = α p e j(ω pt+ϕ p ) + e(t), p=1 (33a) where α p is amplitude, ω p [ π, π] is angular frequency and ϕ p initial phase. By using β
More informationEAS 305 Random Processes Viewgraph 1 of 10. Random Processes
EAS 305 Random Processes Viewgraph 1 of 10 Definitions: Random Processes A random process is a family of random variables indexed by a parameter t T, where T is called the index set λ i Experiment outcome
More informationECE 636: Systems identification
ECE 636: Systems identification Lectures 3 4 Random variables/signals (continued) Random/stochastic vectors Random signals and linear systems Random signals in the frequency domain υ ε x S z + y Experimental
More informationLeast Square Es?ma?on, Filtering, and Predic?on: ECE 5/639 Sta?s?cal Signal Processing II: Linear Es?ma?on
Least Square Es?ma?on, Filtering, and Predic?on: Sta?s?cal Signal Processing II: Linear Es?ma?on Eric Wan, Ph.D. Fall 2015 1 Mo?va?ons If the second-order sta?s?cs are known, the op?mum es?mator is given
More informationECE534, Spring 2018: Solutions for Problem Set #5
ECE534, Spring 08: s for Problem Set #5 Mean Value and Autocorrelation Functions Consider a random process X(t) such that (i) X(t) ± (ii) The number of zero crossings, N(t), in the interval (0, t) is described
More information3F1 Random Processes Examples Paper (for all 6 lectures)
3F Random Processes Examples Paper (for all 6 lectures). Three factories make the same electrical component. Factory A supplies half of the total number of components to the central depot, while factories
More informationChapter 6: Nonparametric Time- and Frequency-Domain Methods. Problems presented by Uwe
System Identification written by L. Ljung, Prentice Hall PTR, 1999 Chapter 6: Nonparametric Time- and Frequency-Domain Methods Problems presented by Uwe System Identification Problems Chapter 6 p. 1/33
More informationIntroduction to Probability and Stochastic Processes I
Introduction to Probability and Stochastic Processes I Lecture 3 Henrik Vie Christensen vie@control.auc.dk Department of Control Engineering Institute of Electronic Systems Aalborg University Denmark Slides
More informationAutomatic Autocorrelation and Spectral Analysis
Piet M.T. Broersen Automatic Autocorrelation and Spectral Analysis With 104 Figures Sprin ger 1 Introduction 1 1.1 Time Series Problems 1 2 Basic Concepts 11 2.1 Random Variables 11 2.2 Normal Distribution
More informationFourier Methods in Digital Signal Processing Final Exam ME 579, Spring 2015 NAME
Fourier Methods in Digital Signal Processing Final Exam ME 579, Instructions for this CLOSED BOOK EXAM 2 hours long. Monday, May 8th, 8-10am in ME1051 Answer FIVE Questions, at LEAST ONE from each section.
More informationProbability Space. J. McNames Portland State University ECE 538/638 Stochastic Signals Ver
Stochastic Signals Overview Definitions Second order statistics Stationarity and ergodicity Random signal variability Power spectral density Linear systems with stationary inputs Random signal memory Correlation
More informationINTRODUCTION TO DELTA-SIGMA ADCS
ECE37 Advanced Analog Circuits INTRODUCTION TO DELTA-SIGMA ADCS Richard Schreier richard.schreier@analog.com NLCOTD: Level Translator VDD > VDD2, e.g. 3-V logic? -V logic VDD < VDD2, e.g. -V logic? 3-V
More informationTime series models in the Frequency domain. The power spectrum, Spectral analysis
ime series models in the Frequency domain he power spectrum, Spectral analysis Relationship between the periodogram and the autocorrelations = + = ( ) ( ˆ α ˆ ) β I Yt cos t + Yt sin t t= t= ( ( ) ) cosλ
More informationGeneralized Sidelobe Canceller and MVDR Power Spectrum Estimation. Bhaskar D Rao University of California, San Diego
Generalized Sidelobe Canceller and MVDR Power Spectrum Estimation Bhaskar D Rao University of California, San Diego Email: brao@ucsd.edu Reference Books 1. Optimum Array Processing, H. L. Van Trees 2.
More informationTime Series Examples Sheet
Lent Term 2001 Richard Weber Time Series Examples Sheet This is the examples sheet for the M. Phil. course in Time Series. A copy can be found at: http://www.statslab.cam.ac.uk/~rrw1/timeseries/ Throughout,
More informationStationary Graph Processes: Nonparametric Spectral Estimation
Stationary Graph Processes: Nonparametric Spectral Estimation Santiago Segarra, Antonio G. Marques, Geert Leus, and Alejandro Ribeiro Dept. of Signal Theory and Communications King Juan Carlos University
More information2.161 Signal Processing: Continuous and Discrete Fall 2008
MIT OpenCourseWare http://ocw.mit.edu 2.161 Signal Processing: Continuous and Discrete Fall 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. Massachusetts
More informationThis examination consists of 10 pages. Please check that you have a complete copy. Time: 2.5 hrs INSTRUCTIONS
THE UNIVERSITY OF BRITISH COLUMBIA Department of Electrical and Computer Engineering EECE 564 Detection and Estimation of Signals in Noise Final Examination 08 December 2009 This examination consists of
More informationThis examination consists of 11 pages. Please check that you have a complete copy. Time: 2.5 hrs INSTRUCTIONS
THE UNIVERSITY OF BRITISH COLUMBIA Department of Electrical and Computer Engineering EECE 564 Detection and Estimation of Signals in Noise Final Examination 6 December 2006 This examination consists of
More informationLinear Model Under General Variance Structure: Autocorrelation
Linear Model Under General Variance Structure: Autocorrelation A Definition of Autocorrelation In this section, we consider another special case of the model Y = X β + e, or y t = x t β + e t, t = 1,..,.
More informationL6: Short-time Fourier analysis and synthesis
L6: Short-time Fourier analysis and synthesis Overview Analysis: Fourier-transform view Analysis: filtering view Synthesis: filter bank summation (FBS) method Synthesis: overlap-add (OLA) method STFT magnitude
More informationFourier Analysis Linear transformations and lters. 3. Fourier Analysis. Alex Sheremet. April 11, 2007
Stochastic processes review 3. Data Analysis Techniques in Oceanography OCP668 April, 27 Stochastic processes review Denition Fixed ζ = ζ : Function X (t) = X (t, ζ). Fixed t = t: Random Variable X (ζ)
More informationTime Series Examples Sheet
Lent Term 2001 Richard Weber Time Series Examples Sheet This is the examples sheet for the M. Phil. course in Time Series. A copy can be found at: http://www.statslab.cam.ac.uk/~rrw1/timeseries/ Throughout,
More informationDiscrete Fourier Transform
Discrete Fourier Transform Valentina Hubeika, Jan Černocký DCGM FIT BUT Brno, {ihubeika,cernocky}@fit.vutbr.cz Diskrete Fourier transform (DFT) We have just one problem with DFS that needs to be solved.
More information7.16 Discrete Fourier Transform
38 Signals, Systems, Transforms and Digital Signal Processing with MATLAB i.e. F ( e jω) = F [f[n]] is periodic with period 2π and its base period is given by Example 7.17 Let x[n] = 1. We have Π B (Ω)
More informationParametric Signal Modeling and Linear Prediction Theory 1. Discrete-time Stochastic Processes
1 Discrete-time Stochastic Processes Parametric Signal Modeling and Linear Prediction Theory 1. Discrete-time Stochastic Processes Electrical & Computer Engineering University of Maryland, College Park
More informationStatistical signal processing
Statistical signal processing Short overview of the fundamentals Outline Random variables Random processes Stationarity Ergodicity Spectral analysis Random variable and processes Intuition: A random variable
More informationTime Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY
Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY & Contents PREFACE xiii 1 1.1. 1.2. Difference Equations First-Order Difference Equations 1 /?th-order Difference
More informationEE482: Digital Signal Processing Applications
Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu EE482: Digital Signal Processing Applications Spring 2014 TTh 14:30-15:45 CBC C222 Lecture 11 Adaptive Filtering 14/03/04 http://www.ee.unlv.edu/~b1morris/ee482/
More information