Harmonic Analysis: I
- consider the following two-part stationary process:
      X_t = µ + Σ_{l=1}^L D_l cos(2π f_l t Δt + φ_l) + η_t,
  where part (1) is µ + Σ_l D_l cos(2π f_l t Δt + φ_l) and part (2) is η_t
- part (1) is a harmonic process (Equation (37c))
- µ, L, the D_l's and the f_l's are real-valued constants
- the φ_l's are IID RVs with a uniform PDF over [−π, π]
- part (2) has a purely continuous spectrum (Chpt. 6, 7, 8 & 9): mean zero, variance σ²_η, SDF denoted S_η(·)
- the η_t's and φ_l's are independent of each other
- part (2) is called the background continuum

Harmonic Analysis: II
- basic properties of X_t = µ + Σ_{l=1}^L D_l cos(2π f_l t Δt + φ_l) + η_t:
      E{X_t} = µ   and   var{X_t} = Σ_{l=1}^L D_l²/2 + σ²_η
- can define the SDF using δ(·) functions:
      S_X(f) = Σ_{l=1}^L (D_l²/4) [δ(f − f_l) + δ(f + f_l)] + S_η(f)

Harmonic Analysis: III
three cases of interest:
1. {η_t} absent
   - S_X(·) called a line (or purely discrete) spectrum
   - φ_l's fixed for each realization
   - given sufficient data, can in theory determine µ etc. perfectly (not really a statistical model)
   - useful in, e.g., tidal analysis
2. {η_t} = {ε_t}, a white noise process
   - S_X(·) called a discrete spectrum
   (a) f_l's known: estimate µ, the D_l's, the φ_l's and σ²_ε
   (b) f_l's unknown: estimate the f_l's also
   - need the f_l's, D_l's and σ²_ε to form the SDF

Harmonic Analysis: IV
3. {η_t} is colored noise
   - S_X(·) called a mixed spectrum
   (a) f_l's known: estimate µ, the D_l's, the φ_l's and S_η(·)
   (b) f_l's unknown: estimate the f_l's also
   - need the f_l's, D_l's & S_η(·) to form the SDF
- start with 2(a) with L = 1, i.e.,
      X_t = µ + D_1 cos(2π f_1 t Δt + φ_1) + ε_t,
  where f_1 is assumed to be known, and {ε_t} is white noise with mean zero and variance σ²_ε

Discrete Spectrum with Known f_1: I
- use cos(x + y) = cos(x) cos(y) − sin(x) sin(y) to rewrite 2(a):
      X_t = µ + D_1 cos(2π f_1 t Δt + φ_1) + ε_t
          = µ + A_1 cos(2π f_1 t Δt) + B_1 sin(2π f_1 t Δt) + ε_t,
  where A_1 ≡ D_1 cos(φ_1) and B_1 ≡ −D_1 sin(φ_1)
- note: φ_1 is fixed for each realization of X_0, ..., X_{N−1}
- our task: estimate µ, D_1, φ_1, σ²_ε or, equivalently, µ, A_1, B_1, σ²_ε

Discrete Spectrum with Known f_1: II
- for a given realization, A_1 and B_1 are constants, so
      X_t = µ + A_1 cos(2π f_1 t Δt) + B_1 sin(2π f_1 t Δt) + ε_t
  is a linear regression model
- given X_0, X_1, ..., X_{N−1}, can write

      [ X_0     ]   [ 1   1                      0                     ]           [ ε_0     ]
      [ X_1     ]   [ 1   cos(2π f_1 Δt)         sin(2π f_1 Δt)        ] [ µ   ]   [ ε_1     ]
      [ X_2     ] = [ 1   cos(4π f_1 Δt)         sin(4π f_1 Δt)        ] [ A_1 ] + [ ε_2     ]
      [ ⋮       ]   [ ⋮   ⋮                      ⋮                     ] [ B_1 ]   [ ⋮       ]
      [ X_{N−1} ]   [ 1   cos([N−1] 2π f_1 Δt)   sin([N−1] 2π f_1 Δt)  ]           [ ε_{N−1} ]

- can express the above in matrix notation as X = Hβ + ε, with β ≡ [µ, A_1, B_1]^T
- let ‖X‖² ≡ Σ_{t=0}^{N−1} X_t² denote the squared Euclidean norm of X

Least Squares Estimation of β: I
- estimate β by the vector β̂ minimizing
      SS(β) ≡ ‖X − Hβ‖² = Σ_{t=0}^{N−1} [X_t − µ − A_1 cos(2π f_1 t Δt) − B_1 sin(2π f_1 t Δt)]²
- β̂ is called the least squares estimator of β
- to find β̂, differentiate the above with respect to β,
      ∂SS(β)/∂β = −2 H^T (X − Hβ),
  and set to the zero vector to obtain the so-called normal equations:
      H^T H β̂ = H^T X

Least Squares Estimation of β: II
- least squares estimator of σ²_ε given by
      σ̂²_ε = ‖X − H β̂‖² / (N − 3)
  (note: 3 = number of estimated parameters)
- least squares theory says E{β̂} = β and E{σ̂²_ε} = σ²_ε

Least Squares Estimation of β: III
- suppose f_1 is a Fourier frequency k/(N Δt) with 1 ≤ k < N/2
- β̂ simplifies greatly:
      µ̂ = X̄,   Â_1 = (2/N) Σ_{t=0}^{N−1} X_t cos(2π f_1 t Δt)   and   B̂_1 = (2/N) Σ_{t=0}^{N−1} X_t sin(2π f_1 t Δt)
- var{µ̂} = σ²_ε/N;   var{Â_1} = var{B̂_1} = 2σ²_ε/N
- µ̂, Â_1 & B̂_1 are pairwise uncorrelated
- if f_1 is not a Fourier frequency, the above are still good approximations as long as f_1 is not too close to 0 or f_N
- if L > 1 and the f_l's are all Fourier frequencies, then the Â_l's and B̂_l's have forms analogous to the above, and the Â_l's and B̂_l's are pairwise uncorrelated
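To make the simplified estimators concrete, here is a minimal numerical sketch (my own illustration, not from the overheads): for a simulated series whose f_1 sits on a Fourier frequency, µ̂, Â_1 and B̂_1 computed from the sums above agree with the general least squares solution of X = Hβ + ε.

```python
import numpy as np

# sketch: closed-form LS estimates when f1 is a Fourier frequency (Delta_t = 1)
rng = np.random.default_rng(42)
N, dt = 256, 1.0
f1 = 20 / (N * dt)                      # Fourier frequency k/(N dt), here k = 20
t = np.arange(N)
mu, D1, phi1 = 3.0, 2.0, 0.6
A1, B1 = D1 * np.cos(phi1), -D1 * np.sin(phi1)
X = mu + D1 * np.cos(2 * np.pi * f1 * t * dt + phi1) + rng.normal(0, 1.0, N)

# closed-form estimators valid at a Fourier frequency
mu_hat = X.mean()
A1_hat = (2 / N) * np.sum(X * np.cos(2 * np.pi * f1 * t * dt))
B1_hat = (2 / N) * np.sum(X * np.sin(2 * np.pi * f1 * t * dt))

# general least squares solution of X = H beta + eps, for comparison
H = np.column_stack([np.ones(N),
                     np.cos(2 * np.pi * f1 * t * dt),
                     np.sin(2 * np.pi * f1 * t * dt)])
beta_hat, *_ = np.linalg.lstsq(H, X, rcond=None)
print(mu_hat, A1_hat, B1_hat)   # closed-form estimates
print(beta_hat)                 # essentially the same values
```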

Connection to Periodogram: I
- recall A_1 ≡ D_1 cos(φ_1) and B_1 ≡ −D_1 sin(φ_1)
- since A_1² + B_1² = D_1², define D̂_1² ≡ Â_1² + B̂_1², so
      D̂_1² = [(2/N) Σ_{t=0}^{N−1} X_t cos(2π f_1 t Δt)]² + [(2/N) Σ_{t=0}^{N−1} X_t sin(2π f_1 t Δt)]²
            = (4/N²) |Σ_{t=0}^{N−1} X_t e^{−i2π f_1 t Δt}|²
            = 4 Ŝ^(p)(f_1) / (N Δt)
- note that E{Â_1²} = A_1² + var{Â_1} = A_1² + 2σ²_ε/N; similarly E{B̂_1²} = B_1² + 2σ²_ε/N
- hence E{D̂_1²} = A_1² + B_1² + 4σ²_ε/N = D_1² + 4σ²_ε/N

Connection to Periodogram: II
- use Ŝ^(p)(f_1) = N Δt D̂_1²/4 and E{D̂_1²} = D_1² + 4σ²_ε/N to get
      E{Ŝ^(p)(f_1)} = (N Δt/4) E{D̂_1²} = (N Δt/4) D_1² + σ²_ε Δt
- the expected value increases with the sample size N
- if L > 1 & the f_l's are Fourier frequencies, then E{Ŝ^(p)(f_l)} = (N Δt/4) D_l² + σ²_ε Δt
- if a Fourier frequency f_k is not equal to any f_l in the model, add a dummy term to the model with D_k = 0 to get E{Ŝ^(p)(f_k)} = σ²_ε Δt, which is independent of N

Determining Unknown f_l's: I
- if the unknown f_l's are Fourier frequencies f_k, can identify them by searching Ŝ^(p)(f_k) for large values
- if the f_l's are not necessarily so, can use the spectral representation theorem to show that
      E{Ŝ^(p)(f)} = σ²_ε Δt + Σ_{l=1}^L (D_l²/4) [F(f + f_l) + F(f − f_l)]
- since Fejér's kernel F(·) → δ(·) as N → ∞, large values of Ŝ^(p)(·) indicate possible f_l's

Determining Unknown f_l's: II
- Q: how well can f_l be estimated from the periodogram?
- if L = 1 and f̂_1 is the location of the maximum of Ŝ^(p)(·), can argue that
      E{f̂_1} = f_1 + O(1/N)   and   var{f̂_1} ≈ 3/(N³ R π² Δt²),
  where R ≡ D_1²/(2σ²_ε)
- regard R as a signal-to-noise ratio in view of var{X_t} = D_1²/2 + σ²_ε
- if L > 1, the above still holds as long as the f_l's are well separated
- since mean square error = variance + squared bias, have
      MSE{f̂_1} ≈ 3/(N³ R π² Δt²) + O(1/N²),
  so the MSE is dominated by the bias; approximately have MSE{f̂_1} ∝ 1/(N Δt)²

Determining Unknown f_l's: III
- practical problems with using Ŝ^(p)(·):
  - E{Ŝ^(p)(·)} can be complicated for L > 1
  - we observe Ŝ^(p)(·), not E{Ŝ^(p)(·)}: χ²_2 noise with heavy upper tails, hence the need for statistical tests!
- consider the periodogram for the Kay & Marple (1981) example:
      X_t = 0.9 cos(2πt/7.5) + 0.9 cos(2πt/5.3 + π/2) + cos(2πt/3) + ε_t,
  where {ε_t} is zero mean Gaussian white noise with variance σ²_ε
- for t = 1, 2, ..., 16 (a small sample size!), get
  - 2+ cycles of f = 1/7.5 ≈ 0.133 (thin solid)
  - 3 cycles of f = 1/5.3 ≈ 0.189 (dashed)
  - 5+ cycles of f = 1/3 (thick solid)
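A sketch of this example (assuming Δt = 1 and an illustrative noise variance, since σ²_ε is not specified here): generate the three-sinusoid series for t = 1, ..., 16 and evaluate the periodogram on a zero-padded grid like the k/(16N) grid used in the plots that follow.

```python
import numpy as np

# Kay & Marple (1981) example: three sinusoids, t = 1,...,16 (Delta_t = 1)
N = 16
t = np.arange(1, N + 1)
rng = np.random.default_rng(1)
sigma2 = 0.01                                   # illustrative noise variance only
x = (0.9 * np.cos(2 * np.pi * t / 7.5)
     + 0.9 * np.cos(2 * np.pi * t / 5.3 + np.pi / 2)
     + np.cos(2 * np.pi * t / 3)
     + rng.normal(0, np.sqrt(sigma2), N))

# periodogram S^(p)(f) = |sum_t x_t exp(-i 2 pi f t)|^2 / N on the grid f = k/(16N)
npad = 16 * N
S_p = np.abs(np.fft.rfft(x, n=npad)) ** 2 / N
f = np.arange(len(S_p)) / npad
print("largest periodogram ordinate at f =", f[np.argmax(S_p)])  # near 1/3
```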

Plot: noise-free Kay & Marple (K&M) time series, N = 16 (x_t versus t)

Plot: periodogram Ŝ^(p)(f_k) vs. f_k = k/N for the K&M series, N = 16, no noise

Plot: periodogram Ŝ^(p)(f̃_k) vs. f̃_k = k/(2N) for the K&M series, N = 16, no noise

Plot: periodogram Ŝ^(p)(f̃_k) vs. f̃_k = k/(16N) for the K&M series, N = 16, no noise

Plots (three slides): K&M time series, N = 16, σ²_ε = …

Plots (three slides): periodogram Ŝ^(p)(f̃_k) vs. f̃_k = k/(16N) for the K&M series, N = 16, σ²_ε = …

Plots (four slides): K&M time series, σ²_ε = …, for N = 16, 32, 64 and 128

Plots (four slides): periodogram Ŝ^(p)(f̃_k) vs. f̃_k = k/(16N) for the K&M series, σ²_ε = …, N = 16, 32, 64 and 128

Plots (four slides): periodogram (dB) Ŝ^(p)(f̃_k) vs. f̃_k = k/(16N) for the K&M series, σ²_ε = …, N = 16, 32, 64 and 128

Determining Unknown f_l's: IV
- recall that
      E{Ŝ^(p)(f)} = σ²_ε Δt + Σ_{l=1}^L (D_l²/4) [F(f + f_l) + F(f − f_l)]
- the poor sidelobes of Fejér's kernel suggest tapering
- if we use a taper {h_t} ⟷ H(·) to form Ŝ^(d)(·), have
      E{Ŝ^(d)(f)} = σ²_ε Δt + Σ_{l=1}^L (D_l²/4) [ℋ(f + f_l) + ℋ(f − f_l)],
  where ℋ(·) ≡ |H(·)|² is the spectral window for {h_t}

Determining Unknown f_l's: V
- the following plots show three Ŝ^(d)(·)'s for X_0, ..., X_255 from
      X_t = cos(0.2943πt + φ_1) + cos(0.3333πt + φ_2) + cos(0.3971πt + φ_3) + ε_t,
  where {ε_t} is zero mean Gaussian white noise with variance σ²_ε = … (the −100 dB level is indicated by a horizontal dashed line; vertical dashed lines indicate the three f_l's)
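A sketch of how such tapered (direct) SDF estimates can be computed with scipy's Slepian (dpss) windows; the noise standard deviation of 10⁻⁵ below is an illustrative stand-in for the very low noise floor described above.

```python
import numpy as np
from scipy.signal.windows import dpss

def direct_sdf(x, nw, dt=1.0, npad=4096):
    """Direct SDF estimate using a zeroth-order Slepian (dpss) data taper."""
    N = len(x)
    h = dpss(N, nw)                     # zeroth-order Slepian taper
    h = h / np.sqrt(np.sum(h ** 2))     # normalize so that sum_t h_t^2 = 1
    S = dt * np.abs(np.fft.rfft(h * x, n=npad)) ** 2
    f = np.arange(len(S)) / (npad * dt)
    return f, S

rng = np.random.default_rng(0)
t = np.arange(256)
phases = rng.uniform(-np.pi, np.pi, 3)
x = (np.cos(0.2943 * np.pi * t + phases[0])
     + np.cos(0.3333 * np.pi * t + phases[1])
     + np.cos(0.3971 * np.pi * t + phases[2])
     + rng.normal(0, 1e-5, 256))        # very weak background noise (illustrative)

for nw in (2, 4):
    f, S = direct_sdf(x, nw)
    print(f"NW = {nw}: largest ordinate near f = {f[np.argmax(S)]:.4f}")
```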

Plot: periodogram and central lobe of F(·) (direct SDF estimate, dB, versus f)

Plot: Ŝ^(d)(·) with an NW = 2 Slepian taper and central lobe of ℋ(·) (dB, versus f)

Plot: Ŝ^(d)(·) with an NW = 4 Slepian taper and central lobe of ℋ(·) (dB, versus f)

Determining Unknown f_l's: VI
- tapering is also useful for L = 1: because
      E{Ŝ^(p)(f)} = σ²_ε Δt + (D_1²/4) [F(f + f_1) + F(f − f_1)],
  can get interference between the two Fejér's kernels, which can be lessened by tapering
- example: 50 realizations of
      X_t = cos(2π f_1 t Δt + π/4) + ε_t,   t = 0, ..., 100,
  where {ε_t} is zero mean Gaussian white noise with variance σ²_ε = 0.0001 (yields an SNR of 37 dB); f_1 = 7.25 Hz; Δt = 0.01 seconds (yields f_N = 50 Hz)
- the following plot shows the peak frequency from Ŝ^(d)(·) with an NW = 2 Slepian taper versus the peak frequency from the periodogram Ŝ^(p)(·)
- the mean square error for the periodogram-based estimator is dominated by bias, so tapering is useful

Plot: peak frequencies (Hz) from Ŝ^(d)(·) versus peak frequencies from Ŝ^(p)(·)

Tests for Periodicity: I
- consider the discrete spectrum case first:
      S_X(f) = Σ_{l=1}^L (D_l²/4) [δ(f − f_l) + δ(f + f_l)] + σ²_ε Δt;
  i.e., a white noise background
- will assume Gaussianity
- under the null hypothesis D_1 = ⋯ = D_L = 0, X_t is just Gaussian white noise
- under the alternative hypothesis that D_l > 0 for at least one f_l, the periodogram should be large at f_l
- the following plots show periodograms under the null hypothesis and illustrate the need for statistical tests

Plot: periodogram for white noise and true SDF, N = …

Plot: periodogram for white noise and true SDF, N = …

Tests for Periodicity: II
- assume for convenience that N is odd, so N = 2m + 1 (not restrictive: easy to adjust for even N)
- under the null hypothesis, Section 6.6 says that
      2 Ŝ^(p)(f_k) / (σ²_ε Δt) ~ χ²_2 (equality in distribution),
  with independent RVs over f_k ≡ k/(N Δt), k = 1, ..., m
- the PDF for a χ²_2 RV is given by f(u) = e^{−u/2}/2, 0 ≤ u < ∞
- can thus write
      P[2 Ŝ^(p)(f_k)/(σ²_ε Δt) ≤ u_0] = P[χ²_2 ≤ u_0] = ∫_0^{u_0} f(u) du = 1 − e^{−u_0/2}

Schuster's Test (1898)
- let
      γ ≡ max_{1≤k≤m} 2 Ŝ^(p)(f_k) / (σ²_ε Δt)
- the distribution for γ is given by the following:
      P[γ > u_0] = 1 − P[γ ≤ u_0]
                 = 1 − P[2 Ŝ^(p)(f_k)/(σ²_ε Δt) ≤ u_0 for 1 ≤ k ≤ m]
                 = 1 − (1 − e^{−u_0/2})^m
- setting this equal to α implies u_0 = −2 log(1 − (1 − α)^{1/m})
- under the alternative hypothesis, γ tends to be large, so reject the null hypothesis if γ > u_0 for an α level test (typically α = 0.05 or 0.01)
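A sketch of Schuster's test (assuming odd N and a known σ²_ε, with the periodogram computed directly from the FFT):

```python
import numpy as np

def schuster_test(x, sigma2, dt=1.0, alpha=0.05):
    """Schuster's test for periodicity when the white noise variance is known.

    Returns (gamma, u0, reject): gamma = max_k 2*S^(p)(f_k)/(sigma2*dt) over the
    Fourier frequencies f_k = k/(N dt), k = 1,...,m (N odd assumed)."""
    N = len(x)
    m = (N - 1) // 2
    S_p = dt * np.abs(np.fft.fft(x)) ** 2 / N
    gamma = np.max(2 * S_p[1:m + 1] / (sigma2 * dt))
    u0 = -2 * np.log(1 - (1 - alpha) ** (1 / m))
    return gamma, u0, gamma > u0

# example: pure white noise should (usually) not be rejected
rng = np.random.default_rng(7)
print(schuster_test(rng.normal(0, 1, 129), sigma2=1.0))
```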

Fisher's Test (1929): I
- critique of Schuster's test: need to know σ²_ε
- Fisher (1929) proposed a test based upon the test statistic
      g ≡ max_{1≤k≤m} Ŝ^(p)(f_k) / Σ_{j=1}^m Ŝ^(p)(f_j)
  (note: σ²_ε factors out of the ratio defining g)
- because N is odd, have
      Σ_{j=1}^m Ŝ^(p)(f_j) = (Δt/2) Σ_{t=0}^{N−1} (X_t − X̄)²,
  which is proportional to an estimator of σ²_ε under the null hypothesis

Fisher's Test (1929): II
- the exact distribution of g is given by
      P[g > g_0] = Σ_{j=1}^M (−1)^{j−1} (m choose j) (1 − j g_0)^{m−1} ≈ m (1 − g_0)^{m−1},
  where M is the largest integer ≤ min{1/g_0, m}
- hence P[g > g_0] ≈ α when m(1 − g_0)^{m−1} = α, i.e., when g_0 = 1 − (α/m)^{1/(m−1)} ≡ g_F
- g_F actually satisfies P[g > g_F] < α; i.e., g_F corresponds to a critical level < α
- g_F is within 1% of the exact critical value g_0
- for a (< α) level test, reject the null hypothesis if g > g_F
- critique: best for an alternative hypothesis with L = 1
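A sketch of Fisher's g test along these lines (odd N assumed; both the exact tail probability and the approximate critical value g_F are computed):

```python
import numpy as np
from math import comb

def fisher_g_test(x, dt=1.0, alpha=0.05):
    """Fisher's g test for a single periodicity against white noise (N odd)."""
    N = len(x)
    m = (N - 1) // 2
    S_p = dt * np.abs(np.fft.fft(x - x.mean())) ** 2 / N
    S_p = S_p[1:m + 1]
    g = S_p.max() / S_p.sum()
    # exact upper-tail probability P[g > g0] evaluated at g0 = g
    M = min(int(1 / g), m)
    p_value = sum((-1) ** (j - 1) * comb(m, j) * (1 - j * g) ** (m - 1)
                  for j in range(1, M + 1))
    g_F = 1 - (alpha / m) ** (1 / (m - 1))     # approximate critical value
    return g, p_value, g_F, g > g_F

rng = np.random.default_rng(3)
t = np.arange(129)
x = 0.5 * np.cos(2 * np.pi * 0.2 * t + 1.0) + rng.normal(0, 1, 129)
print(fisher_g_test(x))
```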

Siegel's Test (1980): I
- Siegel (1980) proposed using all large values of the rescaled periodogram
      S̃^(p)(f_k) ≡ Ŝ^(p)(f_k) / Σ_{j=1}^m Ŝ^(p)(f_j)
  rather than just the largest value (note g = max_k S̃^(p)(f_k))
- with 0 < λ ≤ 1, Siegel's test statistic is
      T_λ ≡ Σ_{k=1}^m (S̃^(p)(f_k) − λ g_F)_+ ,
  where (a)_+ ≡ max(a, 0) (i.e., the "positive part")
- if λ = 1, then P[T_1 > 0] = P[g > g_F]
- if λ < 1, the test uses all large values of S̃^(p)(f_k)

Siegel's Test (1980): II
- Siegel (1979) gives a formula to get t_λ such that P[T_λ > t_λ] = α
- for an α level test, reject the null hypothesis if T_λ > t_λ
- with λ = 0.6, Siegel found T_0.6 slightly less powerful than g vs. the L = 1 alternative, but it outperformed g vs. L = 2, 3 alternatives
- Siegel (1980) gives a table of critical values t_0.6 for selected m
- for other m's, can use the interpolation formulae t_0.6 = 1.033 m^… and t_0.6 = … m^… for, respectively, α = 0.05 and 0.01
- can also get critical values for large m via T_λ ≈ c χ²_0(β) in distribution, a noncentral chi-square RV with zero DOFs (?!), where c and β are set via moment matching
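Since the t_λ critical values above come from Siegel's tables and interpolation formulae, a sketch can instead approximate t_0.6 by Monte Carlo simulation of Gaussian white noise under the null hypothesis:

```python
import numpy as np

def rescaled_periodogram(x, dt=1.0):
    N = len(x)
    m = (N - 1) // 2
    S = dt * np.abs(np.fft.fft(x - x.mean())) ** 2 / N
    S = S[1:m + 1]
    return S / S.sum(), m

def siegel_T(x, lam=0.6, alpha=0.05, dt=1.0):
    """Siegel's T_lambda statistic with the approximate Fisher g_F threshold."""
    S_tilde, m = rescaled_periodogram(x, dt)
    g_F = 1 - (alpha / m) ** (1 / (m - 1))
    return np.sum(np.maximum(S_tilde - lam * g_F, 0.0))

# Monte Carlo stand-in for Siegel's tabulated critical value t_0.6
rng = np.random.default_rng(11)
N, alpha = 129, 0.05
null_T = np.array([siegel_T(rng.normal(0, 1, N), alpha=alpha) for _ in range(2000)])
t_crit = np.quantile(null_T, 1 - alpha)

t = np.arange(N)
x = (0.4 * np.cos(2 * np.pi * 0.15 * t) + 0.4 * np.cos(2 * np.pi * 0.3 * t)
     + rng.normal(0, 1, N))
print(siegel_T(x, alpha=alpha), t_crit, siegel_T(x, alpha=alpha) > t_crit)
```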

Example Ocean Noise Data: I

Plot: the ocean noise time series (plotted versus index k)

Example Ocean Noise Data: II
- the sample size (N = 128) is even, so just use S̃^(p)(f_k) such that 0 < f_k < f_N, yielding m = 63 such Fourier frequencies
- the following plot shows the rescaled periodogram (circles)
- consider Fisher's g test at α = 0.05:
  - g_F = … is shown by the upper horizontal line
  - since g = max_k S̃^(p)(f_k) = … exceeds g_F, can reject the null hypothesis of white noise
- consider Siegel's test at α = 0.05:
  - the lower horizontal line shows 0.6 g_F
  - T_0.6 = … = the sum of 3 positive excesses
  - since T_0.6 > t_0.6 = …, can again reject the null hypothesis of white noise

Plot: rescaled periodogram for the ocean noise data versus f (Hz)

Example Ocean Noise Data: III
- at α = 0.01, g = … > g_F = …, so reject (barely!)
- T_0.6 = … < t_0.6 = …, so fail to reject (barely!)
- suggests one sinusoid (with a period of 5 seconds) and possibly another (with a period of 3.7 seconds)
- contrast with the cumulative periodogram test

Thomson's F Test for Periodicity: I
- the previous tests assume a null hypothesis of white noise
- given X_0, ..., X_{N−1}, can test the null hypothesis X_t = η_t vs. X_t = D_1 cos(2π f_1 t + φ_1) + η_t via a multitaper-based test statistic (Thomson, 1982), where D_1 and φ_1 are unknown constants; f_1 is known; {η_t} is Gaussian with zero mean and SDF S_η(·), but need not be white noise
- with C_1 ≡ D_1 e^{iφ_1}/2, can write
      E{X_t} = D_1 cos(2π f_1 t + φ_1) = C_1 e^{i2π f_1 t} + C_1* e^{−i2π f_1 t}
- let Ŝ_k^(mt)(·) be the kth eigenspectrum formed using the kth order Slepian taper {h_{k,t}} ⟷ H_k(·)

Thomson's F Test for Periodicity: II
- since X_t = E{X_t} + η_t, consider
      J_k(f) ≡ Σ_{t=0}^{N−1} h_{k,t} X_t e^{−i2πft}   (noting that |J_k(f)|² = Ŝ_k^(mt)(f))
            = Σ_{t=0}^{N−1} h_{k,t} E{X_t} e^{−i2πft} + ε_k,   with ε_k ≡ Σ_{t=0}^{N−1} h_{k,t} η_t e^{−i2πft}
            = C_1 Σ_{t=0}^{N−1} h_{k,t} e^{−i2π(f−f_1)t} + C_1* Σ_{t=0}^{N−1} h_{k,t} e^{−i2π(f+f_1)t} + ε_k
            = C_1 H_k(f − f_1) + C_1* H_k(f + f_1) + ε_k

Thomson's F Test for Periodicity: III
- considering J_k(f) = C_1 H_k(f − f_1) + C_1* H_k(f + f_1) + ε_k at f = f_1, we have, for k = 0, ..., K−1,
      J_k(f_1) = C_1 H_k(0) + C_1* H_k(2f_1) + ε_k,
  where J_k(f_1) is the dependent variable, C_1 the parameter, H_k(0) the independent variable, C_1* H_k(2f_1) ≈ 0 if 2f_1 > W, and ε_k the error
- can argue that the ε_k's
  - have mean 0 & variance σ² ≡ E{|ε_k|²} ≈ S_η(f_1)
  - are approximately uncorrelated and complex Gaussian
- if we drop the C_1* H_k(2f_1) term (should be small if 2f_1 > W),
      J_k(f_1) = C_1 H_k(0) + ε_k
  is a complex-valued regression model
- H_k(0) = Σ_{t=0}^{N−1} h_{k,t} = 0 for odd k; real-valued for all k

Thomson's F Test for Periodicity: IV
- estimate C_1 as the Ĉ_1 minimizing
      SS(Ĉ_1) ≡ Σ_{k=0}^{K−1} |J_k(f_1) − Ĉ_1 H_k(0)|²
- yields the estimator
      Ĉ_1 = Σ_{k=0}^{K−1} J_k(f_1) H_k(0) / Σ_{k=0}^{K−1} H_k²(0) = Σ_{k=0,2,...}^{K−1} J_k(f_1) H_k(0) / Σ_{k=0,2,...}^{K−1} H_k²(0)

Thomson's F Test for Periodicity: V
- complex-valued least squares theory says Ĉ_1 is complex Gaussian with mean C_1 and variance σ² / Σ_{k=0}^{K−1} H_k²(0)
- an estimator of σ² is σ̂² ≡ SS(Ĉ_1)/K
- under the null hypothesis (i.e., C_1 = 0),
      A ≡ 2 |Ĉ_1|² Σ_{k=0}^{K−1} H_k²(0) / σ² ~ χ²_2   and   B ≡ 2K σ̂²/σ² ~ χ²_{2K−2}
- further, A & B are independent, so
      R ≡ (A/2) / (B/(2K−2)) = (K−1) |Ĉ_1|² Σ_{k=0}^{K−1} H_k²(0) / (K σ̂²) ~ F_{2,2K−2}

Thomson's F Test for Periodicity: VI
- for an α level test, reject the null hypothesis if
      R > ν (1 − α^{2/ν}) / (2 α^{2/ν})   with ν ≡ 2K − 2
- the RHS is the (1 − α) × 100% percentage point of the F_{2,2K−2} distribution
- if we reject the null hypothesis, can estimate S_η(f) for f_1 − W ≤ f ≤ f_1 + W using the reshaped SDF estimate
      Ŝ_η(f) = (1/K) Σ_{k=0}^{K−1} |J_k(f) − Ĉ_1 H_k(f − f_1)|²
- at f = f_1, have Ŝ_η(f_1) = SS(Ĉ_1)/K = σ̂²
- the integrated spectrum has steps of |Ĉ_1|² at ±f_1
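A sketch of the test at a single known f_1 using scipy's dpss tapers (Δt = 1 assumed; the NW and K values are illustrative choices, and the function name is my own):

```python
import numpy as np
from scipy.signal.windows import dpss
from scipy.stats import f as f_dist

def thomson_f_test(x, f1, NW=4, K=7, alpha=0.05):
    """Thomson's F test for a sinusoid at known frequency f1 (Delta_t = 1)."""
    N = len(x)
    tapers = dpss(N, NW, Kmax=K)                  # shape (K, N) Slepian tapers
    t = np.arange(N)
    e = np.exp(-1j * 2 * np.pi * f1 * t)
    J = tapers @ (x * e)                          # J_k(f1), k = 0,...,K-1
    H0 = tapers.sum(axis=1)                       # H_k(0); about 0 for odd k
    C1 = np.sum(J * H0) / np.sum(H0 ** 2)         # complex regression estimate
    SS = np.sum(np.abs(J - C1 * H0) ** 2)
    sigma2_hat = SS / K
    R = (K - 1) * np.abs(C1) ** 2 * np.sum(H0 ** 2) / (K * sigma2_hat)
    crit = f_dist.ppf(1 - alpha, 2, 2 * K - 2)    # (1-alpha) point of F_{2,2K-2}
    return R, crit, R > crit

rng = np.random.default_rng(5)
t = np.arange(512)
x = 0.3 * np.cos(2 * np.pi * 0.123 * t + 0.7) + rng.normal(0, 1, 512)
print(thomson_f_test(x, f1=0.123))
```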

Thomson's F Test for Periodicity: VII
- note: a similar argument at f_1 = 0 yields
      µ̂ ≡ Σ_{k=0}^{K−1} J_k(0) H_k(0) / Σ_{k=0}^{K−1} H_k²(0)
  as an estimator of µ = E{X_t}
- for K = 1, µ̂ reduces to µ̃ ≡ Σ_t h_{0,t} X_t / Σ_t h_{0,t}
- can reshape the SDF estimator for |f| ≤ W

Completing a Harmonic Analysis: I
- suppose the tests support the model
      X_t = D_1 cos(2π f̂_1 t Δt + φ_1) + ε_t = A_1 cos(2π f̂_1 t Δt) + B_1 sin(2π f̂_1 t Δt) + ε_t
  - f̂_1 is a frequency estimated in some manner
  - {ε_t} is white noise with mean 0 and variance σ²_ε
  - note: the L > 1 case is handled in an analogous way
- can complete the analysis using one of these:
  1. approximate conditional least squares (i.e., conditional on the estimated value f̂_1)
  2. exact conditional least squares
  3. exact unconditional least squares
  4. conditional frequency domain regression

Approximate Conditional Least Squares
- conditional on the value of f̂_1, can estimate A_1 & B_1 via
      Â_1 = (2/N) Σ_{t=0}^{N−1} X_t cos(2π f̂_1 t Δt)   &   B̂_1 = (2/N) Σ_{t=0}^{N−1} X_t sin(2π f̂_1 t Δt)
- approximate LS (exact only if f̂_1 is a Fourier frequency)
- the integrated spectrum has steps at ±f̂_1 of size (Â_1² + B̂_1²)/4 = Ŝ^(p)(f̂_1)/(N Δt)
- the white noise variance σ²_ε is estimated by σ̂²_ε ≡ SS(Â_1, B̂_1, f̂_1)/(N − 2), where
      SS(Â_1, B̂_1, f̂_1) ≡ Σ_{t=0}^{N−1} R̂_t²
  is based upon the residual process R̂_t ≡ X_t − [Â_1 cos(2π f̂_1 t Δt) + B̂_1 sin(2π f̂_1 t Δt)]

Exact Conditional Least Squares
- conditional on the value of f̂_1, estimate A_1 & B_1 by minimizing SS(A_1, B_1, f̂_1)
- the estimators Ã_1 & B̃_1 are obtained by regressing X_t on cos(2π f̂_1 t Δt) and sin(2π f̂_1 t Δt)
- Ã_1 = Â_1 & B̃_1 = B̂_1 if f̂_1 is a Fourier frequency
- the integrated spectrum has steps at ±f̂_1 of size (Ã_1² + B̃_1²)/4
- the white noise variance σ²_ε is estimated by σ̃²_ε ≡ SS(Ã_1, B̃_1, f̂_1)/(N − 2)
- define the residual process {R̃_t} via R̃_t ≡ X_t − [Ã_1 cos(2π f̂_1 t Δt) + B̃_1 sin(2π f̂_1 t Δt)]

Exact Unconditional Least Squares
- now regard f̂_1 as a preliminary estimate only, and estimate A_1, B_1 & f_1 via the values Ă_1, B̆_1 & f̆_1 that minimize SS(A_1, B_1, f_1), which is now a nonlinear regression
- the integrated spectrum has steps at ±f̆_1 of size (Ă_1² + B̆_1²)/4
- the white noise variance σ²_ε is estimated by σ̆²_ε ≡ SS(Ă_1, B̆_1, f̆_1)/(N − 3)
- define the residual process {R̆_t} via R̆_t ≡ X_t − [Ă_1 cos(2π f̆_1 t Δt) + B̆_1 sin(2π f̆_1 t Δt)]
- Bloomfield (2000) discusses the technique in detail, presenting a cyclic descent algorithm to get Ă_1, B̆_1 & f̆_1 and giving a pathological example where these outperform the conditional LS estimators
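A sketch of this option using scipy.optimize.least_squares, starting the nonlinear fit from the approximate conditional LS values at a preliminary f̂_1; the model here has no µ term, i.e., the series is assumed to already be centered (my own illustration, not Bloomfield's cyclic descent algorithm).

```python
import numpy as np
from scipy.optimize import least_squares

def exact_unconditional_ls(x, f1_init, dt=1.0):
    """Jointly fit A1, B1 and f1 by minimizing SS(A1, B1, f1)."""
    t = np.arange(len(x))

    def resid(params):
        A1, B1, f1 = params
        return x - (A1 * np.cos(2 * np.pi * f1 * t * dt)
                    + B1 * np.sin(2 * np.pi * f1 * t * dt))

    # start from the approximate conditional LS estimates at f1_init
    A0 = (2 / len(x)) * np.sum(x * np.cos(2 * np.pi * f1_init * t * dt))
    B0 = (2 / len(x)) * np.sum(x * np.sin(2 * np.pi * f1_init * t * dt))
    fit = least_squares(resid, x0=[A0, B0, f1_init])
    A1, B1, f1 = fit.x
    sigma2 = np.sum(fit.fun ** 2) / (len(x) - 3)   # SS at the minimum / (N - 3)
    return A1, B1, f1, sigma2

rng = np.random.default_rng(9)
t = np.arange(500)
x = 1.3 * np.cos(2 * np.pi * 0.1237 * t + 0.4) + rng.normal(0, 1, 500)
print(exact_unconditional_ls(x, f1_init=0.124))
```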

Frequency Domain Regression
- conditional on f̂_1, use C_1 ≡ D_1 e^{iφ_1}/2 to write
      X_t = D_1 cos(2π f̂_1 t Δt + φ_1) + ε_t = C_1 e^{i2π f̂_1 t Δt} + C_1* e^{−i2π f̂_1 t Δt} + ε_t
- estimate C_1 using the multitaper approach:
      Ĉ_1 = Δt^{1/2} Σ_{k=0,2,...}^{K−1} J_k(f̂_1) H_k(0) / Σ_{k=0,2,...}^{K−1} H_k²(0)
- the integrated spectrum has steps at ±f̂_1 of size |Ĉ_1|²
- the white noise variance σ²_ε is estimated by
      σ̂²_(mt) = (1/(N−2)) Σ_{t=0}^{N−1} (X_t − [Ĉ_1 e^{i2π f̂_1 t Δt} + Ĉ_1* e^{−i2π f̂_1 t Δt}])²
- define the residual process {R̂_t^(mt)} via R̂_t^(mt) ≡ X_t − [Ĉ_1 e^{i2π f̂_1 t Δt} + Ĉ_1* e^{−i2π f̂_1 t Δt}]

Completing a Harmonic Analysis: II
- the estimated SDF is given by
      Š(f) ≡ (Ď_1²/4) [δ(f − f̌_1) + δ(f + f̌_1)] + σ̌²_ε Δt,
  where (Ď_1², f̌_1, σ̌²_ε) is either
  - (Â_1² + B̂_1², f̂_1, σ̂²_ε),
  - (Ã_1² + B̃_1², f̂_1, σ̃²_ε),
  - (Ă_1² + B̆_1², f̆_1, σ̆²_ε), or
  - (4|Ĉ_1|², f̂_1, σ̂²_(mt))
- check the assumptions by examining the residuals
- these should be close to white noise, but ... they will have a deficiency of power at f̌_1

Completing a Harmonic Analysis: III
- if the background is not white, use the model
      X_t = A_1 cos(2π f̂_1 t Δt) + B_1 sin(2π f̂_1 t Δt) + η_t
- as before, f̂_1 is a frequency estimated in some manner
- {η_t} has mean 0 and SDF S_η(·)
- can complete the harmonic part of the analysis as before:
  1. approximate conditional least squares
  2. exact conditional least squares
  3. exact unconditional least squares
  4. conditional frequency domain regression
- can estimate S_η(·) using either the residuals (but the estimate will be biased around f̂_1) or the reshaped SDF if #4 is used

River Flow Data: I

Plot: log(flow) versus time (years)

- the plot shows the log of the average monthly water flow in the Willamette River at Salem, Oregon, from October 1950 to August 1983; N = 395 and Δt = 1/12 year, so f_N = 6 cycles per year
- (looked at a part of this series in Chapter 1)

River Flow Data: II
- will consider a model of "sinusoids + white noise":
      X_t = Σ_{l=1}^L D_l cos(2π f_l t Δt + φ_l) + ε_t
- the following plot shows the periodogram Ŝ^(p)(·) of the X_t's
- padded the X_t's with 1024 − 395 = 629 zeros
- yields a grid > 2 times as fine as the Fourier frequencies
- Q: why is Ŝ^(p)(·) largest near zero frequency?
- A: cannot assume µ ≡ E{X_t} = 0 here!
- letting X′_t ≡ X_t − X̄ with X̄ = 9.83, rewrite the model as
      X′_t = Σ_{l=1}^L D_l cos(2π f_l t Δt + φ_l) + ε_t

Plot: periodogram (dB) Ŝ^(p)(f̃_k) versus f̃_k = k/(2N Δt) for the river flow data (f in cycles/year)

River Flow Data: III
- the following plot shows the periodogram for the {X′_t}'s
- again, padded the X′_t's with 629 zeros
- the prominent low frequency component that was in Ŝ^(p)(·) for {X_t} is now gone
- the largest value is now at f = … cycles/year (within 0.012 of 1 cycle/year, so close to an annual periodicity)
- 2nd peak at f = … cycles/year

Plot: periodogram (dB) Ŝ^(p)(f̃_k) versus f̃_k = k/(2N Δt) for the centered data {X′_t} (f in cycles/year)

River Flow Data: IV
- to explain the 2nd peak, suppose g(·) has an annual period (T = 1):
      g(t) = a_0 + Σ_{l=1}^∞ [a_l cos(2πlt) + b_l sin(2πlt)]   (cf. Equation (52a)),
  where t is a continuous variable (measured in years)
- with f′_l ≡ l cycles/year, can rewrite the above as
      g(t) = a_0 + Σ_{l=1}^∞ [a_l cos(2π f′_l t) + b_l sin(2π f′_l t)]
- f′_1 = 1 cycle/year is called the fundamental frequency
- for l > 1, f′_l = l f′_1 is called the (l−1)th harmonic
- thus f′_2 = 2 f′_1 = 2 cycles/year is the first harmonic

River Flow Data: V
- if we sample g(t) = a_0 + Σ_{l=1}^∞ [a_l cos(2π f′_l t) + b_l sin(2π f′_l t)] at Δt = 1/12, so f_N = 6 cycles/year, we get (due to aliasing)
      g_t ≡ g(t Δt) = µ + Σ_{l=1}^6 [A′_l cos(2π f′_l t Δt) + B′_l sin(2π f′_l t Δt)]
- conclusion: a sequence {g_t} with a period of 12 can be written as a constant + fundamental + 5 harmonics, so the peak at 2 cycles per year says the annual periodicity is not described by a single sinusoid
- the following plot of a Slepian NW = 4/Δt Ŝ^(d)(·) says Ŝ^(p)(·) is free of leakage (thus the higher harmonics are not being masked by leakage)
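A quick numerical check of the harmonic-decomposition claim above (my own illustration): the DFT of any monthly sequence with period 12 has power only at 0, 1, ..., 6 cycles/year.

```python
import numpy as np

# any period-12 monthly sequence (Delta_t = 1/12 year) is exactly a constant
# plus sinusoids at 1, 2, ..., 6 cycles/year: verify via the DFT
rng = np.random.default_rng(2)
one_year = rng.normal(size=12)              # arbitrary "annual pattern"
g = np.tile(one_year, 10)                   # ten years of monthly values
G = np.fft.rfft(g) / len(g)
freqs = np.fft.rfftfreq(len(g), d=1 / 12)   # frequencies in cycles/year
print(freqs[np.abs(G) > 1e-8 * np.abs(G).max()])   # prints 0, 1, 2, 3, 4, 5, 6
```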

Plot: Ŝ^(d)(f̃_k) versus f̃_k = k/(2N Δt) for the centered data {X′_t} (dB, f in cycles/year)

Plot: Ŝ^(p)(f̃_k) versus f̃_k = k/(2N Δt) for the centered data {X′_t} (dB, f in cycles/year)

Plot: Ŝ^(d)(f̃_k) & Ŝ^(p)(f̃_k) versus f̃_k = k/(2N Δt) for {X′_t} (dB, f in cycles/year)

River Flow Data: VI
- Q: is the L = 1 term model adequate?
- the following plot shows the fitted model using f̂_1 = … and the corresponding residuals R_t^(1)
- f̂_1 determined from the periodogram
- a physical argument would suggest use of f_1 = 1
- approximate conditional LS method shown here
- σ̂²_ε = 0.22 (compare to σ̂²_{X′} = 0.62)
- two outliers?
  - the 1st is associated with a time shift in a minimum
  - the 2nd is associated with a missing peak (drought?)

Plot: L = 1 term fitted model and residuals (versus time in years)

River Flow Data: VII
- if the model is adequate, the R_t^(1)'s should be close to white
- the following plot shows the periodogram for {R_t^(1)}
- comparison with the periodogram for {X′_t} shows elimination of the component at 1 cycle per year, with the remainder of the spectrum largely the same
- peak at 2 cycles/year, but is it significant?
- Fisher's g = … exceeds the α = 0.01 critical level 0.049, so reject the white noise hypothesis
- warning: the background continuum is not flat, so use g with caution here

Plot: Ŝ^(p)(f̃_k) versus f̃_k = k/(2N Δt) for the residuals {R_t^(1)} (dB, f in cycles/year)

Plot: Ŝ^(p)(f̃_k) versus f̃_k = k/(2N Δt) for the centered data {X′_t} (dB, f in cycles/year)

Plot: Ŝ^(p)(f̃_k) versus f̃_k = k/(2N Δt) for {X′_t} and {R_t^(1)} (dB, f in cycles/year)

River Flow Data: VIII
- now consider an L = 2 model by adding a term with f̂_2 = …
- the following plot shows the fitted model and the residuals {R_t^(2)}
- approximate conditional LS method shown here
- little visual difference from the L = 1 term model!
- σ̂²_ε = 0.20 (a 10% decrease over the L = 1 model)
- the periodogram for {R_t^(2)} looks featureless, but not particularly flat, indicating that the white noise assumption for {ε_t} might be questionable
- a cumulative periodogram test on {R_t^(2)} confirms our suspicions: can reject the white noise hypothesis at level of significance α = 0.05

Plot: L = 2 term fitted model and residuals (versus time in years)

Plot: L = 1 term fitted model and residuals (versus time in years)

Plot: Ŝ^(p)(f̃_k) versus f̃_k = k/(2N Δt) for the residuals {R_t^(2)} (dB, f in cycles/year)

Plot: Ŝ^(p)(f̃_k) versus f̃_k = k/(2N Δt) for the residuals {R_t^(1)} (dB, f in cycles/year)

Plot: Ŝ^(p)(f̃_k) versus f̃_k = k/(2N Δt) for {R_t^(1)} and {R_t^(2)} (dB, f in cycles/year)

Plot: cumulative periodogram white noise test for {R_t^(2)} (versus f)

River Flow Data: IX
- will now consider the model
      X′_t = Σ_{l=1}^L D_l cos(2π f_l t Δt + φ_l) + η_t,
  where {η_t} is stationary with zero mean and SDF S_η(·)
- know L = 2 is adequate, but let's exercise Thomson's F-test
- the following plot shows the F-test based on NW = 4/Δt and K = 5
- 2K − 2 = 8, so compare to F_{2,8} percentage points
- upper & lower dashed lines are the α = 0.05 & 0.01 points
- rule of thumb: use α = 1/N ≈ 0.0025, yielding a critical level of 13.9 (dotted line)
- the test picks out 1 & 2 cycles/year as significant

Plot: Thomson's F-test for the river flow data (F-test value versus f in cycles/year)

River Flow Data: X
- the complex-valued regression yields a fit quite similar to approximate conditional LS
- the following plot shows the multitaper and reshaped SDF estimates
- the horizontal line shows 2W = 0.24 cycles/year
- the SDF is reshaped in bands about 1 and 2 cycles/year
- final SDF: δ functions at 1 and 2 cycles/year, with the continuum estimated by a smoothed reshaped SDF

Plot: multitaper and reshaped SDF estimates for the river flow data (dB, f in cycles/year)

Parametric Harmonic Analysis: I
- AR parametric SDF estimation is defined for purely continuous spectra but has been applied to mixed spectra
- rationale: can write a deterministic real sinusoid as
      x_t = D cos(2π f t Δt + φ) = 2 cos(2π f Δt) x_{t−1} − x_{t−2}
  with initial conditions x_0 = D cos(φ) and x_1 = D cos(2π f Δt + φ)
- now suppose φ is uniformly distributed over [−π, π]:
      X_t = D cos(2π f t Δt + φ),
  i.e., a harmonic process with a purely discrete spectrum

Parametric Harmonic Analysis: II
- can rewrite X_t = D cos(2π f t Δt + φ) as
      X_t = ϕ_{2,1} X_{t−1} + ϕ_{2,2} X_{t−2}   with ϕ_{2,1} ≡ 2 cos(2π f Δt) and ϕ_{2,2} = −1
- can regard this as a pseudo-AR(2) process with the ε_t term set to zero
- the Levinson-Durbin recursions say σ²_2 = (1 − ϕ²_{2,2}) σ²_1 = 0
- the roots of 1 − ϕ_{2,1} z^{−1} − ϕ_{2,2} z^{−2} = 0 are exp(±i2π f Δt), which lie on the unit circle
- the frequency is related to the argument of the roots
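A small check of this root property (my own illustration, with Δt = 1 and an arbitrary f):

```python
import numpy as np

# pseudo-AR(2) representation of a single sinusoid: both roots lie on the unit circle
f, dt = 0.123, 1.0
phi_21, phi_22 = 2 * np.cos(2 * np.pi * f * dt), -1.0
# 1 - phi_21 z^{-1} - phi_22 z^{-2} = 0  is equivalent to  z^2 - phi_21 z - phi_22 = 0
roots = np.roots([1.0, -phi_21, -phi_22])
print(np.abs(roots))                          # both magnitudes equal to 1
print(np.angle(roots) / (2 * np.pi * dt))     # +/- f
```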

Parametric Harmonic Analysis: III
- the result generalizes: the harmonic process
      X_t = Σ_{l=1}^p D_l cos(2π f_l t Δt + φ_l)
  can be expressed as the pseudo-AR(2p) process
      X_t = Σ_{k=1}^{2p} ϕ_{2p,k} X_{t−k},
  with the roots {z_j} of the polynomial equation
      1 − ϕ_{2p,1} z^{−1} − ϕ_{2p,2} z^{−2} − ⋯ − ϕ_{2p,2p} z^{−2p} = 0
  all on the unit circle: z_j = exp(±i2π f_j Δt), j = 1, ..., p

Parametric Harmonic Analysis: IV
- if {α_t} is white noise with 0 mean, then
      X̃_t ≡ X_t + α_t = Σ_{k=1}^{2p} ϕ_{2p,k} X_{t−k} + α_t
  is a stationary process with a discrete spectrum
- substitution of X_{t−k} = X̃_{t−k} − α_{t−k} on the RHS yields
      X̃_t − Σ_{k=1}^{2p} ϕ_{2p,k} X̃_{t−k} = α_t − Σ_{k=1}^{2p} ϕ_{2p,k} α_{t−k},
  i.e., in form, an ARMA(2p, 2p) process
- note that the AR and MA coefficients are identical
- Pisarenko (1973) discusses estimation of the ϕ_{2p,k}'s

Parametric Harmonic Analysis: V
- Q: do the usual AR estimators work OK with harmonic processes?
- consider L = 1 with f = 1/6 (and Δt = 1), so x_t = x_{t−1} − x_{t−2}
- set the initial conditions to be x_0 = 0 and x_1 = 1
- given {x_1, x_2, x_3, x_4} = {1, 1, 0, −1}, get
  - ϕ̂_{2,1} = 1 & ϕ̂_{2,2} = −1 using Burg and F/B LS, with σ̂² = 0 (reasonable: ε_t = 0 for all t)
  - ϕ̂_{2,1} = 1/2 and ϕ̂_{2,2} = −1/2 using Yule-Walker

Parametric Harmonic Analysis: VI
- reconsider the 50 realizations of
      X_t = cos(2π f_1 t Δt + π/4) + ε_t,   t = 0, ..., 100,
  where {ε_t} is zero mean Gaussian white noise with variance σ²_ε = 0.0001 (yields an SNR of 37 dB); f_1 = 7.25 Hz; Δt = 0.01 seconds (yields f_N = 50 Hz)
- the following plot shows the peak frequency from Ŝ^(d)(·) with an NW = 2 Slepian taper versus the peak frequency from the F/B LS AR(24) SDF estimate
- the variability is less for the F/B LS estimator
- note: Burg performs very poorly here!

Plot: peak frequencies (Hz) from Ŝ^(d)(·) versus peak frequencies from the AR SDF estimate

Plot: five F/B least squares AR(24) SDF estimates (dB, versus f in Hz)

Plot: five Burg AR(24) SDF estimates (dB, versus f in Hz)

Problems with Parametric Approach
- two problems with AR-based harmonic analysis:
  - peak locations can depend on the phase of the sinusoid
  - spontaneous line splitting (Yule-Walker & Burg)
- these problems were not recognized in the early maximum entropy (ME) literature
- consider X_t = D cos(2πt/T − π/2), which has period T
- suppose N ≈ 0.58 T with N large; i.e., lots of data, but covering less than a full period
- Toman (1965) & Jackson (1967): the peak of Ŝ^(p)(·) is at f = 0
- Ulrych (1972a, b) claimed Burg prevents this "spectral shift"
- subsequent research is not so optimistic: Table 600 summarizes the relevant literature
- F/B LS is evidently free from line splitting

River Flow Data: XI

Plot: FPE(p) versus p (order of AR model)

- fit AR models of orders p = 1, ..., 150 using Burg; the above plot shows FPE(p) versus p, which is minimized at p = 38, but note that FPE(27) is quite close to FPE(38)

Plot: Burg AR(150) SDF estimate of the river flow data (dB, f in cycles/year)

Plot: Burg AR(38) SDF estimate of the river flow data (dB, f in cycles/year)

Plot: Burg AR(27) SDF estimate of the river flow data (dB, f in cycles/year)

River Flow Data: XII
- for the AR SDFs and Ŝ^(p)(·), Table XII-91 lists the estimated f̂_1 and f̂_2, the peak values, and the peak widths (3 dB down point)
- the f̂_2 estimate for AR(27) is somewhat off (implies that too small a p can lead to a loss of accuracy)
- the widths of the AR peaks decrease as p increases
- the AR f̂_1 peak widths are a factor of 2 to 6 smaller than the Ŝ^(p)(·) width
- only the AR(150) f̂_2 peak width is smaller than the Ŝ^(p)(·) width
- if f = 1 and 2 are regarded as truth, evidently a narrow peak does not imply better accuracy

Table XII-91

              f̂_1    3 dB width    Ŝ(f̂_1) (dB)    f̂_2    3 dB width    Ŝ(f̂_2) (dB)
  p = 27      …       …             …               …       …             …
  p = 38      …       …             …               …       …             …
  p = 150     …       …             …               …       …             …
  Ŝ^(p)(·)    …       …             …               …       …             …

River Flow Data: XIII
- the widths of the periodogram peaks are very similar (dictated by the central lobe of Fejér's kernel)
- the widths of the AR peaks at f̂_1 and f̂_2 for a given p differ by at least a factor of 2
- AR peak widths are not related to anything like Fejér's kernel
- peaks in the AR(27) SDF are about 10 dB below the peaks in AR(150)
- cannot relate peak heights to amplitude; for Ŝ^(p)(·), have E{Ŝ^(p)(f_l)} = σ²_ε Δt + N Δt D_l²/4
- AR(27) & AR(38) are considerably smoother looking than AR(150) and Ŝ^(p)(·)

River Flow Data: XIV

Plot: integrated spectrum based on the AR(27) estimate (versus f)

- the above illustrates the difficulty in determining how much of var{X_t} to assign to the sinusoids from the integrated AR(27) spectrum (Burg, 1985); for comparison, the amount determined by the Ŝ^(p)(·)-based estimate is also indicated

Use of SVD in Harmonic Analysis: I
- have argued that we can write
      X_t = Σ_{l=1}^p D_l cos(2π f_l t Δt + φ_l)   as   X_t = Σ_{k=1}^{2p} ϕ_{2p,k} X_{t−k}
- similarly can argue that we can write
      Z_t ≡ Σ_{l=1}^p D′_l e^{i(2π f_l t Δt + φ_l)}   as   Z_t = Σ_{k=1}^p ϕ_{p,k} Z_{t−k},
  where {Z_t} is a complex-valued harmonic process with a purely discrete spectrum (the ϕ_{p,k} are in general complex-valued)
- in practice Z̃_t ≡ Z_t + ε_t is of interest, where {ε_t} is complex-valued white noise (this can arise via complex demodulation combined with low-pass filtering or via a Hilbert transform)

Use of SVD in Harmonic Analysis: II
- given Z̃_0, ..., Z̃_{N−1}, want to determine the f_l's
- the f_l's can be determined from the polynomial equation
      z^p − Σ_{k=1}^p ϕ_{p,k} z^{p−k} = 0
  because the roots z_j of this equation are of the form z_j ≡ e^{i2π f_j Δt}, j = 1, ..., p
- note: the roots all lie on the unit circle (i.e., |z_j| = 1)

Use of SVD in Harmonic Analysis: III
- Tufts & Kumaresan (1982) proposed the following:
  - choose p′ such that p′ > p
  - estimate the ϕ_{p′,k}'s via forward/backward least squares (will need the complex-valued version of F/B LS)
  - form the polynomial equation with the estimated ϕ_{p′,k}'s
  - find the roots ẑ_j of the equation close to the unit circle (if all p′ are such, p′ is not set high enough)
  - f_j is estimated by arg(ẑ_j)/(2π Δt) (i.e., use the polar representation ẑ_j = |ẑ_j| e^{i arg(ẑ_j)})

Use of SVD in Harmonic Analysis: IV
- let ϕ be the vector containing the ϕ_{p′,k}'s
- given Z̃_0, ..., Z̃_{N−1}, the F/B LS estimator of ϕ minimizes
      Σ_{t=p′}^{N−1} |Z̃_t − Σ_{k=1}^{p′} ϕ_{p′,k} Z̃_{t−k}|² + Σ_{t=0}^{N−p′−1} |Z̃_t* − Σ_{k=1}^{p′} ϕ_{p′,k} Z̃_{t+k}*|²
  (need complex conjugates in the backward predictions)

Use of SVD in Harmonic Analysis: V
- leads to a complex-valued least squares problem Z = Aϕ + ε, where ε is a vector of 2(N − p′) error terms,
      Z = [Z̃_{p′}, ..., Z̃_{N−1}, Z̃_0*, ..., Z̃_{N−p′−1}*]^T,
  and A is the 2(N − p′) × p′ matrix whose first N − p′ rows are
      [Z̃_{p′−1}, Z̃_{p′−2}, ..., Z̃_0], ..., [Z̃_{N−2}, Z̃_{N−3}, ..., Z̃_{N−p′−1}]
  and whose last N − p′ rows are
      [Z̃_1*, Z̃_2*, ..., Z̃_{p′}*], ..., [Z̃_{N−p′}*, Z̃_{N−p′+1}*, ..., Z̃_{N−1}*]

Use of SVD in Harmonic Analysis: VI
- the least squares estimator satisfies the normal equations A^H A ϕ = A^H Z
- here ^H denotes Hermitian transpose
- note that A^H A is p′ × p′
- the SVD approach arises by considering the noise-free case
- if N is large enough, the rank of A^H A is p:
  - Z_t is a sum of p complex exponentials if ε_t = 0
  - the last p′ − p columns of A are then linear combinations of the first p columns
  - by assumption p < p′, so A^H A is not of full rank

Use of SVD in Harmonic Analysis: VII
- can find a solution to A^H A ϕ = A^H Z via the singular value decomposition (SVD)
      A = U Λ V^H, where
  - U is a 2(N − p′) × p matrix with U^H U = I_p
  - Λ is a p × p diagonal matrix (diagonal entries λ_j ≠ 0)
  - V is a p′ × p matrix with V^H V = I_p
- leads to the notion of a generalized inverse for A^H A:
      (A^H A)^# ≡ V Λ^{−2} V^H

Use of SVD in Harmonic Analysis: VIII
- the generalized inverse always exists; in the full rank case (A^H A)^# = (A^H A)^{−1}
- can solve A^H A ϕ = A^H Z using ϕ̄ ≡ (A^H A)^# A^H Z
- to see that ϕ̄ is a solution, note that
      A^H A ϕ̄ = (UΛV^H)^H (UΛV^H) ϕ̄ = V Λ U^H U Λ V^H ϕ̄ = V Λ² V^H ϕ̄
               = V Λ² V^H (A^H A)^# A^H Z = V Λ² V^H V Λ^{−2} V^H A^H Z = V V^H A^H Z
               = V V^H V Λ U^H Z = V Λ U^H Z = A^H Z,
  as required (recall U^H U = I_p & V^H V = I_p)

Use of SVD in Harmonic Analysis: IX
- when ε_t ≠ 0, A generally has rank p′ (i.e., the columns of A are no longer linearly dependent)
- the SVD now yields A^H A = V Λ² V^H, where now
  - U is a 2(N − p′) × p′ matrix with U^H U = I_{p′}
  - Λ is a p′ × p′ diagonal matrix (diagonals λ_j ≠ 0)
  - V is a p′ × p′ matrix with V^H V = I_{p′}
- if the signal-to-noise ratio is large, should have
  - p large λ_j²'s (the "signal subspace")
  - p′ − p small λ_j²'s (the "noise subspace")

Use of SVD in Harmonic Analysis: X
- the SVD can thus help identify p
- use the SVD to compute the F/B LS solution ϕ̄
- using ϕ̄, form the polynomial and factor it
- should see p roots close to the unit circle
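A sketch of the whole procedure under the stated assumptions (complex-valued data, p known for the final root selection, rank-p truncated SVD used for the F/B LS solution); the function and variable names here are my own:

```python
import numpy as np

def tufts_kumaresan(z, p, p_prime):
    """Tufts-Kumaresan style frequency estimation: complex forward/backward
    linear prediction solved via a rank-p truncated SVD, then polynomial rooting."""
    N = len(z)
    A_rows, b = [], []
    for t in range(p_prime, N):                          # forward predictions
        A_rows.append(z[t - 1::-1][:p_prime])            # [z_{t-1}, ..., z_{t-p'}]
        b.append(z[t])
    for t in range(N - p_prime):                         # backward predictions
        A_rows.append(np.conj(z[t + 1:t + p_prime + 1])) # [z*_{t+1}, ..., z*_{t+p'}]
        b.append(np.conj(z[t]))
    A, b = np.array(A_rows), np.array(b)

    # rank-p truncated SVD gives the "signal subspace" least squares solution
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    phi = Vh[:p].conj().T @ ((U[:, :p].conj().T @ b) / s[:p])

    # roots of z^{p'} - phi_1 z^{p'-1} - ... - phi_{p'} = 0; keep the p nearest |z| = 1
    roots = np.roots(np.concatenate(([1.0], -phi)))
    closest = roots[np.argsort(np.abs(np.abs(roots) - 1))[:p]]
    return np.sort(np.angle(closest) / (2 * np.pi))      # frequencies (Delta_t = 1)

# two complex exponentials in weak complex white noise
rng = np.random.default_rng(4)
t = np.arange(64)
z = (np.exp(1j * 2 * np.pi * 0.12 * t) + np.exp(1j * 2 * np.pi * 0.2 * t)
     + 0.01 * (rng.normal(size=64) + 1j * rng.normal(size=64)))
print(tufts_kumaresan(z, p=2, p_prime=12))   # approximately [0.12, 0.2]
```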

River Flow Data: XV
- the river flow data is real-valued
- can create a series amenable to the SVD approach via Z_t = X_t + i HT{X_t}, where HT{X_t} is the Hilbert transform of {X_t}: formed by filtering {X_t} with {g_l} having
      G(f) = −i for 0 ≤ f < 1/2   and   G(f) = i for −1/2 ≤ f < 0
- {Z_t} is a complex-valued stationary process with
      S_Z(f) = 4 S_X(f) for 0 ≤ f ≤ 1/2   and   S_Z(f) = 0 for −1/2 ≤ f < 0

River Flow Data: XVI
- the impulse response sequence for G(·) is
      g_t = 2/(πt) for odd t   and   g_t = 0 for even t
- in practice, must use an approximate {g_l} (a 51-term approximation is used for this example)
- leads to a series Z_t of sample size N_Z = 200 with the spectral content of the real-valued X_t (N = 395)
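A sketch of the analytic-signal construction using scipy.signal.hilbert (which implements the Hilbert transform with an FFT rather than the 51-term filter mentioned above); the surrogate series here is hypothetical, since the river flow data themselves are not reproduced in these overheads.

```python
import numpy as np
from scipy.signal import hilbert

# analytic-signal version of a real series: Z_t = X_t + i * HT{X_t}
rng = np.random.default_rng(8)
t = np.arange(395)
x = (np.cos(2 * np.pi * t / 12) + 0.5 * np.cos(2 * np.pi * t / 6 + 1.0)
     + 0.3 * rng.normal(size=395))
z = hilbert(x - x.mean())                  # X_t + i HT{X_t}, built via the FFT

# one-sided spectrum check: S_Z(f) = 4 S_X(f) for f > 0 and = 0 for f < 0
Sz = np.abs(np.fft.fft(z)) ** 2 / len(z)
Sx = np.abs(np.fft.fft(x - x.mean())) ** 2 / len(x)
k = np.fft.fftfreq(len(z))
print(Sz[k > 0].sum() / Sx[k > 0].sum())   # about 4
print(Sz[k < 0].sum() / Sz[k > 0].sum())   # about 0
```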

Plot: Hilbert-transformed river flow data (real and imaginary parts versus time in years)

River Flow Data: XVII
- with p′ = 25, obtain the results shown in the following plot
- two roots are quite close to the unit circle
- their frequencies are … and … cycles/year (these agree well with the previous analyses)

Plot: roots in the SVD analysis of the river flow data (polar plot with 0°, 90°, 180° and 270° marked; the two roots near the unit circle are shown as asterisks)


More information

DOA Estimation using MUSIC and Root MUSIC Methods

DOA Estimation using MUSIC and Root MUSIC Methods DOA Estimation using MUSIC and Root MUSIC Methods EE602 Statistical signal Processing 4/13/2009 Presented By: Chhavipreet Singh(Y515) Siddharth Sahoo(Y5827447) 2 Table of Contents 1 Introduction... 3 2

More information

Introduction to the Mathematics of Medical Imaging

Introduction to the Mathematics of Medical Imaging Introduction to the Mathematics of Medical Imaging Second Edition Charles L. Epstein University of Pennsylvania Philadelphia, Pennsylvania EiaJTL Society for Industrial and Applied Mathematics Philadelphia

More information

DETECTION theory deals primarily with techniques for

DETECTION theory deals primarily with techniques for ADVANCED SIGNAL PROCESSING SE Optimum Detection of Deterministic and Random Signals Stefan Tertinek Graz University of Technology turtle@sbox.tugraz.at Abstract This paper introduces various methods for

More information

Statistics 349(02) Review Questions

Statistics 349(02) Review Questions Statistics 349(0) Review Questions I. Suppose that for N = 80 observations on the time series { : t T} the following statistics were calculated: _ x = 10.54 C(0) = 4.99 In addition the sample autocorrelation

More information

EE 435. Lecture 32. Spectral Performance Windowing

EE 435. Lecture 32. Spectral Performance Windowing EE 435 Lecture 32 Spectral Performance Windowing . Review from last lecture. Distortion Analysis T 0 T S THEOREM?: If N P is an integer and x(t) is band limited to f MAX, then 2 Am Χ mnp 1 0 m h N and

More information

DATA IN SERIES AND TIME I. Several different techniques depending on data and what one wants to do

DATA IN SERIES AND TIME I. Several different techniques depending on data and what one wants to do DATA IN SERIES AND TIME I Several different techniques depending on data and what one wants to do Data can be a series of events scaled to time or not scaled to time (scaled to space or just occurrence)

More information

Subspace-based Identification

Subspace-based Identification of Infinite-dimensional Multivariable Systems from Frequency-response Data Department of Electrical and Electronics Engineering Anadolu University, Eskişehir, Turkey October 12, 2008 Outline 1 2 3 4 Noise-free

More information

Time Series Analysis -- An Introduction -- AMS 586

Time Series Analysis -- An Introduction -- AMS 586 Time Series Analysis -- An Introduction -- AMS 586 1 Objectives of time series analysis Data description Data interpretation Modeling Control Prediction & Forecasting 2 Time-Series Data Numerical data

More information

Least Square Es?ma?on, Filtering, and Predic?on: ECE 5/639 Sta?s?cal Signal Processing II: Linear Es?ma?on

Least Square Es?ma?on, Filtering, and Predic?on: ECE 5/639 Sta?s?cal Signal Processing II: Linear Es?ma?on Least Square Es?ma?on, Filtering, and Predic?on: Sta?s?cal Signal Processing II: Linear Es?ma?on Eric Wan, Ph.D. Fall 2015 1 Mo?va?ons If the second-order sta?s?cs are known, the op?mum es?mator is given

More information

Nonparametric Function Estimation with Infinite-Order Kernels

Nonparametric Function Estimation with Infinite-Order Kernels Nonparametric Function Estimation with Infinite-Order Kernels Arthur Berg Department of Statistics, University of Florida March 15, 2008 Kernel Density Estimation (IID Case) Let X 1,..., X n iid density

More information

Fourier Analysis Linear transformations and lters. 3. Fourier Analysis. Alex Sheremet. April 11, 2007

Fourier Analysis Linear transformations and lters. 3. Fourier Analysis. Alex Sheremet. April 11, 2007 Stochastic processes review 3. Data Analysis Techniques in Oceanography OCP668 April, 27 Stochastic processes review Denition Fixed ζ = ζ : Function X (t) = X (t, ζ). Fixed t = t: Random Variable X (ζ)

More information

Parametric Method Based PSD Estimation using Gaussian Window

Parametric Method Based PSD Estimation using Gaussian Window International Journal of Engineering Trends and Technology (IJETT) Volume 29 Number 1 - November 215 Parametric Method Based PSD Estimation using Gaussian Window Pragati Sheel 1, Dr. Rajesh Mehra 2, Preeti

More information

Problem Sheet 1 Examples of Random Processes

Problem Sheet 1 Examples of Random Processes RANDOM'PROCESSES'AND'TIME'SERIES'ANALYSIS.'PART'II:'RANDOM'PROCESSES' '''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''Problem'Sheets' Problem Sheet 1 Examples of Random Processes 1. Give

More information

Digital Signal Processing

Digital Signal Processing Digital Signal Processing Introduction Moslem Amiri, Václav Přenosil Embedded Systems Laboratory Faculty of Informatics, Masaryk University Brno, Czech Republic amiri@mail.muni.cz prenosil@fi.muni.cz February

More information

p(z)

p(z) Chapter Statistics. Introduction This lecture is a quick review of basic statistical concepts; probabilities, mean, variance, covariance, correlation, linear regression, probability density functions and

More information

Defining the Discrete Wavelet Transform (DWT)

Defining the Discrete Wavelet Transform (DWT) Defining the Discrete Wavelet Transform (DWT) can formulate DWT via elegant pyramid algorithm defines W for non-haar wavelets (consistent with Haar) computes W = WX using O(N) multiplications brute force

More information

Part III Spectrum Estimation

Part III Spectrum Estimation ECE79-4 Part III Part III Spectrum Estimation 3. Parametric Methods for Spectral Estimation Electrical & Computer Engineering North Carolina State University Acnowledgment: ECE79-4 slides were adapted

More information

1 Random walks and data

1 Random walks and data Inference, Models and Simulation for Complex Systems CSCI 7-1 Lecture 7 15 September 11 Prof. Aaron Clauset 1 Random walks and data Supposeyou have some time-series data x 1,x,x 3,...,x T and you want

More information

This model of the conditional expectation is linear in the parameters. A more practical and relaxed attitude towards linear regression is to say that

This model of the conditional expectation is linear in the parameters. A more practical and relaxed attitude towards linear regression is to say that Linear Regression For (X, Y ) a pair of random variables with values in R p R we assume that E(Y X) = β 0 + with β R p+1. p X j β j = (1, X T )β j=1 This model of the conditional expectation is linear

More information

OSE801 Engineering System Identification. Lecture 09: Computing Impulse and Frequency Response Functions

OSE801 Engineering System Identification. Lecture 09: Computing Impulse and Frequency Response Functions OSE801 Engineering System Identification Lecture 09: Computing Impulse and Frequency Response Functions 1 Extracting Impulse and Frequency Response Functions In the preceding sections, signal processing

More information

The Discrete Fourier Transform (DFT) Properties of the DFT DFT-Specic Properties Power spectrum estimate. Alex Sheremet.

The Discrete Fourier Transform (DFT) Properties of the DFT DFT-Specic Properties Power spectrum estimate. Alex Sheremet. 4. April 2, 27 -order sequences Measurements produce sequences of numbers Measurement purpose: characterize a stochastic process. Example: Process: water surface elevation as a function of time Parameters:

More information

Wavelet Methods for Time Series Analysis. Motivating Question

Wavelet Methods for Time Series Analysis. Motivating Question Wavelet Methods for Time Series Analysis Part VII: Wavelet-Based Bootstrapping start with some background on bootstrapping and its rationale describe adjustments to the bootstrap that allow it to work

More information

Lecture 27 Frequency Response 2

Lecture 27 Frequency Response 2 Lecture 27 Frequency Response 2 Fundamentals of Digital Signal Processing Spring, 2012 Wei-Ta Chu 2012/6/12 1 Application of Ideal Filters Suppose we can generate a square wave with a fundamental period

More information

Differencing Revisited: I ARIMA(p,d,q) processes predicated on notion of dth order differencing of a time series {X t }: for d = 1 and 2, have X t

Differencing Revisited: I ARIMA(p,d,q) processes predicated on notion of dth order differencing of a time series {X t }: for d = 1 and 2, have X t Differencing Revisited: I ARIMA(p,d,q) processes predicated on notion of dth order differencing of a time series {X t }: for d = 1 and 2, have X t 2 X t def in general = (1 B)X t = X t X t 1 def = ( X

More information

Robust Range-rate Estimation of Passive Narrowband Sources in Shallow Water

Robust Range-rate Estimation of Passive Narrowband Sources in Shallow Water Robust Range-rate Estimation of Passive Narrowband Sources in Shallow Water p. 1/23 Robust Range-rate Estimation of Passive Narrowband Sources in Shallow Water Hailiang Tao and Jeffrey Krolik Department

More information

CCNY. BME I5100: Biomedical Signal Processing. Stochastic Processes. Lucas C. Parra Biomedical Engineering Department City College of New York

CCNY. BME I5100: Biomedical Signal Processing. Stochastic Processes. Lucas C. Parra Biomedical Engineering Department City College of New York BME I5100: Biomedical Signal Processing Stochastic Processes Lucas C. Parra Biomedical Engineering Department CCNY 1 Schedule Week 1: Introduction Linear, stationary, normal - the stuff biology is not

More information

Design Criteria for the Quadratically Interpolated FFT Method (I): Bias due to Interpolation

Design Criteria for the Quadratically Interpolated FFT Method (I): Bias due to Interpolation CENTER FOR COMPUTER RESEARCH IN MUSIC AND ACOUSTICS DEPARTMENT OF MUSIC, STANFORD UNIVERSITY REPORT NO. STAN-M-4 Design Criteria for the Quadratically Interpolated FFT Method (I): Bias due to Interpolation

More information

Time Series and Forecasting Lecture 4 NonLinear Time Series

Time Series and Forecasting Lecture 4 NonLinear Time Series Time Series and Forecasting Lecture 4 NonLinear Time Series Bruce E. Hansen Summer School in Economics and Econometrics University of Crete July 23-27, 2012 Bruce Hansen (University of Wisconsin) Foundations

More information

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω ECO 513 Spring 2015 TAKEHOME FINAL EXAM (1) Suppose the univariate stochastic process y is ARMA(2,2) of the following form: y t = 1.6974y t 1.9604y t 2 + ε t 1.6628ε t 1 +.9216ε t 2, (1) where ε is i.i.d.

More information

Periodogram of a sinusoid + spike Single high value is sum of cosine curves all in phase at time t 0 :

Periodogram of a sinusoid + spike Single high value is sum of cosine curves all in phase at time t 0 : Periodogram of a sinusoid + spike Single high value is sum of cosine curves all in phase at time t 0 : X(t) = µ + Asin(ω 0 t)+ Δ δ ( t t 0 ) ±σ N =100 Δ =100 χ ( ω ) Raises the amplitude uniformly at all

More information