Chapter 4

Fourier Analysis and Power Spectral Density

4.1 Fourier Series and Transforms

Recall the Fourier series for periodic functions, x(t + T) = x(t), where

    x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\frac{2\pi n t}{T} + b_n \sin\frac{2\pi n t}{T} \right],    (4.1)

with coefficients

    a_0 = \frac{2}{T} \int_{-T/2}^{T/2} x(t)\, dt    \left( \frac{a_0}{2} = \bar{x} \right)

    a_n = \frac{2}{T} \int_{-T/2}^{T/2} x(t) \cos n\omega t\, dt    (4.2)

    b_n = \frac{2}{T} \int_{-T/2}^{T/2} x(t) \sin n\omega t\, dt.    \left( \omega = \frac{2\pi}{T} \right)

Dirichlet Theorem: For x(t) periodic with period T, if x(t) is bounded and has a finite number of maxima, minima, and discontinuities, then the Fourier series Eq. (4.1) converges to \frac{1}{2}[x(t+) + x(t-)].

The complex form of Eq. (4.1) is better for experimental applications. Using Euler's (or de Moivre's) formulas we get:

    \cos \omega t = \frac{1}{2}\left( e^{i\omega t} + e^{-i\omega t} \right), \qquad \sin \omega t = \frac{1}{2i}\left( e^{i\omega t} - e^{-i\omega t} \right).    (4.3)

Using the above, Eq. (4.1) can be rewritten as:

    x(t) = \sum_{n=-\infty}^{\infty} X_n e^{in\omega t},    (4.4)
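As a quick numerical check of Eq. (4.2), the coefficients can be estimated by quadrature. The following is a Python/NumPy sketch (the problems at the end of this chapter use Matlab; the translation is ours) for a unit square wave, for which the standard result is a_n = 0 and b_n = 4/(n\pi) for odd n, 0 for even n:

```python
import numpy as np

# Midpoint-rule quadrature of the Fourier coefficient integrals, Eq. (4.2),
# for a unit square wave of period T.
T = 2.0
omega = 2 * np.pi / T
N = 200_000
dt = T / N
t = -T/2 + dt * (np.arange(N) + 0.5)   # midpoints of N subintervals
x = np.sign(np.sin(omega * t))         # square wave with period T

def a_coeff(n):
    return (2 / T) * np.sum(x * np.cos(n * omega * t)) * dt

def b_coeff(n):
    return (2 / T) * np.sum(x * np.sin(n * omega * t)) * dt

b1, b2, b3 = b_coeff(1), b_coeff(2), b_coeff(3)
```

For this odd signal the cosine coefficients vanish, b_1 approaches 4/\pi, and the even sine coefficients are zero, consistent with the closed-form series.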
where

    X_0 = \frac{a_0}{2}, \qquad X_{\pm n} = \frac{1}{2}\left( a_n \mp i b_n \right).    (4.5)

Please also note that X_{-n} = X_n^*. Therefore:

    X_n = \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, e^{-in\omega t}\, dt.    (4.6)

If the signal x(t) is not periodic, we let f_n = n/T (i.e., in Eq. (4.6), n\omega = 2\pi f_n). Now, we define a function X(f) by X(f_n) = T X_n (i.e., X_n = X(f_n)/T) to get the following:

    x(t) = \sum_{n=-\infty}^{\infty} X_n e^{in\omega t} = \sum_{n=-\infty}^{\infty} \frac{1}{T} X(f_n)\, e^{i2\pi f_n t} = \sum_{n=-\infty}^{\infty} X(f_n)\, e^{i2\pi f_n t}\, \Delta f_n,    (4.7)

where we used the fact that \Delta f_n = f_{n+1} - f_n = \frac{n+1}{T} - \frac{n}{T} = \frac{1}{T}. Therefore, in the limit as T \to \infty and \Delta f_n \to 0 in Eq. (4.7), we get our signal in the time domain as

    x(t) = \int_{-\infty}^{\infty} X(f)\, e^{i2\pi f t}\, df.    (4.8)

Now, assuming that everything converges and using Eq. (4.6), we get the corresponding frequency domain expression

    X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-i2\pi f t}\, dt.    (4.9)

Therefore, x(t) \leftrightarrow X(f) are a Fourier transform pair, where x(t) is in the time domain and X(f) is in the frequency domain.

4.1.1 Several Important Properties of Fourier Transforms

We denote a Fourier transform (FT) as X(f) = F[x(t)] and x(t) = F^{-1}[X(f)]. Now, we can write several of the properties of F:

1. Linearity: F[\alpha x(t) + \beta y(t)] = \alpha X(f) + \beta Y(f)

2. Duality: x(t) \leftrightarrow X(f) \;\Rightarrow\; X(t) \leftrightarrow x(-f)

3. Conjugation: x(t) \leftrightarrow X(f) \;\Rightarrow\; x^*(t) \leftrightarrow X^*(-f). Therefore, for a real signal x(t), X(f) = X^*(-f). This in turn gives:

    |X(f)|^2 = X(f) X^*(f) = X^*(-f) X(-f) = |X(-f)|^2,    (4.10)

i.e., for real x(t), |X(f)| is symmetric.

4. Convolution:

    F\left[ \int_{-\infty}^{\infty} x(\tau)\, y(t - \tau)\, d\tau \right] = X(f)\, Y(f) \equiv F[x \star y],
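Equation (4.9) can be checked numerically against a transform known in closed form. A small Python sketch (our illustration, not part of the original text): the Gaussian pulse x(t) = e^{-\pi t^2} is its own Fourier transform, X(f) = e^{-\pi f^2}.

```python
import numpy as np

# Direct quadrature of Eq. (4.9) for the Gaussian pulse x(t) = exp(-pi t^2).
# Its transform is known in closed form: X(f) = exp(-pi f^2).
dt = 1e-3
t = np.arange(-8, 8, dt)          # the tails are negligible beyond |t| = 8
x = np.exp(-np.pi * t**2)

def X(f):
    # Riemann-sum approximation of  integral x(t) exp(-i 2 pi f t) dt
    return np.sum(x * np.exp(-2j * np.pi * f * t)) * dt

X0, X1 = X(0.0), X(1.0)
```

Here X(0) recovers the total area (exactly 1 for this pulse) and X(1) matches e^{-\pi}, confirming the transform pair.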
where x \star y indicates time convolution between x(t) and y(t). In addition,

    F[x y] = \int_{-\infty}^{\infty} X(\phi)\, Y(f - \phi)\, d\phi \equiv X \star Y,

where X \star Y indicates frequency convolution between X(f) and Y(f) (also, X \star Y = Y \star X).

5. Differentiation:

    F\left[ \frac{d^k x}{dt^k} \right] = (i 2\pi f)^k X(f),

6. Time Scaling and Shifting:

    x(at + b) \leftrightarrow \frac{e^{i 2\pi f b / a}}{|a|}\, X\!\left( \frac{f}{a} \right).

Theorem: Provided x(t) \in L^1 (i.e., \int_{-\infty}^{\infty} |x(t)|\, dt < \infty and x(t) has a finite number of maxima, minima, and discontinuities), X(f) exists, and

    F^{-1}[X(f)] = \begin{cases} x(t) & \text{for } x \text{ continuous at } t, \\ \frac{1}{2}[x(t+) + x(t-)] & \text{for } x \text{ discontinuous at } t. \end{cases}

There is a problem with the above theorem if we consider the following:

    \int_{-\infty}^{\infty} |\sin t|\, dt = \infty,

so \sin t is not in L^1; this can be fixed using the theory of generalized functions (or distributions), duality, and other basic properties.

4.1.2 Basic Fourier Transform Pairs

1. Delta (\delta) function: This actually is a generalized function, or distribution, defined by:

    \int_{-\infty}^{\infty} \delta(t)\, dt = 1.    (4.11)

Now, by the definition of \delta(t),

    X(f) = \int_{-\infty}^{\infty} e^{-2\pi i f t}\, \delta(t - t_0)\, dt = e^{-2\pi i f t_0},

also called the sifting property. Note that e^{-2\pi i f t_0} is a complex constant with |e^{-2\pi i f t_0}| = 1. Therefore:

    \delta(t - t_0) \leftrightarrow e^{-2\pi i f t_0},    (4.12)

and in particular, \delta(t) \leftrightarrow 1. Therefore, by the duality property,

    e^{2\pi i f_0 t} \leftrightarrow \delta(f - f_0),    (4.13)

and in particular, 1 \leftrightarrow \delta(f).
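The convolution property has an exact discrete analogue that is easy to verify numerically: the circular convolution of two finite sequences equals the inverse DFT of the product of their DFTs. A Python sketch (our check, not from the original text):

```python
import numpy as np

# Discrete analogue of property 4: circular convolution of two sequences
# equals the inverse DFT of the product of their DFTs.
rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)
y = rng.standard_normal(N)

# Circular convolution computed directly from its definition:
#   (x * y)[k] = sum_n x[n] y[(k - n) mod N]
direct = np.array([np.sum(x * np.roll(y[::-1], k + 1)) for k in range(N)])
# ... and via the frequency domain, F^{-1}[ X(f) Y(f) ]
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real
```

The two results agree to floating-point precision, which is why fast convolution is routinely done through the FFT.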
Figure 4.1: Signal modulation in the frequency domain

2. Trigonometric functions:

    \cos(2\pi f_0 t) = \frac{1}{2}\left( e^{2\pi i f_0 t} + e^{-2\pi i f_0 t} \right),    (4.14)

so

    F[\cos(2\pi f_0 t)] = \frac{\delta(f - f_0) + \delta(f + f_0)}{2}.    (4.15)

Similarly,

    F[\sin(2\pi f_0 t)] = \frac{\delta(f - f_0) - \delta(f + f_0)}{2i}.    (4.16)

3. Modulated trigonometric functions: As an example consider x(t)\cos(2\pi f_c t) \leftrightarrow X(f) \star C(f), where f_c is called the carrier frequency and C(f) is given by Eq. (4.15). Then,

    x(t)\cos(2\pi f_c t) \leftrightarrow \int_{-\infty}^{\infty} X(s)\, \frac{\delta(f - f_c - s) + \delta(f + f_c - s)}{2}\, ds,    (4.17)

where on the right-hand side we have a convolution integral, which gives:

    x(t)\cos(2\pi f_c t) \leftrightarrow \frac{X(f - f_c) + X(f + f_c)}{2}.    (4.18)

Therefore, if we already know X(f), modulation scales and shifts it to \pm f_c, as shown in Fig. 4.1.

4.2 Power Spectral Density

The autocorrelation of a real, stationary signal x(t) is defined by R_x(\tau) = E[x(t)\, x(t + \tau)]. The Fourier transform of R_x(\tau) is called the Power Spectral Density (PSD), S_x(f). Thus:

    S_x(f) = \int_{-\infty}^{\infty} R_x(\tau)\, e^{-i 2\pi f \tau}\, d\tau.    (4.19)

The question is: what is the PSD? What does it mean? What is a spectral density, and why is S_x called a power spectral density? To answer this question, recall that

    X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-i 2\pi f t}\, dt.    (4.20)
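The modulation result Eq. (4.18) is easy to see in a discrete spectrum. In the following Python sketch (our illustration; the sample rate, tone, and carrier values are arbitrary choices), a 5 Hz tone multiplied by a 100 Hz carrier produces spectral peaks at 95 Hz and 105 Hz, i.e., copies of the baseband spectrum shifted to \pm f_c:

```python
import numpy as np

# Modulation in the frequency domain, cf. Eq. (4.18): a slow tone times a
# carrier splits into two spectral copies centred at +/- f_c.
fs = 1000.0                      # sample rate, Hz (illustrative choice)
t = np.arange(0, 1, 1/fs)        # one second of data -> 1 Hz resolution
f0, fc = 5.0, 100.0              # baseband tone and carrier frequencies
mod = np.cos(2*np.pi*f0*t) * np.cos(2*np.pi*fc*t)

spec = np.abs(np.fft.rfft(mod)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1/fs)
peaks = freqs[spec > 0.1]        # bins carrying significant power
```

Since both frequencies fall on exact DFT bins here, there is no leakage and the only significant bins are f_c - f_0 and f_c + f_0.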
To avoid convergence problems, we consider only a version of the signal observed over a finite time,¹ x_T = x(t)\, w_T(t), where

    w_T(t) = \begin{cases} 1 & \text{for } |t| \le T/2, \\ 0 & \text{for } |t| > T/2. \end{cases}    (4.21)

Then x_T has the Fourier transform

    X_T(f) = \int_{-\infty}^{\infty} x_T(t)\, e^{-i 2\pi f t}\, dt    (4.22)

           = \int_{-T/2}^{T/2} x(t)\, e^{-i 2\pi f t}\, dt,    (4.23)

and so

    X_T X_T^* = \left[ \int_{-T/2}^{T/2} x(t)\, e^{-i 2\pi f t}\, dt \right] \left[ \int_{-T/2}^{T/2} x(s)\, e^{i 2\pi f s}\, ds \right]    (4.24)

              = \int_{-T/2}^{T/2} \int_{-T/2}^{T/2} x(t)\, x(s)\, e^{-i 2\pi f (t - s)}\, dt\, ds,    (4.25)

where the star denotes complex conjugation and, for compactness, the frequency argument of X_T has been suppressed. Taking the expectation of both sides of Eq. (4.25),²

    E[X_T X_T^*] = \int_{-T/2}^{T/2} \int_{-T/2}^{T/2} E[x(t)\, x(s)]\, e^{-i 2\pi f (t - s)}\, dt\, ds.    (4.26)

Letting s = t + \tau, one sees that E[x(t)\, x(s)] = E[x(t)\, x(t + \tau)] = R_x(\tau), and thus

    E[X_T X_T^*] = \int_{-T/2}^{T/2} \int_{-T/2}^{T/2} R_x(\tau)\, e^{-i 2\pi f (t - s)}\, dt\, ds.    (4.27)

To actually evaluate the above integral, both variables of integration must be changed. Let

    \tau = f(t, s) = s - t \quad \text{(as already introduced for Eq. (4.27))}    (4.28)

    \eta = g(t, s) = s + t.    (4.29)

Then, the integral of Eq. (4.27) is transformed (except for the limits of integration) using the change of variables formula:³

    \int_{-T/2}^{T/2} \int_{-T/2}^{T/2} R_x(\tau)\, e^{-i 2\pi f (t - s)}\, dt\, ds = \iint R_x(\tau)\, e^{i 2\pi f \tau}\, \frac{d\tau\, d\eta}{|J|},    (4.30)

¹ This restriction is necessary because not all of our signals will be square integrable. However, they will be mean square integrable, which is what we will take advantage of here.
² To understand what this means, remember that Eq. (4.25) holds for any x(t). So imagine computing Eq. (4.26) for different x(t) obtained from different experiments on the same system (each one of these is called a sample function). The expectation is over all possible sample functions. Since the exponential kernel inside the integral of Eq. (4.26) is the same for each sample function, it can be pulled outside of the expectation.
³ This is a basic result from multivariable calculus. See, for example, I.S. Sokolnikoff and R.M. Redheffer, Mathematics of Physics and Modern Engineering, 2nd edition, McGraw-Hill, New York, 1966.
Figure 4.2: The domain of integration (gray regions) for the Fourier transform of the autocorrelation Eq. (4.27): (left) for the original variables, t and s; (right) for the transformed variables, \eta and \tau, obtained by the change of variables Eqs. (4.28)–(4.29). Notice that the square region on the left is not only rotated (and flipped about the t axis), but its area is increased by a factor of |J| = 2. The circled numbers show where the sides of the square on the left are mapped by the change of variables. The lines into which the t and s axes are mapped are also shown.

where |J| is the absolute value of the Jacobian for the change of variables Eqs. (4.28)–(4.29), given by

    J = \begin{vmatrix} \partial f/\partial t & \partial f/\partial s \\ \partial g/\partial t & \partial g/\partial s \end{vmatrix} = \begin{vmatrix} -1 & 1 \\ 1 & 1 \end{vmatrix} = -2, \qquad |J| = 2.    (4.31)

To determine the limits of integration needed for the right-hand side of Eq. (4.30), we refer to Fig. 4.2, in which the domain of integration is plotted in both the original (t, s) variables and the transformed (\tau, \eta) variables. Since we wish to integrate over \eta first, we hold \tau fixed. For \tau > 0, a vertical cut through the diamond-shaped region in Fig. 4.2 (right) shows that -T + \tau \le \eta \le T - \tau, whereas for \tau < 0 one finds that -T - \tau \le \eta \le T + \tau. Putting this all together yields:

    E[X_T X_T^*] = \frac{1}{2} \int_{-T}^{T} \left[ \int_{-(T - |\tau|)}^{T - |\tau|} d\eta \right] R_x(\tau)\, e^{i 2\pi f \tau}\, d\tau = \int_{-T}^{T} \left( T - |\tau| \right) R_x(\tau)\, e^{i 2\pi f \tau}\, d\tau.    (4.32)

Finally, dividing both sides of Eq. (4.32) by T and taking the limit as T \to \infty gives

    \lim_{T \to \infty} \frac{1}{T}\, E[X_T X_T^*] = \lim_{T \to \infty} \frac{1}{T} \int_{-T}^{T} \left( T - |\tau| \right) R_x(\tau)\, e^{i 2\pi f \tau}\, d\tau

    = \lim_{T \to \infty} \int_{-T}^{T} \left( 1 - \frac{|\tau|}{T} \right) R_x(\tau)\, e^{i 2\pi f \tau}\, d\tau

    = \int_{-\infty}^{\infty} R_x(\tau)\, e^{i 2\pi f \tau}\, d\tau = \int_{-\infty}^{\infty} R_x(\tau)\, e^{-i 2\pi f \tau}\, d\tau = S_x(f),    (4.33)

where the last step uses the fact that R_x(\tau) is even for a real, stationary signal.
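The limit Eq. (4.33) can be illustrated by Monte Carlo averaging. In this Python sketch (our illustration; the segment length and trial count are arbitrary), the quantity |X_T|^2/T, averaged over many sample functions of discrete unit-variance white noise, comes out flat at the variance:

```python
import numpy as np

# Average |X_T(f)|^2 / T over many sample functions, cf. Eq. (4.33).
# For unit-variance discrete white noise, the averaged periodogram
# |FFT|^2 / N should be approximately flat at the variance (here 1).
rng = np.random.default_rng(1)
N, trials = 512, 400
acc = np.zeros(N)
for _ in range(trials):
    x = rng.standard_normal(N)
    acc += np.abs(np.fft.fft(x))**2 / N
avg_psd = acc / trials
```

A single periodogram |X_T|^2/T does not converge pointwise (its variance stays of order its mean); it is the expectation, approximated here by averaging over trials, that converges to S_x(f).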
Thus, in summary, the above demonstrates that

    S_x(f) = \lim_{T \to \infty} \frac{1}{T}\, E\left[ |X_T(f)|^2 \right].    (4.34)

Recalling that X_T(f) has units SU/Hz (where SU stands for signal units, i.e., whatever units the signal x_T(t) has), it is clear that E[|X_T(f)|^2] has units (SU/Hz)^2. However, 1/T has units of Hz, so Eq. (4.34) shows that the PSD has units of SU^2/Hz.⁴ Although it is not always literally true, in many cases the mean square of the signal is proportional to the amount of power in the signal.⁵ The fact that S_x is therefore interpreted as having units of power per unit frequency explains the name Power Spectral Density.

Notice that power at a frequency f_0 that does not repeatedly reappear in x_T(t) as T \to \infty will result in S_x(f_0) = 0, because of the division by T in Eq. (4.34). In fact, based on this idealized mathematical definition, any signal of finite duration (or, more generally, any mean square integrable signal) will have a power spectrum identically equal to zero! In practice, however, we do not let T extend much past the support [T_min, T_max] of x_T(t) (T_min (respectively, T_max) is the minimum (maximum) t for which x_T(t) \ne 0). Since all signals that we measure in the laboratory have the form y(t) = x(t) + n(t), where n(t) is broadband noise, extending T to infinity for any signal with finite support will end up giving S_x \approx S_n.

We conclude by mentioning some important properties of S_x. First, since S_x is an average of the magnitude squared of the Fourier transform, S_x(f) \in \mathbb{R} and S_x(f) \ge 0 for all f. A simple change of variables in the definition Eq. (4.19) shows that S_x(-f) = S_x(f). Given the definition Eq. (4.19), we also have the dual relationship

    R_x(\tau) = \int_{-\infty}^{\infty} S_x(f)\, e^{i 2\pi f \tau}\, df.    (4.35)

Setting \tau = 0 in the above gives

    R_x(0) = E\left[ x(t)^2 \right] = \int_{-\infty}^{\infty} S_x(f)\, df,    (4.36)

which, for a mean zero signal, gives

    \sigma_x^2 = \int_{-\infty}^{\infty} S_x(f)\, df.    (4.37)

Finally, if we assume that x(t) is ergodic in the autocorrelation, that is, that

    R_x(\tau) = E[x(t)\, x(t + \tau)] = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, x(t + \tau)\, dt,

⁴ Of course, the units can also be determined by examining the definition Eq. (4.19).
⁵ This comes primarily from the fact that, in electrical circuits, the power can be written in terms of the voltage as V^2/Z, or in terms of the current as I^2 Z, where Z is the circuit impedance. Thus, for electrical signals, it is precisely true that the mean square of the signal will be proportional to the power. Be forewarned, however, that the mean square of the scaled signal, expressed in terms of the actual measured variable (such as displacement or acceleration), will not in general be equal to the average mechanical power in the structure being measured.
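The power relation Eq. (4.36) has an exact finite-length counterpart: for the DFT, Parseval's theorem states \sum_n |x_n|^2 = \frac{1}{N} \sum_k |X_k|^2, so the mean square of the samples equals the sum of the (two-sided) spectral weights. A Python sketch (our check, not from the original text):

```python
import numpy as np

# Discrete counterpart of Eq. (4.36): the average signal power equals the
# sum over frequency bins of |X_k|^2 / N^2 (DFT Parseval theorem).
rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
power_time = np.mean(x**2)                      # time-domain mean square
X = np.fft.fft(x)
power_freq = np.sum(np.abs(X)**2) / len(x)**2   # frequency-domain total
```

For the DFT the identity is exact (up to floating-point roundoff), which is the discrete shadow of the integral relation between R_x(0) and S_x.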
where the last equality holds for any sample function x(t), then Eq. (4.37) can be rewritten as

    \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)^2\, dt = \int_{-\infty}^{\infty} S_x(f)\, df.    (4.38)

The above relationship is known as Parseval's Identity.

This last identity makes it clear that, given any two frequencies f_1 and f_2, the quantity

    \int_{f_1}^{f_2} S_x(f)\, df

represents the portion of the average signal power contained in signal frequencies between f_1 and f_2, and hence S_x is indeed a spectral density.

4.3 Sample Power Spectra

1. White noise: S_{xx}(f) = 1, where we have power at all frequencies. The corresponding autocorrelation is R_{xx}(\tau) = \delta(\tau); see Fig. 4.3.

Figure 4.3: White noise signal has power at all frequencies and is uncorrelated for \tau \ne 0.

2. Band-limited noise:

    S_{xx}(f) = W(f) = \begin{cases} 1, & |f| \le f_{BW}, \\ 0, & \text{otherwise}, \end{cases}

    R_{xx}(\tau) = \int_{-\infty}^{\infty} W(f)\, e^{i 2\pi f \tau}\, df = \int_{-f_{BW}}^{f_{BW}} e^{i 2\pi f \tau}\, df = \frac{e^{i 2\pi f_{BW} \tau} - e^{-i 2\pi f_{BW} \tau}}{2 i \pi \tau} = \frac{\sin\left( 2\pi f_{BW} \tau \right)}{\pi \tau}.    (4.39)

For the corresponding graphs refer to Fig. 4.4.
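The integral in Eq. (4.39) can be checked numerically. A Python sketch (our check; the bandwidth value is an arbitrary choice):

```python
import numpy as np

# Numerical check of Eq. (4.39): inverse-transforming an ideal band-limited
# spectrum (1 for |f| <= f_BW, 0 otherwise) gives sin(2 pi f_BW tau)/(pi tau).
f_BW = 2.0
df = 1e-4
f = np.arange(-f_BW, f_BW, df) + df / 2     # midpoints across the band

def R(tau):
    return (np.sum(np.exp(2j * np.pi * f * tau)) * df).real

analytic = np.sin(2 * np.pi * f_BW * 0.3) / (np.pi * 0.3)
```

Note also that R_{xx}(0) = 2 f_{BW}, the total area under the band-limited spectrum, consistent with Eq. (4.36).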
Figure 4.4: Band limited noise has a correlation time of 1/(2 f_{BW}).

Problems

Problem 4.1 Create a sample \{x_n\}_{n=1}^{1024+14} of uncorrelated Gaussian random variables (command randn in Matlab). Now apply the moving average filter

    s_n = \frac{1}{15} \sum_{i=-7}^{7} x_{n+i}

to obtain 1024 correlated Gaussian variates. Estimate the power spectrum (type >> help pwelch in Matlab) for both data sequences and observe the differences.

Problem 4.2 Create two time series: (1) \{x_n\}_{n=1}^{4096} of uncorrelated Gaussian random variables (command randn in Matlab), and (2) the deterministic evolution of the Ulam map \{y_n\}_{n=1}^{4096}, which follows the rule y_1 = 0.1 and y_{n+1} = 1 - 2 y_n^2. The values of y_n are measured through a nonlinear observation function s_n = \arccos(-y_n)/\pi. Compare the mean, variance, and power spectra of the two time series.
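For readers without Matlab, Problem 4.1 can be sketched in Python, with scipy.signal.welch playing the role of pwelch (this is our translation, not part of the original assignment; the seed and segment length are arbitrary choices):

```python
import numpy as np
from scipy.signal import welch

# Python sketch of Problem 4.1: white Gaussian noise vs. its 15-point
# moving average (a low-pass filter).
rng = np.random.default_rng(42)
x = rng.standard_normal(1024 + 14)
s = np.convolve(x, np.ones(15) / 15, mode='valid')   # 1024 averaged values

f_x, P_x = welch(x, nperseg=256)   # roughly flat spectrum
f_s, P_s = welch(s, nperseg=256)   # power concentrated at low frequencies
```

The moving average suppresses high frequencies, so the estimated spectrum of s should fall well below that of x near the Nyquist frequency while the two are comparable near f = 0.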