Properties of Linear Translation Invariant (LTI) Systems

W. C. Troy
Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260, U.S.A.

March 31, 2006

(1.) Continuous case.
  (a.) Definition of an LTI system.
  (b.) Example 1. The generation of red noise from uniform white noise.
  (c.) Proof that the power spectrum S_Y(ω) is real and non-negative.
(2.) Discrete case.
  (a.) Definition of discrete power spectrum.
  (b.) White noise.
  (c.) Proof that the power spectrum S_Y(Ω) is real and non-negative.

1  The continuous case

(a.) Definition of an LTI system.

Let X(t) be a continuous time random variable (input). The linear system is

Y(t) = T(X(t)).  (1.1)

It is assumed that the system is time invariant, i.e.

Y(t - t₀) = T(X(t - t₀))  ∀ t and t₀.  (1.2)

When the system (1.1) is both linear and translation invariant it is referred to as an LTI system. The key to the analysis is to write the input function X(t) in the integral form

X(t) = ∫_{-∞}^{∞} δ(λ) X(t-λ) dλ.  (1.3)
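As a brief aside, the two defining properties, linearity and time invariance, can be checked numerically for a concrete system. The sketch below is illustrative only: it assumes sampled signals and uses a 3-point moving-average filter as a hypothetical stand-in for the abstract operator T in (1.1).

import numpy as np

# Numerical check of linearity and time invariance for a concrete system.
# The 3-point moving average below is a hypothetical stand-in for the
# abstract operator T in (1.1), acting on sampled signals.
def T(x):
    h = np.array([1/3, 1/3, 1/3])                  # its impulse response
    return np.convolve(x, h, mode="full")[:len(x)]

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(100), rng.standard_normal(100)

# Linearity: T(a*x1 + b*x2) = a*T(x1) + b*T(x2)
a, b = 2.0, -0.5
assert np.allclose(T(a*x1 + b*x2), a*T(x1) + b*T(x2))

# Time invariance (1.2): delaying the input by t0 samples delays the output by t0.
t0 = 5
x_delayed = np.roll(x1, t0)
x_delayed[:t0] = 0                                 # delayed copy of x1
assert np.allclose(T(x_delayed)[t0:], T(x1)[:-t0])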
Substitute (1.3) into (1.1) and get

Y(t) = ∫_{-∞}^{∞} h(λ) X(t-λ) dλ,  (1.4)

where

h(t) = T(δ(t))  (1.5)

is the impulse response function for the linear system. Associated with h(t) is the frequency response of the system, i.e. the Fourier transform

H(ω) = ∫_{-∞}^{∞} h(λ) exp(-iωλ) dλ,  ω ∈ R.  (1.6)

The autocorrelation function for Y(t) is

R_Y(t,s) = E[Y(t) Y(s)].  (1.7)

Substitution of (1.4) into (1.7) gives

R_Y(t,s) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} h(α) h(β) R_X(t-α, s-β) dα dβ.  (1.8)

If X is a WSS continuous random variable then (1.8) becomes

R_Y(t,s) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} h(α) h(β) R_X(s-t+α-β) dα dβ.  (1.9)

Let τ = s - t. Then (1.9) further reduces to

R_Y(τ) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} h(α) h(β) R_X(τ+α-β) dα dβ.  (1.10)

Remark. This proves that Y(t) = T(X(t)) is a WSS r.v. if X(t) is a WSS r.v.

Next, the frequency power spectrum S_Y(ω) for Y is defined by

S_Y(ω) = ∫_{-∞}^{∞} R_Y(τ) exp(-iωτ) dτ,  ω ∈ R.  (1.11)

Recall that

S_Y(ω) is real,  S_Y(-ω) = S_Y(ω)  and  S_Y(ω) ≥ 0  ∀ ω ∈ R.  (1.12)

For example, R_Y(τ) is real and even since R_X is real and even, and therefore S_Y(ω) is real and even. The property S_Y ≥ 0 is proved in power.pdf. Next, substitute (1.10) into (1.11) and obtain

S_Y(ω) = ∫∫∫ h(α) h(β) R_X(τ+α-β) exp(-iωτ) dα dβ dτ,  ω ∈ R.  (1.13)

Let η = τ + α - β. Then (1.13) separates into the triple product

S_Y(ω) = ∫_{-∞}^{∞} h(α) exp(iωα) dα  ∫_{-∞}^{∞} h(β) exp(-iωβ) dβ  ∫_{-∞}^{∞} R_X(η) exp(-iωη) dη,  ω ∈ R.  (1.14)
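As an aside before continuing, (1.10) can be checked numerically. For white-noise input, where R_X(τ) = σ²δ(τ), the double integral collapses to R_Y(τ) = σ² ∫ h(α) h(α+τ) dα. The sketch below discretizes this; the filter h and the sample size are illustrative assumptions, not taken from the text.

import numpy as np

# Discretized check of (1.10) for white-noise input: with R_X = sigma^2 * delta
# the double integral collapses to R_Y(tau) = sigma^2 * sum_a h(a) h(a+tau).
rng = np.random.default_rng(1)
sigma = 1.0
h = np.array([1.0, 0.6, 0.3, 0.1])                 # example impulse response
x = sigma * rng.standard_normal(200_000)           # white noise input X
y = np.convolve(x, h, mode="valid")                # Y = h * X, eq. (1.4)

# Sample autocorrelation of Y at lags 0..4 versus the prediction of (1.10)
R_sample = [np.mean(y[:len(y)-k] * y[k:]) for k in range(5)]
R_theory = [sigma**2 * np.sum(h[:len(h)-k] * h[k:]) for k in range(5)]
print(np.round(R_sample, 3))   # approx. [1.46, 0.81, 0.36, 0.10, 0.0]
print(np.round(R_theory, 3))   #         [1.46, 0.81, 0.36, 0.10, 0.0]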
From the triple product (1.14) we obtain the important formula

S_Y(ω) = |H(ω)|² S_X(ω).  (1.15)

The inverse of S_Y(ω) is given by

R_Y(τ) = (1/2π) ∫_{-∞}^{∞} S_Y(ω) exp(iωτ) dω.  (1.16)

Substitution of (1.15) into (1.16) gives another important formula, namely

R_Y(τ) = (1/2π) ∫_{-∞}^{∞} |H(ω)|² S_X(ω) exp(iωτ) dω.  (1.17)

The quantity E(Y²(t)) is defined to be the total power of the process Y(t). Note that

Total Power = E(Y²(t)) = R_Y(t,t) = R_Y(t-t) = R_Y(0).  (1.18)

From this and (1.16) it follows that

Total Power = R_Y(0) = (1/2π) ∫_{-∞}^{∞} S_Y(ω) dω.  (1.19)

Thus the area under the curve S_Y(ω), scaled by 1/(2π), is the total power of the process Y(t). Finally, we combine (1.17) and (1.19) to obtain the famous formula

Total Power = E(Y²(t)) = R_Y(0) = (1/2π) ∫_{-∞}^{∞} |H(ω)|² S_X(ω) dω.  (1.20)

(b.) Example 1. The generation of red noise from uniform white noise.

Red noise Y(t) is obtained from white noise X(t) by solving the equation

Y(t) = a Y(t-1) + X(t),  t ≥ 1,  (1.21)

where 0 < a < 1. Our goal is to derive the red noise autocorrelation function R_Y(τ) and the associated power spectrum function S_Y(ω). These functions are given by

S_Y(ω) = |H(ω)|² S_X(ω)  and  R_Y(τ) = (1/2π) ∫_{-∞}^{∞} S_Y(ω) exp(iωτ) dω.  (1.22)

Because X(t) is uniform white noise, it must be the case that

R_X(τ) = σ² δ(τ)  and  S_X(ω) = σ²  ∀ ω.  (1.23)

The function H(ω) is defined by

H(ω) = ∫_{-∞}^{∞} y(t) exp(-iωt) dt,  (1.24)

where y(t) is the solution of

y(t) = a y(t-1) + δ(t).  (1.25)
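Equation (1.25) can also be iterated directly. In the sketch below, time is discretized in unit steps (an assumption made for illustration) and a unit impulse is applied at t = 0; the response decays geometrically, y(t) = aᵗ.

import numpy as np

# Iterate (1.25) directly: y(t) = a*y(t-1) + delta(t), with time discretized
# in unit steps (an illustrative assumption) and a unit impulse at t = 0.
a = 0.8
n = 20
x = np.zeros(n)
x[0] = 1.0                        # discretized delta(t)

y = np.zeros(n)
y[0] = x[0]
for t in range(1, n):
    y[t] = a * y[t-1] + x[t]

# The impulse response decays geometrically: y(t) = a^t, t = 0, 1, 2, ...
assert np.allclose(y, a**np.arange(n))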
Taking the Fourier transform of both sides of (1.25) gives

∫ y(t) exp(-iωt) dt = a ∫ y(t-1) exp(-iωt) dt + ∫ δ(t) exp(-iωt) dt.  (1.26)

Let η = t - 1. Then (1.26) becomes

∫ y(t) exp(-iωt) dt = a exp(-iω) ∫ y(η) exp(-iωη) dη + ∫ δ(t) exp(-iωt) dt.  (1.27)

Substituting ∫ y(t) exp(-iωt) dt = H(ω) and ∫ δ(t) exp(-iωt) dt = 1 into (1.27) gives

H(ω) = 1 / (1 - a exp(-iω))  ∀ ω.  (1.28)

It follows from (1.22) and (1.28) that

S_Y(ω) = σ² / [(1 - a exp(-iω))(1 - a exp(iω))] = σ² / (1 - 2a cos(ω) + a²).  (1.29)

Finally, combine (1.22) and (1.29), and use the Residue Theorem to obtain

R_Y(τ) = (1/2π) ∫ σ² exp(iωτ) / [(1 - a exp(-iω))(1 - a exp(iω))] dω = (σ²/(1-a²)) aᵗᵃᵘ,  τ ≥ 0.  (1.30)

Remarks. For white noise recall that S_X(ω) is constant since S_X(ω) = σ² for all ω, and R_X(τ) is the delta-function spike R_X(τ) = σ²δ(τ). For red noise, however, S_Y(ω) is oscillatory in ω, and R_Y(τ) is positive, continuous, and decreases as τ increases from τ = 0.

2  The discrete case

Let X(N) be a discrete random variable. Then

X(t) = X(N) if t = N is an integer, and X(t) = 0 if t is not an integer.  (2.1)

The linear system (1.1) takes the form

Y(N) = T(X(N))  ∀ N.  (2.2)

As in the continuous case, it is assumed that the system is time invariant, i.e.

Y(N - N₀) = T(X(N - N₀))  ∀ N and N₀.  (2.3)

Again, we write the input function X(N) in the integral equation form

X(N) = ∫_{-∞}^{∞} δ(λ) X(N-λ) dλ.  (2.4)

Because of (2.1) the integral in (2.4) becomes a Riemann sum, i.e.

X(N) = ∫ δ(λ) X(N-λ) dλ = Σ_{j=-∞}^{∞} δ(j) X(N-j).  (2.5)
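Before developing the discrete machinery in detail, note that the red-noise formulas of Example 1 can be checked by simulating the recursion (1.21) directly. A minimal sketch, assuming Gaussian white noise as the input; the values of a, σ, and the sample size n are illustrative assumptions:

import numpy as np

# Simulation sketch of the red-noise recursion (1.21): Y(t) = a*Y(t-1) + X(t).
# Gaussian white noise is an illustrative choice of input; a, sigma, and n
# are assumptions made for the demonstration.
rng = np.random.default_rng(3)
a, sigma, n = 0.8, 1.0, 200_000
x = sigma * rng.standard_normal(n)

y = np.empty(n)
y[0] = x[0]
for t in range(1, n):
    y[t] = a * y[t-1] + x[t]          # eq. (1.21)

# Compare sample autocorrelations with (1.30): R_Y(tau) = sigma^2 a^tau/(1-a^2)
for tau in range(5):
    sample = np.mean(y[:n-tau] * y[tau:])
    theory = sigma**2 * a**tau / (1 - a**2)
    print(tau, round(sample, 2), round(theory, 2))
# Both columns decay geometrically from about 2.78 by the factor a = 0.8 per lag.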
Next, substitute (2.4) into (2.2) and get

Y(N) = ∫ h(λ) X(N-λ) dλ,  (2.6)

where

h(λ) = T(δ(λ)).  (2.7)

As in (2.5), the integral in (2.6) also becomes the Riemann sum

Y(N) = ∫ h(λ) X(N-λ) dλ = Σ_{j=-∞}^{∞} h(j) X(N-j).  (2.8)

The frequency response of the system is the Fourier transform

H(Ω) = ∫ h(λ) exp(-iΩλ) dλ,  Ω ∈ R,  (2.9)

which reduces to the Riemann sum

H(Ω) = Σ_{j=-∞}^{∞} h(j) exp(-iΩj),  Ω ∈ R.  (2.10)

The autocorrelation function for Y is

R_Y(N, M) = E[Y(N) Y(M)].  (2.11)

Substitution of (2.8) into (2.11) gives

R_Y(N, M) = Σ_{j=-∞}^{∞} Σ_{l=-∞}^{∞} h(j) h(l) R_X(N-j, M-l).  (2.12)

If X is a WSS discrete random variable then (2.12) becomes

R_Y(N, M) = Σ_{j=-∞}^{∞} Σ_{l=-∞}^{∞} h(j) h(l) R_X(M-N+j-l).  (2.13)

Let M = N + k. Then (2.13) further reduces to

R_Y(N, N+k) = Σ_{j=-∞}^{∞} Σ_{l=-∞}^{∞} h(j) h(l) R_X(k+j-l) = R_Y(k),  (2.14)

where R_Y(N, N+k) = R_Y(k) when k is an integer, and R_Y(k) = 0 when k is not an integer. The frequency power spectrum

S_Y(Ω) = ∫ R_Y(k) exp(-iΩk) dk,  Ω ∈ R,  (2.15)

reduces to the Riemann sum

S_Y(Ω) = Σ_{k=-∞}^{∞} R_Y(k) exp(-iΩk),  Ω ∈ R.  (2.16)
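The Riemann sum (2.10) can be evaluated directly. The sketch below uses the red-noise impulse response h(j) = aʲ, j ≥ 0, from Example 1, truncates the infinite sum at a finite length (an assumption), and compares the result against the closed form (1.28):

import numpy as np

# Evaluate the Riemann-sum frequency response (2.10) for h(j) = a^j, j >= 0,
# truncated at a finite length (the truncation length is an assumption), and
# compare with the closed form H(Omega) = 1/(1 - a exp(-i Omega)) from (1.28).
a = 0.8
j = np.arange(200)                       # truncate the infinite sum at j < 200
h = a**j
Omega = np.linspace(-np.pi, np.pi, 9)

H_sum = np.array([np.sum(h * np.exp(-1j * w * j)) for w in Omega])   # eq. (2.10)
H_closed = 1.0 / (1.0 - a * np.exp(-1j * Omega))                     # eq. (1.28)
print(np.max(np.abs(H_sum - H_closed)))  # bounded by a^200/(1-a), i.e. negligible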
Next, substitute (2.14) into (2.16) and get

S_Y(Ω) = Σ_{k=-∞}^{∞} Σ_{j=-∞}^{∞} Σ_{l=-∞}^{∞} h(j) h(l) R_X(k+j-l) exp(-iΩk),  Ω ∈ R.  (2.17)

Let k + j - l = L. Then (2.17) becomes the triple product

S_Y(Ω) = Σ_{j=-∞}^{∞} h(j) exp(iΩj)  Σ_{l=-∞}^{∞} h(l) exp(-iΩl)  Σ_{L=-∞}^{∞} R_X(L) exp(-iΩL),  Ω ∈ R.  (2.18)

Thus,

S_Y(Ω) = |H(Ω)|² S_X(Ω)  ∀ Ω ∈ R.  (2.19)

As in the continuous case,

S_Y(Ω) is real,  S_Y(-Ω) = S_Y(Ω)  and  S_Y(Ω) ≥ 0  ∀ Ω ∈ R.  (2.20)

The inverse of S_Y(Ω) is given by

R_Y(k) = (1/2π) ∫_{-π}^{π} S_Y(Ω) exp(iΩk) dΩ.  (2.21)

The area under the curve S_Y(Ω), scaled by 1/(2π), is the total power of the process Y(N). That is,

Total Power = E(Y²(N)) = R_Y(0) = (1/2π) ∫_{-π}^{π} S_Y(Ω) dΩ = (1/2π) ∫_{-π}^{π} |H(Ω)|² S_X(Ω) dΩ.  (2.22)
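Finally, (2.22) can be checked numerically for the red-noise example: integrating S_Y(Ω) = σ²/(1 - 2a cos(Ω) + a²) over one period should recover the total power σ²/(1 - a²) found in (1.30). A minimal sketch with illustrative parameter values:

import numpy as np

# Check of (2.22) for red noise: integrating the power spectrum
# S_Y(Omega) = sigma^2/(1 - 2a cos(Omega) + a^2) over one period recovers
# the total power R_Y(0) = sigma^2/(1 - a^2) from (1.30).
a, sigma = 0.8, 1.0
Omega = np.linspace(-np.pi, np.pi, 20001)
S_Y = sigma**2 / (1 - 2*a*np.cos(Omega) + a**2)

dOmega = Omega[1] - Omega[0]
total_power = np.sum(S_Y[:-1]) * dOmega / (2*np.pi)   # Riemann sum of (2.22)
print(round(total_power, 4))                          # approx. 2.7778
print(round(sigma**2 / (1 - a**2), 4))                # = 2.7778 = R_Y(0)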