ECE 8440  Unit 13

Section 6.9 - Effects of Round-Off Noise in Digital Filters

We have already seen that if a wide-sense stationary random signal x(n) is applied as input to an LTI system, the power density spectrum of the output y(n) is related to the power density spectrum of the input through the following relation:

Φ_yy(e^jω) = |H(e^jω)|² Φ_xx(e^jω).

Assume that the input e(n) is zero-mean white noise due to round-off and that the average power of this noise is σ_e². Also assume that the frequency response of that portion of the system between the entry point of the noise signal e(n) and the system output f(n) is H_ef(e^jω). The power density spectrum of the noise in the output is therefore

P_ff(ω) = Φ_ff(e^jω) = σ_e² |H_ef(e^jω)|².    (equation 6.103)

Recall that Φ_ff(e^jω) is the Discrete-Time Fourier Transform pair of the autocorrelation function φ_ff(m), which is defined (for the case of real signals which are wide-sense stationary) as

φ_ff(m) = E{f(n) f(n + m)}.
The average power of the output noise due to round-off error is σ_f² = E[f²(n)], which is equal to φ_ff(0). Therefore,

σ_f² = φ_ff(0) = (1/2π) ∫_{-π}^{π} Φ_ff(e^jω) dω = (σ_e²/2π) ∫_{-π}^{π} |H_ef(e^jω)|² dω.    (equation 6.104)

Applying Parseval's relation, we can also express the above as

σ_f² = φ_ff(0) = σ_e² Σ_{k=-∞}^{∞} |h_ef(k)|².    (equation 6.105)

The integral in equation 6.104 and the summation in equation 6.105 do not in general have simple solutions for higher-order systems. Therefore, to obtain a more efficient way to solve for φ_ff(0), we turn to z-transforms.

In order to establish background for the z-transform approach, we need to go back and generalize the development we did before in Unit 5 which led to the expression

Φ_yy(e^jω) = C_hh(e^jω) Φ_xx(e^jω) = |H(e^jω)|² Φ_xx(e^jω).

The first step is to generalize the previous development to permit complex values in h(n) and x(n). As before, we take y(n) to be the response of an LTI system to a wide-sense stationary input.
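The following short numerical check (added here for illustration; the first-order noise path h_ef(k) = a^k u(k) and the values of a and σ_e² are assumptions, not from the notes) confirms that equations 6.104 and 6.105 give the same output noise power:

```python
import numpy as np

# Hypothetical first-order noise path: h_ef(k) = a^k u(k), i.e. H_ef(z) = 1/(1 - a z^-1)
a = 0.9
sigma_e2 = 1.0                      # assumed round-off noise power sigma_e^2

# Equation 6.105: sigma_f^2 = sigma_e^2 * sum_k |h_ef(k)|^2 (truncated; the tail is negligible)
k = np.arange(2000)
power_sum = sigma_e2 * np.sum((a ** k) ** 2)

# Equation 6.104: sigma_f^2 = (sigma_e^2 / 2pi) * integral of |H_ef(e^jw)|^2 over [-pi, pi),
# approximated by the mean of |H_ef|^2 over a uniform frequency grid
w = np.linspace(-np.pi, np.pi, 100000, endpoint=False)
H_ef = 1.0 / (1.0 - a * np.exp(-1j * w))
power_int = sigma_e2 * np.mean(np.abs(H_ef) ** 2)

# Geometric-series closed form: sum_k a^(2k) = 1/(1 - a^2)
closed_form = sigma_e2 / (1.0 - a ** 2)
print(power_sum, power_int, closed_form)   # all three agree
```

For this simple case the geometric series gives the answer directly; the point of the appendix that follows is that higher-order systems need a more systematic (z-transform) route to the same quantity.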
The autocorrelation of the output process {y(n)} can be expressed as

φ_yy(n, n + m) = E{y*(n) y(n + m)}
              = E{ Σ_{k=-∞}^{∞} h*(k) x*(n - k) Σ_{r=-∞}^{∞} h(r) x(n + m - r) }
              = Σ_{k=-∞}^{∞} h*(k) Σ_{r=-∞}^{∞} h(r) E{x*(n - k) x(n + m - r)}.

Since {x(n)} is assumed to be wide-sense stationary,

E{x*(n - k) x(n + m - r)} = φ_xx(m + k - r).

Therefore, the right-hand side of the original equation is independent of n, and the left-hand side must also be independent of n; that is, φ_yy(n, n + m) = φ_yy(m). We can therefore write

φ_yy(m) = Σ_{k=-∞}^{∞} h*(k) Σ_{r=-∞}^{∞} h(r) φ_xx(m + k - r).

Now let ℓ = r - k and sum over ℓ instead of r:

φ_yy(m) = Σ_{k=-∞}^{∞} h*(k) Σ_{ℓ=-∞}^{∞} h(ℓ + k) φ_xx(m - ℓ)
        = Σ_{ℓ=-∞}^{∞} φ_xx(m - ℓ) Σ_{k=-∞}^{∞} h*(k) h(ℓ + k).
Now define

c_hh(ℓ) = Σ_{k=-∞}^{∞} h*(k) h(ℓ + k),

which can also be written as

c_hh(ℓ) = h(ℓ) * h*(-ℓ).

We can now write φ_yy(m) as

φ_yy(m) = Σ_{ℓ=-∞}^{∞} φ_xx(m - ℓ) c_hh(ℓ),

which can also be represented as

φ_yy(m) = φ_xx(m) * c_hh(m).

Therefore,

φ_yy(m) = φ_xx(m) * h(m) * h*(-m).

(This result is needed in developing the following material from Appendix A-5, which in turn is needed as background for Section 6.9.)

Appendix A-5: Use of the z-Transform in Average Power Computations

The z-transform cannot be applied to an autocorrelation function such as φ_yy(m) if the mean of the signal y(n) is non-zero. To see that this is true, consider a signal x(n) = x₁(n) + m_x, where m_x is the mean of x(n) and x₁(n) is a zero-mean signal.
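As a small illustration (the impulse response below is an arbitrary assumption), the deterministic autocorrelation c_hh(ℓ) can be computed either directly as a correlation sum or as the convolution h(ℓ) * h*(-ℓ); NumPy's correlate conjugates its second argument, so it implements exactly the definition above:

```python
import numpy as np

# Arbitrary short real impulse response, assumed for illustration
h = np.array([1.0, 0.5, 0.25, 0.125])

# c_hh(l) = sum_k h*(k) h(l+k), for lags l = -(N-1) .. N-1
# (np.correlate computes sum_n a[n+k] conj(v[n]), which matches this definition)
c_direct = np.correlate(h, h, mode="full")

# Equivalent convolution form: c_hh(l) = h(l) * h*(-l)
c_conv = np.convolve(h, np.conj(h)[::-1], mode="full")

print(np.allclose(c_direct, c_conv))       # True: the two forms agree
print(c_direct[len(h) - 1])                # lag-0 entry: c_hh(0) = sum |h(k)|^2
```

The middle entry of the length-(2N-1) result is c_hh(0), which is the total energy of h(n).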
The z-transform of the m_x component is

M_x(z) = Σ_{n=-∞}^{-1} m_x z^{-n} + Σ_{n=0}^{∞} m_x z^{-n}.

The first summation converges to

-m_x z / (z - 1),   with Region of Convergence |z| < 1.

The second summation converges to

m_x z / (z - 1),   with Region of Convergence |z| > 1.

Since there is no overlapping Region of Convergence for the two summations, the z-transform of the constant signal m_x does not exist.

In general, to apply the z-transform to analyze random signals, it is therefore necessary to use the autocovariance function instead of the autocorrelation function, since the mean is removed in the definition of the autocovariance function:

γ_xx(m) = E{[x(n) - m_x]* [x(n + m) - m_x]}.

When m_x = 0, the autocovariance is equal to the autocorrelation, since in general

γ_xx(m) = φ_xx(m) - |m_x|².

Based on the above discussion, consider a random signal x(n) with mean m_x. Now define a zero-mean signal as x₁(n) = x(n) - m_x.
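A quick simulated check of γ_xx(m) = φ_xx(m) - |m_x|² at lag m = 0 (the mean, variance, and sample size below are assumed values for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
m_x = 2.0                            # assumed mean
sigma2 = 0.5                         # assumed variance of the zero-mean part x1(n)
x1 = rng.normal(0.0, np.sqrt(sigma2), 200000)
x = x1 + m_x                         # x(n) = x1(n) + m_x

phi_0 = np.mean(x * x)                       # sample autocorrelation at lag 0: near sigma2 + m_x^2
gamma_0 = np.mean((x - np.mean(x)) ** 2)     # sample autocovariance at lag 0: near sigma2

print(phi_0, gamma_0)                # approximately 4.5 and 0.5
```

Subtracting the squared (sample) mean from phi_0 reproduces gamma_0, which is the lag-0 case of the identity above.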
The autocorrelation of x₁(n) is

φ_{x₁x₁}(m) = E{x₁*(n) x₁(n + m)}.

Substituting x₁(n) = x(n) - m_x into the above expression gives

φ_{x₁x₁}(m) = E{(x(n) - m_x)* (x(n + m) - m_x)} = γ_xx(m),

the autocovariance of x(n).

If x₁(n) is the input to an LTI system, we know from previous developments in this Unit that the autocorrelation of the output y₁(n) is

φ_{y₁y₁}(m) = φ_{x₁x₁}(m) * h(m) * h*(-m).

We have also just shown that φ_{x₁x₁}(m) = γ_xx(m). Therefore,

φ_{y₁y₁}(m) = γ_xx(m) * h(m) * h*(-m).

By definition, φ_{y₁y₁}(m) can also be expressed as

φ_{y₁y₁}(m) = E{y₁*(n) y₁(n + m)}.
Recall that

y₁(n) = x₁(n) * h(n) = [x(n) - m_x] * h(n) = y(n) - m_x Σ_{k=-∞}^{∞} h(k).

We can therefore rewrite the previous expression for φ_{y₁y₁}(m) as

φ_{y₁y₁}(m) = E{[y*(n) - m_x* Σ_{k=-∞}^{∞} h*(k)][y(n + m) - m_x Σ_{k=-∞}^{∞} h(k)]}.

Note that if the input to an LTI system is the random signal x(n), the expected value of the output is

E[y(n)] = m_y = E{ Σ_{k=-∞}^{∞} h(k) x(n - k) } = m_x Σ_{k=-∞}^{∞} h(k).

Therefore, the previous expression for φ_{y₁y₁}(m) can now be expressed as

φ_{y₁y₁}(m) = E{[y(n) - m_y]* [y(n + m) - m_y]},

which, by definition, is equal to the autocovariance of y(n). That is,

φ_{y₁y₁}(m) = γ_yy(m).

Using this result in the previous expression φ_{y₁y₁}(m) = γ_xx(m) * h(m) * h*(-m) gives the important result:

γ_yy(m) = γ_xx(m) * h(m) * h*(-m) = γ_xx(m) * c_hh(m).
We are now prepared to use a z-transform approach for finding γ_yy(0). We begin by expressing the z-transform of c_hh(ℓ):

Z{c_hh(ℓ)} = C_hh(z) = Σ_{ℓ=-∞}^{∞} c_hh(ℓ) z^{-ℓ} = Σ_{ℓ=-∞}^{∞} Σ_{k=-∞}^{∞} h*(k) h(ℓ + k) z^{-ℓ}.

Now let m = ℓ + k and sum over m instead of ℓ:

C_hh(z) = Σ_{m=-∞}^{∞} Σ_{k=-∞}^{∞} h*(k) h(m) z^{-(m-k)}
        = Σ_{m=-∞}^{∞} h(m) z^{-m} Σ_{k=-∞}^{∞} h*(k) z^{k}
        = H(z) [ Σ_{k=-∞}^{∞} h(k) (1/z*)^{-k} ]*
        = H(z) H*(1/z*).

Then, since γ_yy(m) = γ_xx(m) * c_hh(m) (as already shown),

Γ_yy(z) = Γ_xx(z) C_hh(z) = Γ_xx(z) H(z) H*(1/z*).    (equation A.58)
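The kernel of equation A.58 can be seen numerically: for any finite h(n), the z-transform of c_hh(ℓ) evaluated on the unit circle equals |H(e^jω)|², since there H(z)H*(1/z*) reduces to H(e^jω)H*(e^jω). A sketch with an arbitrary assumed h(n):

```python
import numpy as np

h = np.array([1.0, -0.4, 0.2])            # short hypothetical real h(n)
c = np.correlate(h, h, mode="full")       # c_hh(l) for lags -(N-1) .. N-1
lags = np.arange(-(len(h) - 1), len(h))

# Evaluate both sides on a grid of unit-circle points z = e^{jw}
w = np.linspace(-np.pi, np.pi, 64, endpoint=False)
z = np.exp(1j * w)
C = np.array([np.sum(c * zz ** (-lags)) for zz in z])            # direct transform of c_hh
H = np.array([np.sum(h * zz ** (-np.arange(len(h)))) for zz in z])
print(np.allclose(C, np.abs(H) ** 2))     # c_hh transforms to |H|^2 on the unit circle
```

Off the unit circle the identity C_hh(z) = H(z)H*(1/z*) still holds; the unit-circle case is just the easiest one to check.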
What we are ultimately trying to find is γ_yy(0) = E[y₁²(n)] for the case where the input is zero-mean white noise with

γ_xx(m) = φ_xx(m) = σ_x² δ(m).

The corresponding z-transform is Γ_xx(z) = σ_x², and therefore

Γ_yy(z) = σ_x² H(z) H*(1/z*).

The value of γ_yy(0) can be found by evaluating the inverse z-transform of Γ_yy(z) for the case of m = 0.

Consider the case of a stable and causal system whose H(z) has the form

H(z) = A [ Π_{m=1}^{M} (1 - c_m z^{-1}) ] / [ Π_{k=1}^{N} (1 - d_k z^{-1}) ],  where M < N.    (equation A.6)

The Region of Convergence is |z| > max_k{|d_k|}, where max_k{|d_k|} < 1.
If the input is zero-mean white noise with Γ_xx(z) = σ_x², the expression for Γ_yy(z) becomes

Γ_yy(z) = σ_x² H(z) H*(1/z*) = σ_x² |A|² [ Π_{m=1}^{M} (1 - c_m z^{-1})(1 - c_m* z) ] / [ Π_{k=1}^{N} (1 - d_k z^{-1})(1 - d_k* z) ].    (equation A.63)

This can be expressed in partial-fraction form as

Γ_yy(z) = σ_x² Σ_{k=1}^{N} [ A_k / (1 - d_k z^{-1}) - A_k* / (1 - (d_k*)^{-1} z^{-1}) ],    (equation A.64)

where the A_k can be found from

A_k = H(z) H*(1/z*) (1 - d_k z^{-1}) |_{z=d_k}.    (equation A.65)

Since max_k{|d_k|} < 1, the pole at each d_k is inside the unit circle, and the pole at each (d_k*)^{-1} is outside the unit circle. The region of convergence of Γ_yy(z) is therefore

max_k{|d_k|} < |z| < min_k{|(d_k*)^{-1}|}.

The inverse z-transform will therefore be two-sided and have the form

γ_yy(n) = σ_x² [ Σ_{k=1}^{N} A_k (d_k)^n u(n) + Σ_{k=1}^{N} A_k* (d_k*)^{-n} u(-n - 1) ].

We are interested in γ_yy(0), which can be found as

γ_yy(0) = σ_x² Σ_{k=1}^{N} A_k = σ_y².    (see equation A.66)
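A sketch of the residue method (equations A.65 and A.66) for an assumed two-pole system with real poles, cross-checked against the direct sum σ_x² Σ h²(n); the pole locations and noise power are arbitrary illustration values:

```python
import numpy as np

# Assumed stable, causal two-pole system: H(z) = 1 / ((1 - d1 z^-1)(1 - d2 z^-1))
d1, d2 = 0.5, 0.8
sigma_x2 = 1.0                       # assumed input white-noise power

def H(z):
    return 1.0 / ((1.0 - d1 / z) * (1.0 - d2 / z))

# Equation A.65: A_k = H(z) H(1/z) (1 - d_k z^-1) evaluated at z = d_k
# (h(n) is real here, so H*(1/z*) = H(1/z) for real z)
A1 = (1.0 / (1.0 - d2 / d1)) * H(1.0 / d1)   # H(z)(1 - d1 z^-1)|_{z=d1} = 1/(1 - d2/d1)
A2 = (1.0 / (1.0 - d1 / d2)) * H(1.0 / d2)

# Equation A.66: gamma_yy(0) = sigma_x^2 (A1 + A2)
var_residues = sigma_x2 * (A1 + A2)

# Cross-check against sigma_x^2 * sum h(n)^2, generating h(n) from the
# difference equation h(n) = (d1 + d2) h(n-1) - d1 d2 h(n-2) + delta(n)
h = np.zeros(500)
h[0] = 1.0
h[1] = d1 + d2
for n in range(2, 500):
    h[n] = (d1 + d2) * h[n - 1] - d1 * d2 * h[n - 2]
var_sum = sigma_x2 * np.sum(h ** 2)

print(var_residues, var_sum)         # both equal 700/81, about 8.642
```

Only the causal sum contributes at n = 0 (u(0) = 1, u(-1) = 0), which is why γ_yy(0) needs only the A_k and not the anticausal residues.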
Example A. Noise Power Output of a 2nd-Order IIR Filter

Consider a system having unit sample response

h(n) = r^n [sin(θ(n + 1)) / sin θ] u(n).

Using the fact, from z-transform tables, that

Z{sin(θn) u(n)} = z sin θ / (z² - 2z cos θ + 1),

and using the z-transform property that

Z{a^n x(n)} = X(z)|_{z→z/a},    (equation A.68)

the z-transform of the above h(n) can be found to be

H(z) = 1 / [(1 - re^jθ z^{-1})(1 - re^{-jθ} z^{-1})].

To link this with previous notation, one pole is at d₁ = re^jθ and the other pole is at d₂ = re^{-jθ}.

If the input to this system is white noise with total power σ_x², the z-transform of the autocovariance of the system output y(n) is

Γ_yy(z) = σ_x² H(z) H*(1/z*)
        = σ_x² [ 1 / ((1 - re^jθ z^{-1})(1 - re^{-jθ} z^{-1})) ] [ 1 / ((1 - re^jθ z)(1 - re^{-jθ} z)) ].    (equation A.69)
The partial-fraction representation of the above is

Γ_yy(z) = σ_x² [ A₁/(1 - re^jθ z^{-1}) + A₂/(1 - re^{-jθ} z^{-1}) - A₁*/(1 - r^{-1}e^jθ z^{-1}) - A₂*/(1 - r^{-1}e^{-jθ} z^{-1}) ],

where

A₁ = [ 1/(1 - re^{-jθ} z^{-1}) ] [ 1/((1 - re^jθ z)(1 - re^{-jθ} z)) ] |_{z=re^jθ}

and

A₂ = [ 1/(1 - re^jθ z^{-1}) ] [ 1/((1 - re^jθ z)(1 - re^{-jθ} z)) ] |_{z=re^{-jθ}}.

After making the indicated substitutions and simplifying, the sum of A₁ and A₂ is

A₁ + A₂ = (1 + r²) / [(1 - r²)(1 - 2r² cos(2θ) + r⁴)].

Therefore,

γ_yy(0) = E{y₁²(n)} = σ_x² (1 + r²) / [(1 - r²)(1 - 2r² cos(2θ) + r⁴)].    (equation A.71)
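Equation A.71 can be spot-checked numerically against the direct sum σ_x² Σ h²(n); the values of r and θ below are arbitrary assumptions for the check:

```python
import numpy as np

r, theta = 0.9, np.pi / 4            # assumed pole radius and angle
sigma_x2 = 1.0                       # assumed input white-noise power

# Closed form, equation A.71
closed = sigma_x2 * (1 + r**2) / ((1 - r**2) * (1 - 2 * r**2 * np.cos(2 * theta) + r**4))

# Direct evaluation: sigma_x^2 * sum_n h(n)^2 with h(n) = r^n sin(theta(n+1))/sin(theta)
# (truncated sum; the r^(2n) factor makes the tail negligible)
n = np.arange(2000)
h = r**n * np.sin(theta * (n + 1)) / np.sin(theta)
direct = sigma_x2 * np.sum(h**2)

print(closed, direct)                # the closed form matches the infinite sum
```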
Note that the above expression is much more easily evaluated than the alternative methods for finding E{y²(n)}, which are

E{y²(n)} = σ_x² Σ_{n=0}^{∞} h²(n) = σ_x² Σ_{n=0}^{∞} r^{2n} [sin²(θ(n + 1)) / sin²θ]

and

E{y²(n)} = (1/2π) ∫_{-π}^{π} σ_x² |H(e^jω)|² dω = (σ_x²/2π) ∫_{-π}^{π} dω / |(1 - re^jθ e^{-jω})(1 - re^{-jθ} e^{-jω})|².
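For completeness, the frequency-domain form above can also be evaluated numerically for the same assumed example values (r = 0.9, θ = π/4); a uniform-grid average implements (1/2π)∫|H(e^jω)|² dω:

```python
import numpy as np

r, theta = 0.9, np.pi / 4            # assumed example values
sigma_x2 = 1.0

# (sigma_x^2 / 2pi) * integral of |H(e^jw)|^2 over [-pi, pi), via a uniform grid:
# (1 - r e^{j theta} e^{-jw})(1 - r e^{-j theta} e^{-jw}) written with combined exponents
w = np.linspace(-np.pi, np.pi, 200000, endpoint=False)
H = 1.0 / ((1 - r * np.exp(1j * (theta - w))) * (1 - r * np.exp(-1j * (theta + w))))
integral = sigma_x2 * np.mean(np.abs(H) ** 2)

# Closed form from equation A.71 for comparison
closed = sigma_x2 * (1 + r**2) / ((1 - r**2) * (1 - 2 * r**2 * np.cos(2 * theta) + r**4))
print(integral, closed)
```

All three routes (closed form, impulse-response sum, frequency integral) agree; the closed form is simply the cheapest to evaluate, which is the point of the example.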