Lecture 8: Finite Impulse Response Filters

Outline
8.1 Finite Impulse Response Filters
8.2 Moving Average Filter
    8.2.1 Phase response
    8.2.2 Magnitude response
    8.2.3 Fast MA filter implementation
8.3 Weighted Moving Average Filter
8.4 Non-Causal Moving Average Filter
8.5 Non-Causal Weighted Moving Average Filter
8.6 Phase is Important
8.7 Differentiation using FIR Filters
    8.7.1 Frequency-domain observations
    8.7.2 Higher derivatives

Updated: November, 7

8.1 Finite Impulse Response Filters

The class of causal, LTI finite impulse response (FIR) filters can be captured by the difference equation

  y[n] = Σ_{k=0}^{N-1} b_k u[n-k],

where N is the number of filter coefficients (also known as the filter length), N-1 is often referred to as the filter order, and b_k ∈ ℝ are the filter coefficients that describe the dependence on current and previous inputs. The filter length N is equal to the length of the finite impulse response, given by

  h = {b_0, b_1, ..., b_{N-1}}.

From the impulse response, we can directly conclude that FIR filters are always stable (all of their poles are at z = 0). Furthermore, their frequency response is straightforward to calculate from the transfer function:

  H(z) = Σ_{k=0}^{N-1} h[k] z^{-k},   H(Ω) = H(z)|_{z=e^{jΩ}} = Σ_{k=0}^{N-1} h[k] e^{-jkΩ} = Σ_{k=0}^{N-1} b_k e^{-jkΩ}.

FIR filter design methods find the coefficients b_k based on a desired frequency response. There are powerful FIR filter design tools available (see, for example, MATLAB's fdatool) that can
generate almost arbitrary frequency responses. In this lecture, you will learn the concepts that underlie FIR filters and, using the simple example of the moving average filter, how to analyze filters using the tools that we have learned in the course.

8.2 Moving Average Filter

We now introduce a very simple type of low-pass (LP) FIR filter: the moving average (MA) filter. The MA filter averages the current and past inputs to produce its output, and is described by the difference equation

  y[n] = (1/N) Σ_{k=0}^{N-1} u[n-k],

i.e. b_k = 1/N for k = 0, ..., N-1. The frequency response of the MA filter is therefore

  H(Ω) = (1/N) Σ_{k=0}^{N-1} e^{-jkΩ}.

We can easily see that H(0) = 1: a constant signal remains unchanged by the filter. Furthermore, note that

  e^{-jΩ} H(Ω) = (1/N) Σ_{k=0}^{N-1} e^{-j(k+1)Ω}.

Therefore,

  H(Ω)(1 - e^{-jΩ}) = (1/N)(1 - e^{-jNΩ})   ⟹   H(Ω) = (1/N) (1 - e^{-jNΩ}) / (1 - e^{-jΩ}),

which shows that H(Ω) = 0 iff e^{-jNΩ} = 1 and e^{-jΩ} ≠ 1. The zeros of the MA filter therefore occur at frequencies Ω = 2πk/N, where k is an integer that is not 0 or a multiple of N. This can be seen by looking at the magnitude response |H(Ω)|:

[Figure: magnitude response |H(Ω)| of the MA filter for several filter lengths N; the zeros occur at multiples of 2π/N.]
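These properties are easy to check numerically. The following short Python/NumPy sketch (illustrative only; the helper name is ours, not part of the notes) evaluates H(Ω) for N = 4 and confirms that H(0) = 1 and that the response vanishes at the predicted zeros Ω = 2πk/N:

```python
import numpy as np

def ma_freq_response(N, omega):
    # H(Omega) = (1/N) * sum_{k=0}^{N-1} exp(-j*k*Omega)
    k = np.arange(N)
    return np.exp(-1j * np.outer(omega, k)).sum(axis=1) / N

N = 4
# DC, the first two zeros 2*pi/N and 4*pi/N, and one non-zero frequency
omega = np.array([0.0, 2 * np.pi / N, 4 * np.pi / N, np.pi / 3])
H = ma_freq_response(N, omega)
print(np.abs(H))  # 1 at Omega = 0; 0 at the zeros; nonzero elsewhere
```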
8.2.1 Phase response

The phase response ∠H(Ω) of the filter is presented below.

[Figure: phase response ∠H(Ω) of the MA filter for several filter lengths N.]

For small values of Ω, the filter's frequency response can be approximated as

  H(Ω) ≈ (1/N) [1 + (1 - jΩ) + ... + (1 - j(N-1)Ω)].

The real part of H(Ω) is equal to 1, and the imaginary part is given by

  -(Ω/N)(0 + 1 + ... + (N-1)) = -(Ω/N) · N(N-1)/2 = -Ω(N-1)/2.

Therefore, the phase can be approximated by

  ∠H(Ω) ≈ arctan(-Ω(N-1)/2) ≈ -Ω(N-1)/2,

using the small angle assumption. This approximation is exact until the first zero of H(Ω), as you will show in the problem set.

8.2.2 Magnitude response

The magnitude response of the filter can be derived as follows:

  |H(Ω)| = (1/N) |1 - e^{-jNΩ}| / |1 - e^{-jΩ}|
         = (1/N) sqrt( ((1 - cos NΩ)² + sin² NΩ) / ((1 - cos Ω)² + sin² Ω) )
         = (1/N) sqrt( (2 - 2 cos NΩ) / (2 - 2 cos Ω) )
         = (1/N) sqrt( sin²(NΩ/2) / sin²(Ω/2) )
         = (1/N) |sin(NΩ/2)| / |sin(Ω/2)|,

since 1 - cos(p) = 2 sin²(p/2).
It follows that

  |H(Ω)| = (1/N) |sin(NΩ/2)| / |sin(Ω/2)| = |(NΩ/2) sinc(NΩ/2)| / (N |(Ω/2) sinc(Ω/2)|) = |sinc(NΩ/2)| / |sinc(Ω/2)| ≈ |sinc(NΩ/2)|

for small Ω, where sinc(x) := sin(x)/x and we used sinc(Ω/2) ≈ 1. Therefore, the magnitude response of the MA filter is approximated by the absolute value of the sinc function for small Ω. This function, shown below, has peaks (lobes) and is not a great LP filter.

[Figure: |sinc(w)| as a function of w, showing the main lobe and the decaying side lobes.]

8.2.3 Fast MA filter implementation

Consider an MA filter with N coefficients and output given by

  y[n] = (1/N) Σ_{k=0}^{N-1} u[n-k].

We will now derive an alternative form of the MA filter, which is more computationally efficient:

  N (y[n] - y[n-1]) = Σ_{k=0}^{N-1} u[n-k] - Σ_{k=0}^{N-1} u[n-1-k] = u[n] - u[n-N]
  ⟹ y[n] = y[n-1] + (1/N)(u[n] - u[n-N]).

As seen in the above difference equation, by referencing the prior output, the summation can be simplified to two terms, irrespective of the filter's order. However, one must be careful: this approach has a flaw that we will now investigate. By adding the term d[n] to the difference equation, we capture the effect of numerical errors (for example, caused by floating-point imprecision):

  y[n] = y[n-1] + (1/N)(u[n] - u[n-N]) + d[n].
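The equivalence of the direct and recursive forms (in exact arithmetic) can be checked with a short script; this is a sketch in Python/NumPy, with zero initial conditions assumed and the helper names ours:

```python
import numpy as np

def ma_direct(u, N):
    # y[n] = (1/N) * sum_{k=0}^{N-1} u[n-k]  (u[m] = 0 for m < 0)
    return np.array([sum(u[n - k] for k in range(N) if n >= k) / N
                     for n in range(len(u))])

def ma_recursive(u, N):
    # y[n] = y[n-1] + (u[n] - u[n-N]) / N  (zero initial conditions)
    y = np.zeros(len(u))
    for n in range(len(u)):
        prev = y[n - 1] if n > 0 else 0.0
        old = u[n - N] if n >= N else 0.0
        y[n] = prev + (u[n] - old) / N
    return y

u = np.random.default_rng(0).standard_normal(50)
err = np.max(np.abs(ma_direct(u, 4) - ma_recursive(u, 4)))
print(err)  # tiny: only floating-point rounding separates the two forms
```

In double precision over 50 samples the accumulated rounding term is negligible; the point of the analysis that follows is that it grows without bound as n increases, which matters for long-running single-precision or fixed-point implementations.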
Taking the z-transform we obtain:

  Y(z) = z^{-1} Y(z) + (1/N)(U(z) - z^{-N} U(z)) + D(z)
       ⟹ Y(z) = (1/N) (1 - z^{-N})/(1 - z^{-1}) U(z) + D(z)/(1 - z^{-1})
              = (1/N) (1 + z^{-1} + ... + z^{-N+1}) U(z) + D(z)/(1 - z^{-1}),

where we use that (1 - z^{-N}) = (1 - z^{-1})(1 + z^{-1} + ... + z^{-N+1}). From the above, we note that the transfer function from D(z) to Y(z) is 1/(1 - z^{-1}), which is not stable, since it has a pole at z = 1. We note also that

  1/(1 - z^{-1}) = 1 + z^{-1} + z^{-2} + ...,

showing that the effect of numerical errors on the output is cumulative. We calculate the inverse z-transform of Y(z) to obtain:

  y[n] = (1/N) Σ_{k=0}^{N-1} u[n-k] + Σ_{k=0}^{n} d[n-k].

Comparing this with the direct approach

  y[n] = (1/N) Σ_{k=0}^{N-1} u[n-k] + d[n],

we note that both forms are the same without errors, but vastly different if numerical errors are considered.

8.3 Weighted Moving Average Filter

A weighted moving average filter can be described by the difference equation

  y[n] = (1/S) Σ_{k=0}^{N-1} w_k u[n-k],

where w_k is a decreasing function of k and denotes the weight given to the input u[n-k]; S is the normalization constant chosen such that the sum of all filter coefficients equals 1. A common choice, which we will herein refer to as the WMA filter, is w_k = N - k and

  S = N + (N-1) + ... + 1 = N(N+1)/2.

The difference between the WMA and MA filters can be clearly seen in their impulse responses:
[Figure: impulse responses of the MA filter (N equal coefficients of height 1/N) and of the WMA filter (linearly decreasing coefficients), plotted against n.]

The WMA filter places less emphasis on older inputs. This results in a less aggressive filter, with a better (smaller) phase response:

[Figure: magnitude responses |H(Ω)| and phase responses ∠H(Ω) of the MA and WMA filters for the same filter length N.]

8.4 Non-Causal Moving Average Filter

We now consider the non-causal moving average filter with impulse response

  h = {..., 0, 1/N, ..., 1/N, ..., 1/N, 0, ...},

with the N equal coefficients 1/N centered at n = 0, where the number of coefficients N is odd. This filter includes past, current, and future inputs, with equal weighting.

[Figure: impulse response h[n] of the non-causal MA filter: N coefficients of height 1/N, centered at n = 0.]
The filter's frequency response is given by

  H(Ω) = (1/N) Σ_{k=-(N-1)/2}^{(N-1)/2} e^{-jkΩ} = e^{jΩ(N-1)/2} H_MA(Ω),

where H_MA(Ω) is the frequency response of the causal MA filter. This relationship shows that the frequency response of the non-causal MA filter is that of a causal MA filter with an added phase of Ω(N-1)/2. Their magnitude responses are the same.

Example

Consider the non-causal MA filter with N = 3, which is given by the difference equation

  y[n] = (1/3)(u[n-1] + u[n] + u[n+1]).

The filter's frequency response is:

[Figure: magnitude response |H(Ω)| and phase response ∠H(Ω) of the non-causal MA filter with N = 3.]

Note that for |Ω| < 2π/3 the phase is zero: a very desirable property. However, the phase of the first lobe is 180°. This results in signal inversion, as we will now demonstrate. Let {u[n]} = {(-1)^n} and note that Ω = π. Applying this input to the filter gives the output:

  y[n] = (1/3)((-1)^{n-1} + (-1)^n + (-1)^{n+1}) = -(1/3)(-1)^n = -(1/3) u[n] for all times n.

This is not desirable: not only does the non-causal MA filter not remove the high-frequency signal, it inverts it!
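The inversion is easy to reproduce numerically; below is a minimal sketch (ours, not part of the original notes) applying the N = 3 non-causal MA filter to u[n] = (-1)^n:

```python
import numpy as np

u = (-1.0) ** np.arange(12)            # u[n] = (-1)^n: input at frequency Omega = pi
n = np.arange(1, 11)                   # interior samples, so u[n-1] and u[n+1] exist
y = (u[n - 1] + u[n] + u[n + 1]) / 3   # non-causal MA filter with N = 3
print(np.max(np.abs(y + u[n] / 3)))    # y[n] = -u[n]/3: inverted, not removed
```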
8.5 Non-Causal Weighted Moving Average Filter

Consider now the non-causal weighted moving average filter, with impulse response given by

  h[n] = (1/S) h̃[n] for all times n, where S = Σ_n h̃[n],

and where {h̃[n]} is the symmetric, triangular weighting sequence shown below:

[Figure: weighting sequence h̃[n]: triangular weights, maximal at n = 0 and decreasing linearly to zero on both sides.]

Let us now have a look at the frequency response of a non-causal WMA filter for N = 5:

[Figure: magnitude response |H(Ω)| and phase response ∠H(Ω) of the non-causal WMA filter; the phase is identically zero.]

As expected, the non-causal WMA filter is an LP filter. Furthermore, the filter has zero phase. This is a very nice property to have, and deserves further investigation. Consider a causal MA filter with N = 3 and transfer function

  H(z) = (1/3)(1 + z^{-1} + z^{-2})
and the corresponding anti-causal filter with transfer function H(z^{-1}). We then have that

  H(z) H(z^{-1}) = (1/3)(1 + z^{-1} + z^{-2}) · (1/3)(1 + z + z²)
                 = (1/9)(3 + 2z + 2z^{-1} + z² + z^{-2})
                 = (1/9)(3 + 2(z + z^{-1}) + (z² + z^{-2})),

which is a non-causal WMA filter with 2N - 1 = 5 coefficients. As seen in Lecture 7, the cascade of these systems will have zero phase. In general, if H(z) is the transfer function of an MA filter with N coefficients, then H(z)H(z^{-1}) is a non-causal WMA filter with 2N - 1 coefficients.

8.6 Phase is Important

Phase is a measure of the delay caused by the system. The negative of the ratio ∠H(Ω)/Ω, that is -∠H(Ω)/Ω, is called the phase delay of a filter; it states by how many samples a sinusoid at frequency Ω is delayed by the filter. If ∠H(Ω) is linear in Ω, the filter is said to have linear phase, and the phase delay of the filter is constant: sinusoids at all frequencies Ω ∈ [0, π] are shifted by the same number of samples. Linear phase is good for some applications (e.g. audio) as it ensures no signal deformation; however, it is not always important for other applications, such as control.

8.7 Differentiation using FIR Filters

FIR filters can also be used to approximate the derivative of their input signal. To demonstrate this, let y(t) = u̇(t). This can be approximated, for example,

  causally by       y(t) ≈ (u(t) - u(t-τ)) / τ;
  anti-causally by  y(t) ≈ (u(t+τ) - u(t)) / τ;
  non-causally by   y(t) ≈ (u(t+τ) - u(t-τ)) / (2τ).

If τ = T_s is the sampling period, we have the following discrete-time approximations of the derivative:

  Causal:       y_C[n] = (1/T_s)(u[n] - u[n-1]);
  Anti-causal:  y_A[n] = (1/T_s)(u[n+1] - u[n]);
  Non-causal:   y_N[n] = (1/(2 T_s))(u[n+1] - u[n-1]).

Note that each LCCDE describes an LTI FIR system. We now calculate the frequency responses of the above systems.
Causal:
  H_C(z) = (1/T_s)(1 - z^{-1}),
  H_C(Ω) = (1/T_s)(1 - e^{-jΩ}) = (1/T_s) e^{-jΩ/2}(e^{jΩ/2} - e^{-jΩ/2}) = (2j/T_s) e^{-jΩ/2} sin(Ω/2).

Anti-causal:
  H_A(z) = z H_C(z),
  H_A(Ω) = (2j/T_s) e^{jΩ/2} sin(Ω/2).

Non-causal:
  H_N(z) = (1/(2 T_s))(z - z^{-1}),
  H_N(Ω) = (1/(2 T_s))(e^{jΩ} - e^{-jΩ}) = (j/T_s) sin Ω.

[Figure: magnitude and phase responses of the continuous-time differentiator and of the causal, anti-causal, and non-causal approximations.]

8.7.1 Frequency-domain observations

If u(t) = e^{jωt} and u(t) ↔ U(ω), then u̇(t) = jω e^{jωt} and u̇(t) ↔ jω U(ω): differentiation in the time domain is equivalent to multiplication by jω in the frequency domain. Considering now the frequency responses of the FIR systems, shown above, we note that all systems have the desired behavior near Ω = 0: a frequency response that resembles jω (with ω = Ω/T_s). Furthermore, both the causal and anti-causal approximations have a frequency response whose magnitude is maximum at Ω = π, as desired. However, the non-causal, symmetric approximation has a frequency response whose magnitude is maximum at Ω = π/2: it is not a good approximation at high frequencies; for example, at Ω = π, H_N(π) = 0.
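These observations can be checked numerically; the sketch below (ours, with T_s chosen arbitrarily) evaluates the causal and non-causal frequency responses at a low frequency, at Ω = π/2, and at Ω = π:

```python
import numpy as np

Ts = 0.1
omega = np.array([0.01, np.pi / 2, np.pi])

H_C = (1 - np.exp(-1j * omega)) / Ts                          # causal difference
H_N = (np.exp(1j * omega) - np.exp(-1j * omega)) / (2 * Ts)   # central difference

print(np.abs(H_C))  # ~ omega/Ts near 0, growing to 2/Ts at Omega = pi
print(np.abs(H_N))  # maximum 1/Ts at Omega = pi/2, exactly 0 at Omega = pi
```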
8.7.2 Higher derivatives

The above can be generalized to higher derivatives by successive application of the first derivative; for example, the second derivative y(t) = ü(t) can be causally approximated by

  y(t) ≈ ( (u(t) - u(t-τ))/τ - (u(t-τ) - u(t-2τ))/τ ) / τ = (u(t) - 2u(t-τ) + u(t-2τ)) / τ².

Therefore,

  y[n] = (1/T_s²)(u[n] - 2u[n-1] + u[n-2]), if τ = T_s.
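As a sanity check, applying this second-difference approximation to samples of u(t) = t², whose second derivative is the constant 2, recovers 2 at every sample (the second difference is exact for quadratics). The script below is a sketch with an arbitrary T_s:

```python
import numpy as np

Ts = 0.1
t = np.arange(20) * Ts
u = t ** 2                                    # u(t) = t^2, so u''(t) = 2
n = np.arange(2, 20)                          # need two past samples
y = (u[n] - 2 * u[n - 1] + u[n - 2]) / Ts**2  # causal second difference
print(np.max(np.abs(y - 2.0)))                # zero up to floating-point rounding
```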