Elec4621: Advanced Digital Signal Processing Chapter 8: Wiener and Adaptive Filtering


Dr. D. S. Taubman
May 2011

1 Wiener Filtering

A Wiener filter is one which provides the Minimum Mean Squared Error (MMSE) prediction, $\bar{y}[n]$, of the outcomes, $y[n]$, from one random process, $Y[n]$, using knowledge of the outcomes, $x[n]$, from another random process, $X[n]$. Of course, the random processes $X[n]$ and $Y[n]$ cannot be completely independent (or uncorrelated) for the prediction to be of any use. As in our previous study of linear prediction, we will assume that all random processes are WSS, and we shall confine our attention to predictors which are linear functions of the input samples, $x[n]$ through $x[n-N]$. Under these conditions, our objective is to find the coefficients $a_0$ through $a_N$ which minimize the expected power of the error, $e[n] = y[n] - \bar{y}[n]$. That is, we wish to minimize

$$J = \mathrm{E}\left[\big(Y[n] - \bar{Y}[n]\big)^2\right] = \mathrm{E}\left[\left(Y[n] - \sum_{k=0}^{N} a_k X[n-k]\right)^2\right]$$

By direct differentiation, or by application of the orthogonality principle, we find that the minimizing solution must satisfy

$$0 = \mathrm{E}\big[E[n]\, X[n-p]\big] = \mathrm{E}\left[\left(Y[n] - \sum_{k=0}^{N} a_k X[n-k]\right) X[n-p]\right] = R_{YX}[p] - \sum_{k=0}^{N} a_k R_{XX}[p-k], \qquad 0 \le p \le N$$

which may be expressed in matrix form as

$$\underbrace{\begin{pmatrix} R_{XX}[0] & R_{XX}[1] & \cdots & R_{XX}[N] \\ R_{XX}[1] & R_{XX}[0] & \cdots & R_{XX}[N-1] \\ \vdots & \vdots & \ddots & \vdots \\ R_{XX}[N] & R_{XX}[N-1] & \cdots & R_{XX}[0] \end{pmatrix}}_{\mathbf{R}_X} \underbrace{\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{pmatrix}}_{\mathbf{a}} = \underbrace{\begin{pmatrix} R_{YX}[0] \\ R_{YX}[1] \\ \vdots \\ R_{YX}[N] \end{pmatrix}}_{\mathbf{r}} \qquad (1)$$

The coefficients $a_0$ through $a_N$ are the filter taps, $h[n] = a_n$, of an FIR filter which produces $\bar{y}[n]$ when applied to the source sequence, $x[n]$. This filter is known as the $N$th order Wiener filter.

1.1 Relationship to Linear Prediction

It should be obvious from the form of the solution above that Wiener filtering and linear prediction are close relatives. In fact, the ordinary linear predictor is a special case of the Wiener filtering problem, in which the target random process is $Y[n] = X[n+1]$. In this case,

$$R_{YX}[m] = \mathrm{E}\big[Y[n]\, X[n-m]\big] = \mathrm{E}\big[X[n+1]\, X[n-m]\big] = R_{XX}[m+1]$$

Substituting this into equation (1), we obtain the normal equations of an $(N+1)$st order linear predictor. The fact that both Wiener filtering and linear prediction require us to invert the same symmetric, positive definite, Toeplitz matrix, $\mathbf{R}_X$, suggests that whatever good mechanisms we find for doing linear prediction should be useful also for Wiener filtering. This is indeed the case. In fact, we shall find that adaptive linear predictors lie at the heart of the best adaptive Wiener filters. We shall temporarily defer further consideration of this connection, however, until Section 5.
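
For concreteness, here is a minimal numerical sketch of equation (1), not part of the original notes: it forms biased estimates of $R_{XX}[m]$ and $R_{YX}[m]$ from sample records of $x[n]$ and $y[n]$, then exploits the Toeplitz structure noted above. The function name and the use of biased correlation estimates are illustrative choices.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def wiener_fir(x, y, N):
    """Estimate the order-N Wiener filter taps a[0..N] by solving the
    Toeplitz normal equations (1) with sample correlation estimates."""
    L = len(x)
    # Biased estimates of R_XX[m] and R_YX[m] for m = 0..N
    Rxx = np.array([np.dot(x[m:], x[:L - m]) / L for m in range(N + 1)])
    Ryx = np.array([np.dot(y[m:], x[:L - m]) / L for m in range(N + 1)])
    # R_X is symmetric Toeplitz with first column Rxx; solve R_X a = r
    return solve_toeplitz((Rxx, Rxx), Ryx)

# The prediction is then y_bar[n] = sum_k a[k] x[n-k], e.g.
#   y_bar = np.convolve(x, wiener_fir(x, y, N))[:len(x)]
```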

1.2 Unconstrained Wiener Filters

It is instructive to consider what happens if we abandon the requirement that the Wiener filter be finite in extent, or even causal. In this case, we seek any impulse response sequence, $h[n]$, which minimizes the expected squared error,

$$J = \mathrm{E}\left[\left(Y[n] - \sum_{k=-\infty}^{\infty} h[k]\, X[n-k]\right)^2\right]$$

and again the solution is given by applying the orthogonality principle, from which we get

$$0 = \mathrm{E}\left[\left(Y[n] - \sum_{k} h[k]\, X[n-k]\right) X[n-p]\right] = R_{YX}[p] - \sum_{k} h[k]\, R_{XX}[p-k] = R_{YX}[p] - (h \star R_{XX})[p], \qquad p \in \mathbb{Z}$$

In the Fourier domain, the above relationship becomes

$$\hat{h}_\infty(\omega) = \frac{S_{YX}(\omega)}{S_{XX}(\omega)} \qquad (2)$$

where $S_{YX}(\omega)$ is the cross-power spectral density between $Y[n]$ and $X[n]$, and $S_{XX}(\omega)$ is the power spectral density of $X[n]$.

Equation (2) provides us with a simple expression for the unconstrained MMSE solution to the general linear prediction problem. Taking the inverse DTFT of $\hat{h}_\infty(\omega)$ generally yields an infinitely long, non-causal impulse response, which cannot be practically implemented. However, we could consider designing a realizable filter (FIR or IIR) to match $\hat{h}_\infty(\omega)$ as closely as possible. As we shall see in Section 1.3, the constrained Wiener solution in equation (1) can be viewed as the result of such a filter design procedure.

Equation (2) is of particular interest in many problems, since it provides us with an explanation of what the Wiener filter is trying to do. As you know, the best way to understand filters is in the frequency domain, since that is where their behaviour can be described simply as that of selectively attenuating (or amplifying) individual frequency components. Equation (2) tells us that the optimal Wiener filter should scale each frequency component by the ratio between the cross- and auto-power spectra, $S_{YX}(\omega)$ and $S_{XX}(\omega)$.
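
Equation (2) can also be evaluated directly from data. The sketch below is not from the original notes; it uses Welch-style spectral estimates from scipy, and the conjugation convention of csd should be checked against the definition of $S_{YX}$ in use. All names here are illustrative.

```python
import numpy as np
from scipy.signal import csd, welch

def unconstrained_wiener_response(x, y, nperseg=256):
    """Estimate h_inf(w) = S_YX(w) / S_XX(w) (equation (2)) from sample
    records x[n] and y[n], using Welch averaging for both spectra."""
    # scipy's csd(x, y) averages conj(X)*Y, which appears to match the
    # convention R_YX[m] = E[Y[n] X[n-m]] used here (worth verifying).
    f, Syx = csd(x, y, nperseg=nperseg)
    _, Sxx = welch(x, nperseg=nperseg)
    return f, Syx / Sxx
```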

It is also worth deriving an expression for the prediction error power spectrum, based on the ideal Wiener filter, $\hat{h}_\infty(\omega)$. To do this, note that

$$R_{EE}[m] = \mathrm{E}\big[E[n]\, E[n-m]\big] = \mathrm{E}\Big[\big(Y[n] - \bar{Y}[n]\big)\big(Y[n-m] - \bar{Y}[n-m]\big)\Big] = R_{YY}[m] + R_{\bar{Y}\bar{Y}}[m] - R_{\bar{Y}Y}[m] - R_{Y\bar{Y}}[m]$$

Taking Fourier transforms, we get

$$\begin{aligned} S_{EE}(\omega) &= S_{YY}(\omega) + S_{\bar{Y}\bar{Y}}(\omega) - 2\,\Re\big(S_{\bar{Y}Y}(\omega)\big) \\ &= S_{YY}(\omega) + |\hat{h}_\infty(\omega)|^2 S_{XX}(\omega) - 2\,\Re\big(\hat{h}_\infty(\omega)\, S_{XY}(\omega)\big) \\ &= S_{YY}(\omega) + \frac{|S_{YX}(\omega)|^2}{S_{XX}(\omega)} - 2\,\frac{S_{YX}(\omega)\, S_{XY}(\omega)}{S_{XX}(\omega)} \\ &= S_{YY}(\omega) - \frac{|S_{YX}(\omega)|^2}{S_{XX}(\omega)} = S_{YY}(\omega)\left(1 - \frac{|S_{YX}(\omega)|^2}{S_{YY}(\omega)\, S_{XX}(\omega)}\right) \end{aligned}$$

The ratio $|S_{YX}(\omega)|^2 / \big(S_{YY}(\omega)\, S_{XX}(\omega)\big)$ is a measure of how correlated $X$ and $Y$ are at any particular frequency. Its value is guaranteed to lie in the range 0 (completely uncorrelated) to 1 (perfectly correlated). (This ratio is known as the magnitude-squared coherence of the two processes.)

1.3 Relationship to Filter Design

Equation (1) bears a striking resemblance to the equations we obtained previously in connection with optimal FIR filter design, based upon a weighted mean squared error criterion. In particular, if $\hat{h}_d(\omega)$ is any desired frequency response and $\hat{W}(\omega)$ a frequency weighting function, the FIR least squares design objective is to find an order-$N$ filter, $h[n]$, which minimizes

$$\mathcal{E} = \frac{1}{2\pi} \int_{-\pi}^{\pi} |\hat{W}(\omega)|^2\, \big|\hat{h}(\omega) - \hat{h}_d(\omega)\big|^2\, d\omega$$

We found that this problem leads to the following linear equations

$$\begin{pmatrix} r[0] & r[1] & \cdots & r[N] \\ r[1] & r[0] & \cdots & r[N-1] \\ \vdots & \vdots & \ddots & \vdots \\ r[N] & r[N-1] & \cdots & r[0] \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{pmatrix} = \begin{pmatrix} d[0] \\ d[1] \\ \vdots \\ d[N] \end{pmatrix} \qquad (3)$$

where $r[n]$ is the sequence whose DTFT is $\hat{r}(\omega) = |\hat{W}(\omega)|^2$, and $d[n]$ is the sequence whose DTFT is $\hat{d}(\omega) = |\hat{W}(\omega)|^2\, \hat{h}_d(\omega)$. Comparing equations (1) and (3), we see that the optimal filter design strategy will design the optimal $N$th order Wiener filter, if we set

$$r[n] = R_{XX}[n] \iff |\hat{W}(\omega)|^2 = S_{XX}(\omega)$$

and

$$d[n] = R_{YX}[n] \iff |\hat{W}(\omega)|^2\, \hat{h}_d(\omega) = S_{YX}(\omega) \iff \hat{h}_d(\omega) = \frac{S_{YX}(\omega)}{S_{XX}(\omega)} = \hat{h}_\infty(\omega)$$

This is a most interesting result. It tells us firstly that the optimal constrained Wiener filter, $h[n]$, may be understood as the best approximation to the ideal unconstrained Wiener filter, $h_\infty[n]$, in the sense of minimizing a weighted squared error between $\hat{h}(\omega)$ and $\hat{h}_\infty(\omega)$. Secondly, it tells us that the weighting function, $|\hat{W}(\omega)|^2$, which makes the filter design and the direct Wiener filter design problems identical, is the power spectrum of the source process, $X[n]$. That is, at frequencies where $X[n]$ has very little power, the accuracy with which $\hat{h}(\omega)$ approximates $\hat{h}_\infty(\omega)$ is not very important. This makes perfect sense, since if very little power is going into the Wiener filter at some frequencies, it does not much matter what the filter does with those frequency components. Notice that the weighting function has no dependence at all upon the statistics of the target random process, $Y[n]$. Thirdly, recall that the minimum mean squared error FIR filter design procedure reduces to windowing with the all-or-nothing window if and only if the frequency weighting function is uniform. But this never happens in practical applications of Wiener filtering, since $S_{XX}(\omega)$ is uniform only if $X[n]$ is an uncorrelated (white) random process. Thus, the optimal $N$th order Wiener filter is always better (usually much better) than the filter obtained by simply setting $h[n] = h_\infty[n]$ over the region of support, $0 \le n \le N$.

2 The LMS Algorithm

As we have seen, Wiener filters depend upon knowledge of the cross- and auto-correlation functions, $R_{YX}[m]$ and $R_{XX}[m]$. In some cases we may be able to estimate these functions ahead of time and fix the Wiener filter accordingly. More commonly though, the statistics may vary slowly with time, and we must adapt the Wiener filter coefficients on the basis of the outcomes which have been seen so far. One way to do this is to divide the time axis into blocks of length $L$. In each block, we might collect $L$ samples from the sequences $x[n]$ and $y[n]$, from which we estimate the correlation statistics and find the Wiener coefficients to be used in the next block. Strategies of this form are called block-adaptive. One drawback of block-adaptive filters is that the filter coefficients are only able to respond to changes in the underlying statistics at block boundaries. We could consider estimating the statistics based on a moving window of length $L$, recalculating the statistics and the Wiener coefficients after each new sample arrives, but this would be extremely costly from a computational perspective. Block-adaptive schemes may also require substantial memory to hold the past outcomes used to estimate the correlation statistics.

In this section, we describe a simple algorithm which can be used to incrementally adjust the Wiener coefficients as each new sample arrives. The algorithm does not require us to keep track of past outcomes for the purpose of estimating statistics, although it is generally quite slow to learn, or to adapt to changes in the statistics.

The so-called LMS (Least Mean Squares) algorithm uses a simple gradient descent strategy to incrementally adjust the coefficient vector, $\mathbf{a} = (a_0, \ldots, a_N)^t$, towards the minimum of our quadratic objective function,

$$J(a_0, a_1, \ldots, a_N) = \mathrm{E}\left[\left(Y[n] - \sum_{k=0}^{N} a_k X[n-k]\right)^2\right]$$

The gradient vector,

$$\nabla J = \left(\frac{\partial J}{\partial a_0}, \frac{\partial J}{\partial a_1}, \ldots, \frac{\partial J}{\partial a_N}\right)^t$$

points in the direction of steepest ascent of the function $J(\mathbf{a})$, so that $-\nabla J$ points in the direction of steepest descent. The idea behind the LMS algorithm is to update the coefficients according to

$$\mathbf{a}^{(n+1)} = \mathbf{a}^{(n)} - \mu\, \nabla J^{(n)}$$

where $\nabla J^{(n)}$ is an estimate of $\nabla J$ formed at time $n$, $\mathbf{a}^{(n)}$ is the vector of Wiener coefficients at time $n$, and $\mu$ is a small positive constant which controls the rate at which the Wiener coefficients converge to the minimum of the quadratic function, $J(\mathbf{a})$. The following example illustrates these concepts for the simplest case, in which there is only one coefficient to be adapted.

Example 1 Consider the parabolic function, $J(a_0) = \alpha (a_0 - \beta)^2 + \gamma$, illustrated in Figure 1. It is important to realize that the parameters $\alpha$, $\beta$ and $\gamma$ are not actually known to the LMS algorithm; if they were, we could immediately recognize that $J(a_0)$ is minimized by setting $a_0 = \beta$. Now let $a_0^{(n)}$ denote the $n$th estimate of $a_0$, starting from some arbitrary position, $a_0^{(0)}$. If we ignore any inaccuracies in our estimate of the gradient (there will be substantial inaccuracies in practice), the $n$th gradient estimate becomes $\nabla J^{(n)} = 2\alpha\big(a_0^{(n)} - \beta\big)$, and the gradient descent algorithm incrementally adjusts $a_0$ according to

$$a_0^{(n+1)} = a_0^{(n)} - \mu\, \nabla J^{(n)} = a_0^{(n)} - 2\mu\alpha\big(a_0^{(n)} - \beta\big)$$

Evidently, if we are so lucky as to select $\mu = \frac{1}{2\alpha}$, we immediately hit the jackpot, getting $a_0^{(n+1)} = \beta$, after which all subsequent gradient estimates will be 0.

If we select $\mu < \frac{1}{2\alpha}$, we move toward the minimizing solution $a_0 = \beta$, while if we select $\mu > \frac{1}{2\alpha}$, we overshoot, and may even wind up with $a_0^{(n+1)}$ a worse solution than $a_0^{(n)}$. The challenge is to keep $\mu$ small enough that it is likely to be smaller than $\frac{1}{2\alpha}$.

[Figure 1: Illustration of gradient descent of the one-dimensional quadratic function, $J(a_0)$.]

Now returning to our adaptive Wiener filtering problem, we have

$$\nabla J = \begin{pmatrix} \partial J / \partial a_0 \\ \partial J / \partial a_1 \\ \vdots \\ \partial J / \partial a_N \end{pmatrix} = -2 \begin{pmatrix} \mathrm{E}\big[E[n]\, X[n]\big] \\ \mathrm{E}\big[E[n]\, X[n-1]\big] \\ \vdots \\ \mathrm{E}\big[E[n]\, X[n-N]\big] \end{pmatrix} = -2 \begin{pmatrix} R_{EX}[0] \\ R_{EX}[1] \\ \vdots \\ R_{EX}[N] \end{pmatrix}$$

To accurately estimate $\nabla J$, we would need to estimate the cross-correlation between the source process, $X[n]$, and the error process, $E[n]$, produced by the current set of filter parameters. But we keep changing the filter parameters! Instead, the LMS algorithm works with the following very crude approximation:

$$\nabla J^{(n)} = -2 \begin{pmatrix} e[n]\, x[n] \\ e[n]\, x[n-1] \\ \vdots \\ e[n]\, x[n-N] \end{pmatrix}$$

That is, the LMS algorithm uses the instantaneous product between $e[n]$ and $x[n-m]$ in place of the statistical expectation, or any kind of time average. Of course, this approximation is likely to be wildly inaccurate. However, if we make only very small adjustments to the filter coefficients at each time step (i.e., if $\mu$ is very small), the instantaneous approximation errors in $\nabla J^{(n)}$ can be expected to average out, so that we move slowly toward the optimal filter coefficients; a minimal implementation sketch follows below.
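
The complete LMS update is only a few lines of code. The following sketch is not from the original notes; it implements exactly the update $\mathbf{a}^{(n+1)} = \mathbf{a}^{(n)} - \mu \nabla J^{(n)}$ with the instantaneous gradient above, and the function name and return values are illustrative.

```python
import numpy as np

def lms(x, y, N, mu):
    """Adaptive Wiener filtering by the LMS algorithm: predict y[n] from
    x[n..n-N], nudging the taps along the instantaneous negative gradient,
    a <- a + 2*mu*e[n]*x[n-k] for k = 0..N."""
    a = np.zeros(N + 1)
    y_bar = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(len(x)):
        # The most recent N+1 inputs: x[n], x[n-1], ..., x[n-N]
        xn = np.array([x[n - k] if n >= k else 0.0 for k in range(N + 1)])
        y_bar[n] = a @ xn
        e[n] = y[n] - y_bar[n]
        a += 2 * mu * e[n] * xn       # a^(n+1) = a^(n) - mu * gradJ^(n)
    return y_bar, e, a
```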

When the coefficients reach their optimal values, $\nabla J$ should be $\mathbf{0}$, but our instantaneous estimates, $\nabla J^{(n)}$, will not generally be zero, so the filter coefficients will fluctuate about their optimal values. The amount of fluctuation depends upon the descent parameter, $\mu$. If $\mu$ is very small, the adaptive Wiener filter will converge only slowly to the optimal solution, but the coefficients will then remain very stable, unless or until the statistics change. If $\mu$ is larger, convergence is faster, and the filter can track changes in the statistics more rapidly, but the coefficients will tend to bounce around their optimal values, leading to sub-optimal prediction. If $\mu$ is too large, convergence might not be achieved at all, as suggested by Example 1, and the adaptive algorithm will be unstable. It can be shown, however, that the LMS algorithm will be stable so long as the descent parameter, $\mu$, satisfies

$$0 < \mu < \frac{1}{\text{total input power}} = \frac{1}{(N+1)\, R_{XX}[0]}$$

where $R_{XX}[0]$ is the average power of each of the $N+1$ inputs to the Wiener filter.

3 Some Applications of Wiener Filtering

3.1 Deconvolution

A linear time-invariant system produces an output $r[n]$ ("r" for response or received) from its input $s[n]$ ("s" for source or sent), which can be described by convolution,

$$r[n] = \sum_{k} h_s[k]\, s[n-k]$$

where $h_s[k]$ is the impulse response of the system. The object of deconvolution is to recover $s[n]$ from $r[n]$. This problem occurs in any application where the desired signal has been corrupted by a linear time-invariant system.

Example 2 An audio recording is made using an inexpensive microphone, whose transfer function, $\hat{h}_s(\omega)$, has a highly non-uniform magnitude response. The input to the microphone may be modeled as the desired signal, $s[n]$, and the poor quality recording is $r[n]$. The task of enhancing the recording may be understood as that of recovering $s[n]$ from $r[n]$. You first measure the impulse response of the microphone, $h_s[k]$ (useful methods for this were suggested in Tutorial 5). You then need to deconvolve.

Example 3 An amplitude modulation system generates an output waveform, $s[n] = \sum_k A_k\, g[n - Pk]$, where $A_k$ is an information bearing sequence of scaling factors, $g[n]$ is a shaping pulse, which determines the spectral properties of the modulated waveform, and $P$ is the symbol clock period, which is the number of time steps, $n$, between each pair of transmitted symbols, $A_k$. If this signal is corrupted by additive white noise, the most robust demodulation strategy is to estimate $A_k$ as

$$\hat{A}_k = \frac{1}{E_g} \sum_{n=0}^{L-1} g[n]\, s[Pk + n] \qquad (4)$$

where the shaping pulse, $g[n]$, has support $0 \le n < L$, and $E_g = \sum_n g^2[n]$ is the energy of $g[n]$. That is, $\hat{A}_k$ is obtained by correlating the received waveform against the shaping pulse. This is known as matched filtering.
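
As a small illustration of equation (4), and not part of the original notes, the following sketch correlates a received waveform against the shaping pulse at each symbol position; all names are illustrative.

```python
import numpy as np

def matched_filter_demod(s, g, P):
    """Recover estimates of the scaling factors A_k from waveform s[n] by
    correlating against the shaping pulse g[n], as in equation (4)."""
    L = len(g)
    Eg = np.dot(g, g)                      # pulse energy E_g
    K = (len(s) - L) // P + 1              # number of complete symbols
    return np.array([np.dot(g, s[P * k : P * k + L]) for k in range(K)]) / Eg
```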

In practice, however, the transmitted waveform is also subject to distortion in the communications channel. The channel distortion can often be modeled as an LTI system, with transfer function $\hat{h}_s(\omega)$, which converts the original modulated waveform, $s[n]$, into a received waveform, $r[n]$. In order to robustly recover the encoded sequence, $A_k$, the channel distortion must first be undone, which is a deconvolution problem. In many cases, the channel transfer function must be adaptively estimated, in which case the deconvolution problem is known as adaptive channel equalization.

Since the system is LTI, its inverse must also be LTI. In fact, deconvolution would appear to be a simple matter: simply convolve $r[n]$ with the reciprocal filter, $h_r[n]$, whose Fourier transform satisfies

$$\hat{h}_r(\omega) = \frac{1}{\hat{h}_s(\omega)} \qquad (5)$$

However, this apparent solution has the following major difficulties:

1. $\hat{h}_r(\omega)$ may not be a realizable filter. In most practical applications, $h_s[n]$ does not have a finite order Z-transform, since it arises in connection with a physical system, as opposed to a digital filter implementation. Consequently, $h_r[n]$ may not have finite numbers of poles and zeros. Even worse, if $\hat{h}_s(\omega)$ does not have minimum phase, no causally stable inverse filter exists, whether realizable or otherwise.

2. $\hat{h}_s(\omega)$ may well be close to zero at some frequencies, $\omega$. At these frequencies, the reciprocal filter, $\hat{h}_r(\omega)$, must be very large, which will dramatically amplify the effects of noise or numerical round-off errors.

To overcome the first difficulty mentioned above, one might use one of the filter design techniques considered previously, taking $1/\hat{h}_s(\omega)$ to be our desired frequency response, $\hat{h}_d(\omega)$, and designing a realizable filter which approximates $\hat{h}_d(\omega)$ as accurately as possible in some suitable sense. Unfortunately, this approach will not directly deal with the common problem of noise amplification (item 2 above).

To develop a Wiener deconvolution filter which is sensitive to both of the concerns raised above, we will associate $Y[n]$ with the source random process, $S[n]$, and $X[n]$ with the received signal, which we model as

$$X[n] = (h_s \star Y)[n] + V[n] \qquad (6)$$

where $V[n]$ models the effect of additive random noise, as shown in Figure 2.

[Figure 2: LTI system with additive noise.]

The role of the Wiener filter is then to estimate $y[n] = s[n]$, as accurately as possible, from the noisy samples produced by the system in question. From equation (6), and writing $h_s^{-}[n] \equiv h_s[-n]$ for the time-reversed impulse response, we may find the cross- and auto-correlation sequences as

$$R_{YX}[m] = \mathrm{E}\left[Y[n]\left(V[n-m] + \sum_k h_s[k]\, Y[n-m-k]\right)\right] = R_{YV}[m] + \sum_k h_s[k]\, R_{YY}[m+k] = R_{YV}[m] + (h_s^{-} \star R_{YY})[m]$$

and

$$R_{XX}[m] = \mathrm{E}\left[\left(V[n] + \sum_k h_s[k]\, Y[n-k]\right)\left(V[n-m] + \sum_j h_s[j]\, Y[n-m-j]\right)\right] = R_{VV}[m] + (h_s^{-} \star h_s \star R_{YY})[m] + (h_s \star R_{YV})[m] + (h_s^{-} \star R_{VY})[m]$$

In most cases of practical interest, the noise is white and uncorrelated with the signal we are trying to estimate ($Y[n] = S[n]$ in this case), so $R_{YV}[m] = 0$ for all $m$, and $R_{VV}[m] = \sigma_V^2\, \delta[m]$. We then get

$$R_{YX}[m] = (h_s^{-} \star R_{YY})[m] \qquad (7)$$
$$R_{XX}[m] = \sigma_V^2\, \delta[m] + (h_s^{-} \star h_s \star R_{YY})[m] \qquad (8)$$

3.1.1 Intuition from the Unconstrained Solution

It is instructive to begin by considering the unconstrained solution. In the Fourier domain, equations (7) and (8) become

$$S_{YX}(\omega) = \hat{h}_s^*(\omega)\, S_{YY}(\omega)$$
$$S_{XX}(\omega) = \sigma_V^2 + |\hat{h}_s(\omega)|^2\, S_{YY}(\omega)$$

and the unconstrained Wiener filter is given by

$$\hat{h}_\infty(\omega) = \frac{S_{YX}(\omega)}{S_{XX}(\omega)} = \frac{\hat{h}_s^*(\omega)}{|\hat{h}_s(\omega)|^2 + \sigma_V^2 / S_{YY}(\omega)} \qquad (9)$$

To see how the possibility of noise is being factored into this solution, suppose we set $\sigma_V^2 = 0$. Then we get

$$\hat{h}_\infty(\omega) = \frac{\hat{h}_s^*(\omega)}{|\hat{h}_s(\omega)|^2} = \frac{1}{\hat{h}_s(\omega)}$$

which is the reciprocal filter of equation (5). Evidently, $\sigma_V^2 / S_{YY}(\omega)$ places a lower bound on the denominator in equation (9). At frequencies where $|\hat{h}_s(\omega)|^2 \gg \sigma_V^2 / S_{YY}(\omega)$, $\hat{h}_\infty(\omega)$ closely follows the reciprocal filter, while at frequencies where $|\hat{h}_s(\omega)|^2 \ll \sigma_V^2 / S_{YY}(\omega)$, the unconstrained Wiener filter has a response close to 0. The way to understand this is that wherever $\hat{h}_s(\omega)$ is very small, we do better to concentrate on filtering out the noise, rather than trying to recover the heavily attenuated signal.

It is important to realize that the Wiener deconvolution filter depends not just on the system response, $\hat{h}_s(\omega)$, whose effects we are trying to undo, but also on the power spectrum of the uncorrupted signal, $Y[n] = S[n]$. Since we do not receive this signal, we may have some difficulty reliably estimating its statistics. Similarly, the noise statistics must also be estimated.
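
Equation (9) is easy to apply on a DFT grid. The following sketch is not from the original notes; it assumes the system response $h_s[n]$ and an estimate of the per-frequency signal-to-noise ratio $S_{YY}(\omega)/\sigma_V^2$ are supplied, and it works with circular convolution, so real use would need appropriate zero-padding.

```python
import numpy as np

def wiener_deconvolve(r, hs, snr_spectrum):
    """Frequency-domain Wiener deconvolution, per equation (9).

    r            -- received signal x[n] = (hs * s)[n] + v[n]
    hs           -- system impulse response h_s[n]
    snr_spectrum -- assumed estimate of S_YY(w)/sigma_V^2, len(r) bins
    """
    Hs = np.fft.fft(hs, len(r))
    H = np.conj(Hs) / (np.abs(Hs) ** 2 + 1.0 / snr_spectrum)
    return np.fft.ifft(H * np.fft.fft(r)).real
```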

3.2 Decision Feedback Equalization

In a number of important applications, we get to observe the outcomes of the process, $Y[n] = S[n]$, which we are trying to estimate (after some delay, of course). One such application is that of adaptive channel equalization in a digital communications system. The context of this application has already been introduced in Example 3.

If the channel is approximately equalized and the demodulator is working well, we expect to recover the sequence of information bearing scaling factors, $A_k$, approximately. In a digital communication system, these analog quantities are thresholded to recover a discrete set of transmitted symbols. For example, in binary ASK (Amplitude Shift Keying), the encoder uses only two amplitude values,

$$A_k = \begin{cases} A & \text{if } b_k = 0 \\ -A & \text{if } b_k = 1 \end{cases}$$

where $b_k$ is a single binary digit, which is being communicated using the modulation scheme. In this case, the demodulator assumes that $b_k = 0$ was transmitted if it recovers $\hat{A}_k > 0$ using equation (4), and it decodes $b_k = 1$ if $\hat{A}_k \le 0$.

Even if our channel equalizer is not working all that well, and even if there is a decent amount of noise, we expect to decode the binary digits correctly with high probability; error probabilities of 1 in $10^3$ are considered high in most communication environments. Since we expect to recover the originally transmitted binary digits with high reliability, we can use this information to reconstruct the originally transmitted waveform, $s[n]$ (after some delay), from which we can recover the cross- and auto-correlation statistics and adapt our Wiener deconvolution filter (the channel equalizing filter). Such a system is known as a decision feedback equalizer, since it relies upon the accuracy of the thresholding decisions which recover the transmitted bits, $b_k$, from the corrupted sequence of amplitude scaling factors, $\hat{A}_k$.

[Figure 3: Decision feedback equalizer using the LMS algorithm to adapt the Wiener filter coefficients.]

Figure 3 shows a block diagram of a decision feedback equalizer which utilizes the LMS algorithm to adapt the Wiener filter coefficients. Notice that the demodulation operation in equation (4) requires us to look ahead $L$ samples into the future, since the shaping pulse $g[n]$ has support $0 \le n < L$. This means that there is a delay of at least $L$ samples from the point at which the Wiener filter provides its estimate, $\bar{s}[n]$, of $s[n]$, to the point at which we are able to recover an estimate for $s[n]$ to use in driving the LMS algorithm.

For this reason, the estimated gradient, $\nabla J^{(n)}$, which we use to update the Wiener filter coefficients from time step $n$ to time step $n+1$, must be based on the outcome $s[n-L]$, and hence on the prediction error, $e[n-L]$. The gradient is thus estimated as

$$\nabla J^{(n)} = -2 \begin{pmatrix} e[n-L]\, x[n-L] \\ e[n-L]\, x[n-L-1] \\ \vdots \\ e[n-L]\, x[n-L-N] \end{pmatrix}$$

This explains the notation used in the figure.

3.3 Active Noise Cancellation

Active noise cancellation is a classic application for adaptive Wiener filtering. Figure 4 depicts a typical noise cancellation environment. The object is to remove unwanted background noise, $y[n]$, from the signal, $q[n] = y[n] + w[n]$, received at the output of a microphone ("mic2" in the figure). Figure 4 suggests a specific example in which the unwanted ambient noise comes from a helicopter's rotors, and the desired signal, $w[n]$, is the voice of the helicopter pilot speaking into the microphone.

[Figure 4: Active background noise cancellation system.]

In order to cancel out the ambient noise, we introduce a second microphone ("mic1" in the figure), usually positioned much closer to the source of the noise we wish to cancel than to the speaker.

Writing $u[n]$ for the noise signal at its source, the noise component received at mic1 is

$$x[n] = (h_{ux} \star u)[n]$$

while the noise component received at mic2 is

$$y[n] = (h_{uy} \star u)[n]$$

where $\hat{h}_{ux}(\omega)$ and $\hat{h}_{uy}(\omega)$ are the LTI transfer functions describing the transmission path from the noise source to each of the two microphones. It follows that $y[n]$ and $x[n]$ should be connected through

$$\hat{y}(\omega) = \hat{x}(\omega)\, \frac{\hat{h}_{uy}(\omega)}{\hat{h}_{ux}(\omega)}$$

so we may interpret the role of the Wiener filter in Figure 4 as that of approximating the transfer function $\hat{h}_{uy}(\omega) / \hat{h}_{ux}(\omega)$. In practice, this transfer function is usually a slowly varying function, dependent upon the positions of the microphones and any interfering objects, such as the speaker's head. More significantly, there will invariably be additional sources of noise which ensure that $y[n]$ cannot be expressed as a deterministic function of $x[n]$. For these reasons, we must implement an adaptive Wiener filter, whose role is to provide the best estimate, $\bar{y}[n]$, of $y[n]$, using the cross- and auto-covariance statistics.

At first sight, it may appear that we have a problem. Since we do not have direct access to $y[n]$, it does not appear that we will be able to compute the error signal, $e[n] = y[n] - \bar{y}[n]$, needed to drive the LMS algorithm. More generally, it does not appear that we will be able to estimate the cross-correlation sequence, $R_{YX}[m]$, based on observations of the outputs from the two microphones. However, we do have access to $q[n]$, and

$$R_{QX}[m] = R_{YX}[m] + R_{WX}[m] \approx R_{YX}[m]$$

The approximation follows from the fact that mic1 is well separated from the speaker, so that $X[n]$ and $W[n]$ are largely uncorrelated. Proceeding on the assumption that $X[n]$ and $W[n]$ are uncorrelated, we may drive the LMS algorithm with $q[n] - \bar{y}[n]$ in place of $y[n] - \bar{y}[n]$. In particular, the gradient approximation is

$$\nabla J^{(n)} = -2 \begin{pmatrix} (q[n] - \bar{y}[n])\, x[n] \\ (q[n] - \bar{y}[n])\, x[n-1] \\ \vdots \\ (q[n] - \bar{y}[n])\, x[n-N] \end{pmatrix}, \quad \text{with expectation} \quad -2 \begin{pmatrix} R_{EX}[0] + R_{WX}[0] \\ R_{EX}[1] + R_{WX}[1] \\ \vdots \\ R_{EX}[N] + R_{WX}[N] \end{pmatrix} \approx -2 \begin{pmatrix} R_{EX}[0] \\ R_{EX}[1] \\ \vdots \\ R_{EX}[N] \end{pmatrix}$$
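
In terms of the hedged lms() sketch given earlier, the noise canceller simply uses $q[n]$ as the target in place of the unavailable $y[n]$, and the recovered speech is the residual $e[n]$ itself. The toy signals and parameter values below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(10000)                     # noise source u[n]
x = np.convolve(u, [1.0, 0.5, 0.2])[:10000]        # mic1: (h_ux * u)[n]
y = np.convolve(u, [0.7, 0.3, 0.4, 0.1])[:10000]   # mic2 noise: (h_uy * u)[n]
w = 0.5 * np.sin(0.05 * np.arange(10000))          # stand-in for the voice
q = y + w                                          # mic2 output

y_bar, e, a = lms(x, q, N=16, mu=1e-3)             # reuse the earlier sketch
speech_estimate = e                                # e[n] ~ w[n] after convergence
```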

4 Linear Prediction Revisited

In this section, we return to the linear prediction problem, in which $x[n]$ is to be predicted from the past $N$ outcomes, $x[n-1]$ through $x[n-N]$. Our purpose is to show that linear prediction has a particularly elegant structure when realized as a lattice filter, and that the reflection coefficients of the lattice filter may be found directly from the auto-correlation function, $R_{XX}[m]$, without needing to solve the normal equations explicitly through matrix inversion. We shall see that each successive output from the upper branch of the lattice predictor is the prediction error from the optimal linear predictor of the corresponding order, and we shall also see that each successive output from the lower branch of the lattice predictor may be interpreted as a backward prediction error. In Section 5, we will show that the backward prediction errors are uncorrelated with each other, and that they provide an entirely different strategy for implementing and adapting the coefficients of Wiener filters.

The material presented here involves some less obvious mathematical tricks, and you are not expected to be able to reproduce these in an exam. However, you should pay special attention to the results of the analysis, which are summarized in the preceding paragraph.

4.1 Preliminaries

Recall that the $N$th order linear predictor for $x[n]$ is written

$$\bar{x}[n] = \sum_{k=1}^{N} a_k\, x[n-k]$$

and that the coefficients which minimize the power of the prediction error, $e[n] = x[n] - \bar{x}[n]$, are those which satisfy the normal equations,

$$\begin{pmatrix} R_{XX}[0] & R_{XX}[1] & \cdots & R_{XX}[N-1] \\ R_{XX}[1] & R_{XX}[0] & \cdots & R_{XX}[N-2] \\ \vdots & \vdots & \ddots & \vdots \\ R_{XX}[N-1] & R_{XX}[N-2] & \cdots & R_{XX}[0] \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{pmatrix} = \begin{pmatrix} R_{XX}[1] \\ R_{XX}[2] \\ \vdots \\ R_{XX}[N] \end{pmatrix}$$

Recall also that the LP analysis filter, whose output is the prediction error, $e[n]$, has coefficients

$$h[n] = \begin{cases} 1 & n = 0 \\ -a_n & 0 < n \le N \end{cases}$$

This is an $N$th order FIR filter. As suggested above, the lattice filter will turn out to have the property that each lattice stage outputs prediction error sequences corresponding to linear predictors of successively higher orders. For this reason, we will modify our notation to keep track of the order, $N$, of the analysis filter, $h_N[n]$, and of the prediction error sequence, $e_N[n]$.

We will also find it convenient to reorganize the normal equations as follows. First, move the vector on the right hand side into the matrix, to get

$$\begin{pmatrix} R_{XX}[1] & R_{XX}[0] & R_{XX}[1] & \cdots & R_{XX}[N-1] \\ R_{XX}[2] & R_{XX}[1] & R_{XX}[0] & \cdots & R_{XX}[N-2] \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ R_{XX}[N] & R_{XX}[N-1] & R_{XX}[N-2] & \cdots & R_{XX}[0] \end{pmatrix} \begin{pmatrix} h_N[0] \\ h_N[1] \\ \vdots \\ h_N[N] \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$

where $h_N[0] = 1$ and $h_N[n] = -a_n$, and then augment the number of equations to get

$$\underbrace{\begin{pmatrix} R_{XX}[0] & R_{XX}[1] & R_{XX}[2] & \cdots & R_{XX}[N] \\ R_{XX}[1] & R_{XX}[0] & R_{XX}[1] & \cdots & R_{XX}[N-1] \\ R_{XX}[2] & R_{XX}[1] & R_{XX}[0] & \cdots & R_{XX}[N-2] \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ R_{XX}[N] & R_{XX}[N-1] & R_{XX}[N-2] & \cdots & R_{XX}[0] \end{pmatrix}}_{\mathbf{R}_{0:N}} \begin{pmatrix} h_N[0] \\ h_N[1] \\ h_N[2] \\ \vdots \\ h_N[N] \end{pmatrix} = \begin{pmatrix} P_N \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \qquad (10)$$

This new set of $N+1$ equations involves the $(N+1) \times (N+1)$ Toeplitz auto-correlation matrix, $\mathbf{R}_{0:N}$. In order to add the extra row at the top, we have had to introduce an extra quantity, $P_N$, on the right hand side. Although there are really only $N$ unknowns, $h_N[1]$ through $h_N[N]$, any filter which satisfies these equations for some $P_N$, having $h_N[0] = 1$, is the $N$th order LP analysis filter.

Although the quantity $P_N$ has been introduced principally for the structure of the augmented set of normal equations given above, it turns out that this quantity is exactly the power of the $N$th order prediction error, which is why we have used the notation $P$ (for "power"). To see that $P_N$ does indeed have this interpretation, observe that

$$\mathrm{E}\big[E_N^2[n]\big] = \mathrm{E}\left[E_N[n]\left(X[n] + \sum_{k=1}^{N} h_N[k]\, X[n-k]\right)\right] = \mathrm{E}\big[E_N[n]\, X[n]\big]$$

where we have used the orthogonality principle, according to which $E_N[n]$ is orthogonal to (uncorrelated with) $X[n-1]$ through $X[n-N]$. Expanding this expression, we get

$$\mathrm{E}\big[E_N^2[n]\big] = \mathrm{E}\left[X[n]\left(X[n] + \sum_{k=1}^{N} h_N[k]\, X[n-k]\right)\right] = R_{XX}[0] + \sum_{k=1}^{N} h_N[k]\, R_{XX}[k] = P_N$$

4.2 Backward Prediction

The next step in our development of the lattice predictor is to consider backward prediction.

The $N$th order backward predictor forms an estimate, $\bar{x}_N[n-N]$, for $x[n-N]$, based on the $N$ "future" values, $x[n]$ through $x[n-N+1]$. Now the statistics of a stationary random process do not change if we reverse the time axis; one way to see this is that the auto-correlation function, $R_{XX}[m]$, is symmetric in $m$. It follows that the optimal backward predictor has the same coefficients as the forward predictor, and may be expressed as

$$\bar{x}_N[n-N] = \sum_{k=1}^{N} a_k\, x[n-N+k]$$

Write $b_N[n] = x[n-N] - \bar{x}_N[n-N]$ for the backward prediction error, which may then be expressed as

$$b_N[n] = \sum_{k=0}^{N} h_N[k]\, x[n-N+k] \overset{j = N-k}{=} \sum_{j=0}^{N} h_N[N-j]\, x[n-j]$$

Equivalently, $b_N[n]$ may be viewed as the output of an $N$th order causal FIR filter, $g_N[n]$, having coefficients

$$g_N[n] = h_N[N-n]$$

We conclude that the backward LP analysis filter, $g_N[n]$, is the mirror image of $h_N[n]$. Note that the output of the backward LP analysis filter, $b_N[n]$, is the backward prediction error at time $n-N$.

We can write down a set of augmented normal equations for backward prediction, by substituting $g_N[n] = h_N[N-n]$ into equation (10) and reversing the order of all rows and columns in the various matrices and vectors. (This just reverses the order in which we are writing down the equations, but nothing else.) The Toeplitz property of the auto-correlation matrix ensures that reversing the order of all rows and of all columns leaves it unchanged, so that we get

$$\mathbf{R}_{0:N} \begin{pmatrix} g_N[0] \\ g_N[1] \\ g_N[2] \\ \vdots \\ g_N[N] \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ P_N \end{pmatrix} \qquad (11)$$

4.3 The Levinson-Durbin Recursion

We now show that the forward and backward prediction error filters at order $N$ may be used to derive the prediction error filter at order $N+1$ in a very simple way.

To do this, we first augment the forward and backward augmented normal equations one more time, to increase the order to $N+1$. We do this by adding a row and a column to the end of the auto-correlation matrix in equation (10), and introducing a new unknown, $\Delta_N$, to accommodate the new equation represented by the last row:

$$\mathbf{R}_{0:N+1} \begin{pmatrix} h_N[0] \\ h_N[1] \\ \vdots \\ h_N[N] \\ 0 \end{pmatrix} = \begin{pmatrix} P_N \\ 0 \\ \vdots \\ 0 \\ \Delta_N \end{pmatrix} \qquad (12)$$

For the backward normal equations, we add a new row and new column to the beginning of the auto-correlation matrix in equation (11), which yields

$$\mathbf{R}_{0:N+1} \begin{pmatrix} 0 \\ g_N[0] \\ \vdots \\ g_N[N-1] \\ g_N[N] \end{pmatrix} = \begin{pmatrix} \Delta'_N \\ 0 \\ \vdots \\ 0 \\ P_N \end{pmatrix} \qquad (13)$$

It is easy to see that the unknowns $\Delta_N$ and $\Delta'_N$ must be identical, since we have

$$\Delta'_N = \sum_{n=0}^{N} g_N[n]\, R_{XX}[n+1] \overset{k = N-n}{=} \sum_{k=0}^{N} h_N[k]\, R_{XX}[N+1-k] = \Delta_N \qquad (14)$$

Finally, we form a new set of equations, by adding equation (12) to equation (13) multiplied by $k_N$, which yields

$$\mathbf{R}_{0:N+1} \begin{pmatrix} h_N[0] \\ h_N[1] + k_N\, g_N[0] \\ \vdots \\ h_N[N] + k_N\, g_N[N-1] \\ k_N\, g_N[N] \end{pmatrix} = \begin{pmatrix} P_N + k_N \Delta_N \\ 0 \\ \vdots \\ 0 \\ \Delta_N + k_N P_N \end{pmatrix}$$

Let $h_{N+1}[n]$ be the $(N+1)$st order filter whose coefficients are given by

$$h_{N+1}[n] = h_N[n] + k_N\, g_N[n-1] \qquad (15)$$

and set $k_N = -\Delta_N / P_N$, so that $\Delta_N + k_N P_N = 0$, and write

$$P_{N+1} = P_N + k_N \Delta_N = P_N\big(1 - k_N^2\big) \qquad (16)$$

Then $h_{N+1}[0] = 1$, and the new filter satisfies

$$\mathbf{R}_{0:N+1} \begin{pmatrix} h_{N+1}[0] \\ h_{N+1}[1] \\ \vdots \\ h_{N+1}[N+1] \end{pmatrix} = \begin{pmatrix} P_{N+1} \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$

which are the augmented normal equations for the $(N+1)$st order LP analysis filter. Equation (15) gives us a recursive procedure for finding the linear prediction coefficients of any order. The recursive procedure may be described by the following algorithm, which is known as the Levinson-Durbin algorithm; a code sketch follows the summary.

- Initialize $h_0[0] = 1$, $g_0[0] = 1$ and $P_0 = R_{XX}[0]$.
- For $N = 0, 1, 2, \ldots$
  - Evaluate $\Delta_N$ using equation (14).
  - Evaluate $k_N = -\Delta_N / P_N$.
  - Find the next order analysis filter coefficients from $h_{N+1}[n] = h_N[n] + k_N\, g_N[n-1] = h_N[n] + k_N\, h_N[N+1-n]$, using $h_N[n] = 0$ for $n > N$.
  - Find the power of the $(N+1)$st order prediction error from $P_{N+1} = \big(1 - k_N^2\big)\, P_N$.

An important property of the Levinson-Durbin recursion is that the order $N+1$ coefficients may be derived from the order $N$ coefficients using on the order of only $2N$ multiplications. This means that the complete cost of the recursive procedure for deriving the $N$th order coefficients is roughly $N^2$ multiplications. By contrast, the cost of directly inverting the $N$th order auto-correlation matrix is on the order of $N^3$ multiplications.
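
The algorithm above translates almost line for line into code. The following sketch is not from the original notes; it follows equations (14) through (16) directly, with illustrative names.

```python
import numpy as np

def levinson_durbin(Rxx, order):
    """Levinson-Durbin recursion: from Rxx[0..order], return the LP
    analysis filter h[0..order] (h[0] = 1), the reflection coefficients
    k[0..order-1], and the prediction error powers P[0..order]."""
    h = np.zeros(order + 1)
    h[0] = 1.0
    P = [Rxx[0]]
    k = []
    for N in range(order):
        # Delta_N = sum_j h_N[j] Rxx[N+1-j]          -- equation (14)
        delta = sum(h[j] * Rxx[N + 1 - j] for j in range(N + 1))
        kN = -delta / P[-1]
        # h_{N+1}[n] = h_N[n] + kN * h_N[N+1-n]      -- equation (15)
        h_new = h.copy()
        for n in range(1, N + 2):
            h_new[n] = h[n] + kN * h[N + 1 - n]
        h = h_new
        k.append(kN)
        P.append(P[-1] * (1.0 - kN ** 2))            # equation (16)
    return h, np.array(k), np.array(P)
```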

4.4 Lattice Structure for Linear Prediction

You should recall from our earlier discussion of FIR lattice filters that the two outputs from the $N$th stage of an arbitrary lattice filter also correspond to mirror image filters, whose impulse responses we denoted $h_N[n]$ and $g_N[n]$. Moreover, the lattice was characterized by exactly the same update relationship as that found above in the Levinson-Durbin recursion. Specifically,

$$H_{N+1}(z) = H_N(z) + k_N\, z^{-1} G_N(z)$$

where we used the term "reflection coefficients" to refer to the $k_N$ values. The lattice filter structure is repeated here in Figure 5 for your convenience; note that the outputs from the $N$th lattice stage are the forward and backward prediction errors, $e_N[n]$ and $b_N[n]$, respectively.

[Figure 5: Lattice linear predictor, showing the forward and backward prediction errors appearing at the outputs of successive stages.]

According to equation (16), the prediction error power at the output of the $N$th stage is related to that at the output of the previous stage according to

$$P_N = P_{N-1}\big(1 - k_{N-1}^2\big)$$

from which we may conclude that the reflection coefficients must all have magnitude strictly less than 1, unless the source process is perfectly predictable (which never happens in practice, for obvious reasons). Moreover, by working the Levinson-Durbin algorithm in reverse, it is possible to select an auto-correlation sequence, $R_{XX}[m]$, whose lattice predictor will have any given set of reflection coefficients with magnitude less than 1. It follows that the first $N$ reflection coefficients, together with the variance of the source, $R_{XX}[0]$, provide an alternate description of $R_{XX}[m]$ in the range $0 \le m \le N$.

As noted in our discussion of lattice filters, the FIR lattice filter can be shown to have minimum phase if and only if the reflection coefficients all have magnitude $|k_N| < 1$, from which we may deduce that the LP analysis filters, $H_N(z)$, always have minimum phase and are hence causally invertible. The corresponding LP synthesis filters may be implemented by the all-pole lattice structure studied previously.
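
As a sketch of the lattice structure in Figure 5 (again not from the original notes), the following runs a signal through successive stages, producing every order's forward and backward prediction error; the stage recursions are $e_{m+1}[n] = e_m[n] + k_m b_m[n-1]$ and $b_{m+1}[n] = b_m[n-1] + k_m e_m[n]$, with $e_0[n] = b_0[n] = x[n]$.

```python
import numpy as np

def lattice_errors(x, k):
    """FIR lattice predictor: given reflection coefficients k[0..M-1],
    return arrays e, b with rows e[m] = e_m[n] and b[m] = b_m[n]."""
    M = len(k)
    e = np.tile(np.asarray(x, dtype=float), (M + 1, 1))
    b = e.copy()
    for m in range(M):
        b_delayed = np.concatenate(([0.0], b[m][:-1]))   # b_m[n-1]
        e[m + 1] = e[m] + k[m] * b_delayed               # forward error
        b[m + 1] = b_delayed + k[m] * e[m]               # backward error
    return e, b
```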

5 Wiener Filtering by Linear Prediction

In this section, we show that an $N$th order Wiener filter may be implemented by applying a simple set of weights to the backward prediction error sequences produced by the source sequence's lattice predictor. In this way, the problem of predicting outcomes of one random process, $Y[n]$, based on inputs from another random process, $X[n]$, is largely solved once we are able to form a linear prediction of $X[n]$ from its own past samples.

The first key observation is that the backward prediction error samples produced along the lower branch of the lattice predictor are uncorrelated. To see this, recall that $b_N[n]$ is the backward prediction error at time $n-N$. From the orthogonality principle, then, $B_N[n]$ must be orthogonal to (uncorrelated with) $X[n-N+1]$ through $X[n]$. But for each $k \ge 1$,

$$B_{N-k}[n] = \sum_{p=0}^{N-k} g_{N-k}[p]\, X[n-p]$$

is a linear combination of the variables with which $B_N[n]$ is orthogonal. It follows that $B_{N-k}[n]$ must be orthogonal to (uncorrelated with) $B_N[n]$ for all $k \ge 1$, so that the complete set of backward prediction errors, $B_m[n]$, are mutually orthogonal, for $0 \le m \le N$.

The second key observation is that any linear function of $x[n]$ through $x[n-N]$ may be expressed as a linear function of the backward prediction errors, $b_0[n]$ through $b_N[n]$. In particular, if

$$\bar{y}[n] = \theta_0\, x[n] + \theta_1\, x[n-1] + \cdots + \theta_N\, x[n-N]$$

then $\bar{y}[n]$ can also be written as

$$\bar{y}[n] = \lambda_0\, b_0[n] + \lambda_1\, b_1[n] + \cdots + \lambda_N\, b_N[n]$$

where the coefficients are related through

$$\theta_N = \lambda_N$$
$$\theta_{N-k} = \lambda_{N-k} + \sum_{p=1}^{k} g_{N-k+p}[N-k]\, \lambda_{N-k+p}, \qquad k = 1, \ldots, N$$

which may be solved for the $\lambda$'s by back-substitution, starting from $\lambda_N = \theta_N$. (These index relations follow from $b_m[n] = \sum_p g_m[p]\, x[n-p]$ together with $g_m[m] = h_m[0] = 1$.)

We conclude that an $N$th order Wiener filter may be implemented in the manner depicted in Figure 6. The value of this structure is that the coefficients, $\lambda_k$, are much simpler to find and to adaptively estimate than the original Wiener coefficients, $\theta_k$, precisely because the backward prediction errors are all mutually orthogonal.

[Figure 6: Wiener predictor for $y[n]$, implemented as a set of independently adjustable weights, $\lambda_k$, on the backward prediction errors from the lattice predictor for $x[n]$.]

The Wiener minimization objective becomes

$$J = \mathrm{E}\left[\big(Y[n] - \bar{Y}[n]\big)^2\right] = \mathrm{E}\left[\left(Y[n] - \sum_{k=0}^{N} \lambda_k B_k[n]\right)^2\right]$$

whose solution satisfies (by differentiation, or the orthogonality principle)

$$0 = \mathrm{E}\left[B_p[n]\left(Y[n] - \sum_{k=0}^{N} \lambda_k B_k[n]\right)\right] = \mathrm{E}\big[B_p[n]\, Y[n]\big] - \sum_{k=0}^{N} \lambda_k\, \mathrm{E}\big[B_p[n]\, B_k[n]\big] = \mathrm{E}\big[B_p[n]\, Y[n]\big] - \lambda_p\, \mathrm{E}\big[(B_p[n])^2\big]$$

from which we get

$$\lambda_k = \frac{\mathrm{E}\big[B_k[n]\, Y[n]\big]}{\mathrm{E}\big[(B_k[n])^2\big]} = \frac{\mathrm{E}\big[B_k[n]\, Y[n]\big]}{P_k}$$

We already have the prediction error powers, $P_k$, from the Levinson-Durbin recursion, so the task of estimating $\lambda_k$ is equivalent to that of estimating the correlation between the target output, $Y[n]$, and the $k$th backward prediction error, $B_k[n]$. In an adaptive setting, this can be done using a simple moving window correlator, without resorting to potentially unstable feedback estimators like the LMS algorithm.
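
Pulling the pieces together, here is a hypothetical end-to-end sketch built from the earlier levinson_durbin() and lattice_errors() sketches: estimate $R_{XX}[m]$, derive the reflection coefficients, form the backward errors, and weight them by $\lambda_k = \mathrm{E}[B_k[n] Y[n]] / P_k$.

```python
import numpy as np

def lattice_wiener(x, y, order):
    """Order-N Wiener prediction of y[n] from x[n..n-order], implemented
    as weights on the lattice's backward prediction errors (Figure 6)."""
    L = len(x)
    Rxx = np.array([np.dot(x[m:], x[:L - m]) / L for m in range(order + 1)])
    h, k, P = levinson_durbin(Rxx, order)    # earlier sketch
    e, b = lattice_errors(x, k)              # earlier sketch
    # lambda_k = E[B_k[n] Y[n]] / P_k, estimated here by time averages
    lam = np.array([np.dot(b[m], y) / L / P[m] for m in range(order + 1)])
    return lam @ b                           # y_bar[n]
```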

We see, then, that Wiener prediction of $Y[n]$ from $X[n]$ through $X[n-N]$ (adaptive or otherwise) is essentially no harder and no easier than linear prediction of $X[n]$ from $X[n-1]$ through $X[n-N]$. For this reason, there has been considerable focus on adaptive algorithms for robustly and quickly estimating the reflection coefficients of a lattice predictor, based on observations of the source sequence. We do not have time to review these algorithms here, but note that they are able to achieve faster convergence, with higher accuracy, than the LMS algorithm introduced previously. The interested reader is directed to the Gradient-Adaptive Lattice (GAL) algorithm as an excellent starting point.


More information

ELEN E4810: Digital Signal Processing Topic 2: Time domain

ELEN E4810: Digital Signal Processing Topic 2: Time domain ELEN E4810: Digital Signal Processing Topic 2: Time domain 1. Discrete-time systems 2. Convolution 3. Linear Constant-Coefficient Difference Equations (LCCDEs) 4. Correlation 1 1. Discrete-time systems

More information

Stability Condition in Terms of the Pole Locations

Stability Condition in Terms of the Pole Locations Stability Condition in Terms of the Pole Locations A causal LTI digital filter is BIBO stable if and only if its impulse response h[n] is absolutely summable, i.e., 1 = S h [ n] < n= We now develop a stability

More information

Module 3. Convolution. Aim

Module 3. Convolution. Aim Module Convolution Digital Signal Processing. Slide 4. Aim How to perform convolution in real-time systems efficiently? Is convolution in time domain equivalent to multiplication of the transformed sequence?

More information

Optimal and Adaptive Filtering

Optimal and Adaptive Filtering Optimal and Adaptive Filtering Murat Üney M.Uney@ed.ac.uk Institute for Digital Communications (IDCOM) 26/06/2017 Murat Üney (IDCOM) Optimal and Adaptive Filtering 26/06/2017 1 / 69 Table of Contents 1

More information

UNIVERSITY OF OSLO. Faculty of mathematics and natural sciences. Forslag til fasit, versjon-01: Problem 1 Signals and systems.

UNIVERSITY OF OSLO. Faculty of mathematics and natural sciences. Forslag til fasit, versjon-01: Problem 1 Signals and systems. UNIVERSITY OF OSLO Faculty of mathematics and natural sciences Examination in INF3470/4470 Digital signal processing Day of examination: December 1th, 016 Examination hours: 14:30 18.30 This problem set

More information

Cosc 3451 Signals and Systems. What is a system? Systems Terminology and Properties of Systems

Cosc 3451 Signals and Systems. What is a system? Systems Terminology and Properties of Systems Cosc 3451 Signals and Systems Systems Terminology and Properties of Systems What is a system? an entity that manipulates one or more signals to yield new signals (often to accomplish a function) can be

More information

Economics 472. Lecture 10. where we will refer to y t as a m-vector of endogenous variables, x t as a q-vector of exogenous variables,

Economics 472. Lecture 10. where we will refer to y t as a m-vector of endogenous variables, x t as a q-vector of exogenous variables, University of Illinois Fall 998 Department of Economics Roger Koenker Economics 472 Lecture Introduction to Dynamic Simultaneous Equation Models In this lecture we will introduce some simple dynamic simultaneous

More information

SIMON FRASER UNIVERSITY School of Engineering Science

SIMON FRASER UNIVERSITY School of Engineering Science SIMON FRASER UNIVERSITY School of Engineering Science Course Outline ENSC 810-3 Digital Signal Processing Calendar Description This course covers advanced digital signal processing techniques. The main

More information

6.02 Fall 2012 Lecture #10

6.02 Fall 2012 Lecture #10 6.02 Fall 2012 Lecture #10 Linear time-invariant (LTI) models Convolution 6.02 Fall 2012 Lecture 10, Slide #1 Modeling Channel Behavior codeword bits in generate x[n] 1001110101 digitized modulate DAC

More information

Analog vs. discrete signals

Analog vs. discrete signals Analog vs. discrete signals Continuous-time signals are also known as analog signals because their amplitude is analogous (i.e., proportional) to the physical quantity they represent. Discrete-time signals

More information

Time Series Analysis: 4. Digital Linear Filters. P. F. Góra

Time Series Analysis: 4. Digital Linear Filters. P. F. Góra Time Series Analysis: 4. Digital Linear Filters P. F. Góra http://th-www.if.uj.edu.pl/zfs/gora/ 2018 Linear filters Filtering in Fourier domain is very easy: multiply the DFT of the input by a transfer

More information

Let H(z) = P(z)/Q(z) be the system function of a rational form. Let us represent both P(z) and Q(z) as polynomials of z (not z -1 )

Let H(z) = P(z)/Q(z) be the system function of a rational form. Let us represent both P(z) and Q(z) as polynomials of z (not z -1 ) Review: Poles and Zeros of Fractional Form Let H() = P()/Q() be the system function of a rational form. Let us represent both P() and Q() as polynomials of (not - ) Then Poles: the roots of Q()=0 Zeros:

More information

EEG- Signal Processing

EEG- Signal Processing Fatemeh Hadaeghi EEG- Signal Processing Lecture Notes for BSP, Chapter 5 Master Program Data Engineering 1 5 Introduction The complex patterns of neural activity, both in presence and absence of external

More information

Computer Engineering 4TL4: Digital Signal Processing

Computer Engineering 4TL4: Digital Signal Processing Computer Engineering 4TL4: Digital Signal Processing Day Class Instructor: Dr. I. C. BRUCE Duration of Examination: 3 Hours McMaster University Final Examination December, 2003 This examination paper includes

More information

VU Signal and Image Processing

VU Signal and Image Processing 052600 VU Signal and Image Processing Torsten Möller + Hrvoje Bogunović + Raphael Sahann torsten.moeller@univie.ac.at hrvoje.bogunovic@meduniwien.ac.at raphael.sahann@univie.ac.at vda.cs.univie.ac.at/teaching/sip/18s/

More information

University of California Department of Mechanical Engineering ECE230A/ME243A Linear Systems Fall 1999 (B. Bamieh ) Lecture 3: Simulation/Realization 1

University of California Department of Mechanical Engineering ECE230A/ME243A Linear Systems Fall 1999 (B. Bamieh ) Lecture 3: Simulation/Realization 1 University of alifornia Department of Mechanical Engineering EE/ME Linear Systems Fall 999 ( amieh ) Lecture : Simulation/Realization Given an nthorder statespace description of the form _x(t) f (x(t)

More information

MMSE System Identification, Gradient Descent, and the Least Mean Squares Algorithm

MMSE System Identification, Gradient Descent, and the Least Mean Squares Algorithm MMSE System Identification, Gradient Descent, and the Least Mean Squares Algorithm D.R. Brown III WPI WPI D.R. Brown III 1 / 19 Problem Statement and Assumptions known input x[n] unknown system (assumed

More information

Optimal Polynomial Control for Discrete-Time Systems

Optimal Polynomial Control for Discrete-Time Systems 1 Optimal Polynomial Control for Discrete-Time Systems Prof Guy Beale Electrical and Computer Engineering Department George Mason University Fairfax, Virginia Correspondence concerning this paper should

More information

2. SPECTRAL ANALYSIS APPLIED TO STOCHASTIC PROCESSES

2. SPECTRAL ANALYSIS APPLIED TO STOCHASTIC PROCESSES 2. SPECTRAL ANALYSIS APPLIED TO STOCHASTIC PROCESSES 2.0 THEOREM OF WIENER- KHINTCHINE An important technique in the study of deterministic signals consists in using harmonic functions to gain the spectral

More information

Module 1: Signals & System

Module 1: Signals & System Module 1: Signals & System Lecture 6: Basic Signals in Detail Basic Signals in detail We now introduce formally some of the basic signals namely 1) The Unit Impulse function. 2) The Unit Step function

More information

Probability Space. J. McNames Portland State University ECE 538/638 Stochastic Signals Ver

Probability Space. J. McNames Portland State University ECE 538/638 Stochastic Signals Ver Stochastic Signals Overview Definitions Second order statistics Stationarity and ergodicity Random signal variability Power spectral density Linear systems with stationary inputs Random signal memory Correlation

More information

How to manipulate Frequencies in Discrete-time Domain? Two Main Approaches

How to manipulate Frequencies in Discrete-time Domain? Two Main Approaches How to manipulate Frequencies in Discrete-time Domain? Two Main Approaches Difference Equations (an LTI system) x[n]: input, y[n]: output That is, building a system that maes use of the current and previous

More information

Lecture - 30 Stationary Processes

Lecture - 30 Stationary Processes Probability and Random Variables Prof. M. Chakraborty Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 30 Stationary Processes So,

More information

Time-domain representations

Time-domain representations Time-domain representations Speech Processing Tom Bäckström Aalto University Fall 2016 Basics of Signal Processing in the Time-domain Time-domain signals Before we can describe speech signals or modelling

More information

Lab 9a. Linear Predictive Coding for Speech Processing

Lab 9a. Linear Predictive Coding for Speech Processing EE275Lab October 27, 2007 Lab 9a. Linear Predictive Coding for Speech Processing Pitch Period Impulse Train Generator Voiced/Unvoiced Speech Switch Vocal Tract Parameters Time-Varying Digital Filter H(z)

More information

RADIO SYSTEMS ETIN15. Lecture no: Equalization. Ove Edfors, Department of Electrical and Information Technology

RADIO SYSTEMS ETIN15. Lecture no: Equalization. Ove Edfors, Department of Electrical and Information Technology RADIO SYSTEMS ETIN15 Lecture no: 8 Equalization Ove Edfors, Department of Electrical and Information Technology Ove.Edfors@eit.lth.se Contents Inter-symbol interference Linear equalizers Decision-feedback

More information

UCSD ECE 153 Handout #46 Prof. Young-Han Kim Thursday, June 5, Solutions to Homework Set #8 (Prepared by TA Fatemeh Arbabjolfaei)

UCSD ECE 153 Handout #46 Prof. Young-Han Kim Thursday, June 5, Solutions to Homework Set #8 (Prepared by TA Fatemeh Arbabjolfaei) UCSD ECE 53 Handout #46 Prof. Young-Han Kim Thursday, June 5, 04 Solutions to Homework Set #8 (Prepared by TA Fatemeh Arbabjolfaei). Discrete-time Wiener process. Let Z n, n 0 be a discrete time white

More information

HST.582J / 6.555J / J Biomedical Signal and Image Processing Spring 2007

HST.582J / 6.555J / J Biomedical Signal and Image Processing Spring 2007 MIT OpenCourseWare http://ocw.mit.edu HST.58J / 6.555J / 16.456J Biomedical Signal and Image Processing Spring 007 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

More information

Chapter 9. Linear Predictive Analysis of Speech Signals 语音信号的线性预测分析

Chapter 9. Linear Predictive Analysis of Speech Signals 语音信号的线性预测分析 Chapter 9 Linear Predictive Analysis of Speech Signals 语音信号的线性预测分析 1 LPC Methods LPC methods are the most widely used in speech coding, speech synthesis, speech recognition, speaker recognition and verification

More information

4 An Introduction to Channel Coding and Decoding over BSC

4 An Introduction to Channel Coding and Decoding over BSC 4 An Introduction to Channel Coding and Decoding over BSC 4.1. Recall that channel coding introduces, in a controlled manner, some redundancy in the (binary information sequence that can be used at the

More information

Transform Representation of Signals

Transform Representation of Signals C H A P T E R 3 Transform Representation of Signals and LTI Systems As you have seen in your prior studies of signals and systems, and as emphasized in the review in Chapter 2, transforms play a central

More information

CSE468 Information Conflict

CSE468 Information Conflict CSE468 Information Conflict Lecturer: Dr Carlo Kopp, MIEEE, MAIAA, PEng Lecture 02 Introduction to Information Theory Concepts Reference Sources and Bibliography There is an abundance of websites and publications

More information

Adaptive Filter Theory

Adaptive Filter Theory 0 Adaptive Filter heory Sung Ho Cho Hanyang University Seoul, Korea (Office) +8--0-0390 (Mobile) +8-10-541-5178 dragon@hanyang.ac.kr able of Contents 1 Wiener Filters Gradient Search by Steepest Descent

More information

Time Series Analysis: 4. Linear filters. P. F. Góra

Time Series Analysis: 4. Linear filters. P. F. Góra Time Series Analysis: 4. Linear filters P. F. Góra http://th-www.if.uj.edu.pl/zfs/gora/ 2012 Linear filters in the Fourier domain Filtering: Multiplying the transform by a transfer function. g n DFT G

More information

Wiener Filter for Deterministic Blur Model

Wiener Filter for Deterministic Blur Model Wiener Filter for Deterministic Blur Model Based on Ch. 5 of Gonzalez & Woods, Digital Image Processing, nd Ed., Addison-Wesley, 00 One common application of the Wiener filter has been in the area of simultaneous

More information

Lecture Notes 4 Vector Detection and Estimation. Vector Detection Reconstruction Problem Detection for Vector AGN Channel

Lecture Notes 4 Vector Detection and Estimation. Vector Detection Reconstruction Problem Detection for Vector AGN Channel Lecture Notes 4 Vector Detection and Estimation Vector Detection Reconstruction Problem Detection for Vector AGN Channel Vector Linear Estimation Linear Innovation Sequence Kalman Filter EE 278B: Random

More information

EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6)

EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6) EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement to the material discussed in

More information

Adaptive Filters. un [ ] yn [ ] w. yn n wun k. - Adaptive filter (FIR): yn n n w nun k. (1) Identification. Unknown System + (2) Inverse modeling

Adaptive Filters. un [ ] yn [ ] w. yn n wun k. - Adaptive filter (FIR): yn n n w nun k. (1) Identification. Unknown System + (2) Inverse modeling Adaptive Filters - Statistical digital signal processing: in many problems of interest, the signals exhibit some inherent variability plus additive noise we use probabilistic laws to model the statistical

More information

Notes 22 largely plagiarized by %khc

Notes 22 largely plagiarized by %khc Notes 22 largely plagiarized by %khc LTv. ZT Using the conformal map z e st, we can transfer our knowledge of the ROC of the bilateral laplace transform to the ROC of the bilateral z transform. Laplace

More information

Multi-Robotic Systems

Multi-Robotic Systems CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed

More information

Responses of Digital Filters Chapter Intended Learning Outcomes:

Responses of Digital Filters Chapter Intended Learning Outcomes: Responses of Digital Filters Chapter Intended Learning Outcomes: (i) Understanding the relationships between impulse response, frequency response, difference equation and transfer function in characterizing

More information