
Departement Elektrotechniek ESAT-SISTA/TR 1999-

SVD-based Optimal Filtering with Applications to Noise Reduction in Speech Signals

Simon Doclo, Marc Moonen

April 1999

Internal report

This report is available by anonymous ftp from ftp.esat.kuleuven.ac.be in the directory pub/sista/doclo/reports/. ESAT (SISTA) - Katholieke Universiteit Leuven, Kardinaal Mercierlaan 94, Leuven (Heverlee), Belgium. E-mail: simon.doclo@esat.kuleuven.ac.be. Simon Doclo is a Research Assistant supported by the IWT (Flemish Institute for Scientific and Technological Research in Industry). Marc Moonen is a Research Associate with the FWO - Vlaanderen (Fund for Scientific Research - Flanders). This research work was carried out at the ESAT laboratory of the Katholieke Universiteit Leuven, in the framework of the FWO Research Project "Design and implementation of adaptive digital signal processing algorithms for broadband applications", the Interuniversity Attraction Pole IUAP "Modeling, Identification, Simulation and Control of Complex Systems", initiated by the Belgian State, Prime Minister's Office - Federal Office for Scientific, Technical and Cultural Affairs, and the IWT project "Multimicrophone Signal Enhancement Techniques for handsfree telephony and voice controlled systems" (MUSETTE) of the IWT, and was partially sponsored by Philips-ITCL. The scientific responsibility is assumed by its authors.

SVD-based optimal filtering with applications to noise reduction in speech signals

Simon Doclo
ESAT - SISTA, Katholieke Universiteit Leuven
Kardinaal Mercierlaan 94, Leuven, Belgium
simon.doclo@esat.kuleuven.ac.be

Marc Moonen
ESAT - SISTA, Katholieke Universiteit Leuven
Kardinaal Mercierlaan 94, Leuven, Belgium
marc.moonen@esat.kuleuven.ac.be

April 1999

Abstract

In this report, a compact review is given of a class of SVD-based signal enhancement procedures, which amount to a specific optimal filtering technique for the case where the so-called 'desired response' signal cannot be observed. A number of simple properties (e.g. symmetry properties) of the obtained estimators are derived, which to our knowledge have not been published before and which are valid for the white noise case as well as for the coloured noise case. A standard procedure based on averaging is also investigated, leading to serious doubts about the necessity of the averaging step. When applying this technique to multi-microphone noise reduction, the optimal filter exhibits a kind of beamforming behaviour for highly correlated noise sources. When comparing this technique to standard beamforming algorithms, its performance is equally good for highly correlated noise sources. For less correlated noise sources, a situation where standard beamforming typically fails, it is shown that its performance is better than that of standard beamforming techniques. Finally it is shown by simulations that this technique is more robust to environmental changes, such as source movement, microphone displacement and microphone amplification, than standard beamforming techniques.

Contents

1 Introduction
2 SVD-based optimal filtering
  2.1 Preliminaries
  2.2 SVD-based filtering
  2.3 Error covariance matrix
  2.4 White noise case
  2.5 Time series filtering
  2.6 Time series filtering and averaging
  2.7 Multichannel time series filtering
  2.8 Conclusion
3 Beamforming behaviour of multichannel filtering
  3.1 Preliminaries
  3.2 Spatio-temporal white noise
    3.2.1 Broadband source
    3.2.2 Smallband source
  3.3 Localized noise source
  3.4 Real-world situation
4 Comparison to standard beamforming algorithms
  4.1 Standard beamforming algorithms
  4.2 General configuration
  4.3 Comparison
  4.4 Dependence on noise frame
5 Robustness issues
  5.1 Source movement
  5.2 Microphone displacement
  5.3 Microphone amplification
  5.4 Conclusion
6 Conclusion
Acknowledgments
A Derivatives with respect to vectors and matrices
  A.1 Derivatives with respect to vectors
  A.2 Derivatives with respect to matrices
B Eigenvectors of symmetric Toeplitz and block-Toeplitz matrices
  B.1 Structured matrices
  B.2 Symmetry properties of eigenvectors

1 Introduction

In many speech communication applications, like audio-conferencing and hands-free mobile telephony, the recorded and transmitted speech signals contain a considerable amount of acoustic noise. This is mainly due to the fact that the speaker is located at a certain distance from the recording microphones, so that the microphones pick up the noise sources as well. Background noise can stem from stationary noise sources like a fan, but most of the time the background noise is non-stationary and broadband, with a spectral density depending upon the environment. The background noise causes a signal degradation which can lead to total unintelligibility of the speech and which decreases the performance of speech coding and speech recognition systems. Therefore efficient noise reduction algorithms are called for.

During the last years some techniques for noise reduction in speech have been proposed which are based on the singular value decomposition (SVD) [][][]. Most of these techniques only deal with the one-microphone case and therefore have to rely on signal-specific characteristics. Speech signals can be assumed to consist of several formants. The interpretation given to most one-microphone SVD-based noise reduction techniques is that they try to extract the most important formants from the noisy speech signal [], thereby reducing the amount of noise. When using a microphone array, the spatial configuration of the speech/noise sources and the microphone array constitutes an important aspect which should not be neglected. Therefore multi-microphone algorithms should not only exploit signal characteristics, but should also exploit the characteristics of the channel between the speech/noise sources and the microphone array. Although the SVD-based multi-microphone extensions which have been proposed [] exploit the signal characteristics in a more robust way, they still do not exploit channel characteristics.

Section 2 describes a class of SVD-based signal enhancement procedures, which amount to a specific optimal filtering technique for the case where the so-called 'desired response' signal cannot be observed. It is shown that this optimal filter can be written as a function of the generalized singular vectors and singular values of a so-called speech and noise data matrix. A number of simple symmetry properties of the optimal filter are derived, which are valid for the white noise case as well as for the coloured noise case. Also the averaging step of the standard one-microphone SVD-based noise reduction techniques [][] is investigated, leading to serious doubts about the necessity of this averaging step. When applying the SVD-based optimal filtering technique to multiple channels, a number of additional symmetry properties can be derived, depending on the structure of the noise covariance matrix.

In Section 3 the SVD-based optimal filtering technique is applied to multi-microphone noise reduction in speech. For some contrived examples it is shown that this technique exhibits a kind of beamforming behaviour. When considering spatio-temporal white noise on all microphones, it is shown by simulations that the directivity pattern of the SVD-based optimal filter is focused towards the speech source. When considering a localized noise source (and no multipath propagation), it is shown that a zero is steered towards this noise source. Section 4 further compares the performance of the SVD-based optimal filtering technique with standard beamforming algorithms [] (delay-and-sum, Griffiths-Jim and

Generalized Sidelobe Canceller (GSC) [][8][9][]). Adaptive Griffiths-Jim beamformers perform particularly well when the noise on the different microphones is highly correlated. When the noise is less correlated, the performance of these beamformers drops considerably. It is shown by simulations that for highly correlated noise sources the SVD-based optimal filtering technique performs equally well as adaptive Griffiths-Jim beamformers, and that for less correlated noise sources it performs better. In this section the dependence of the performance of the SVD-based optimal filtering technique on the length and the starting point of the noise frame is also investigated.

Section 5 discusses the issue of robustness. It is known that standard beamforming algorithms are rather sensitive to incorrect estimation of the source direction and to uncalibrated microphone arrays. It is shown by simulations that the SVD-based optimal filtering technique is more robust to environmental changes, such as source movement, microphone displacement and microphone amplification, than standard beamforming techniques.

2 SVD-based optimal filtering

2.1 Preliminaries

Consider the following filtering problem (Figure 1): $u_k \in \mathbb{R}^N$ is the filter input vector at time $k$, $y_k$ is the filter output at time $k$,

$y_k = u_k^T w = w^T u_k,$

$d_k$ is the desired filter output ('desired response') at time $k$, $e_k$ is the error at time $k$,

$e_k = d_k - y_k,$

and $w \in \mathbb{R}^N$ is the optimal filter. All signals are assumed to be real-valued.

[Figure 1: Optimal filtering problem with desired response $d_k$.]

The MSE (mean square error) cost function for optimal filtering is

$J_{MSE}(w) = E\{e_k^2\} = E\{(d_k - y_k)^2\} = E\{(d_k - w^T u_k)^2\} = E\{d_k^2\} - 2\, w^T E\{u_k d_k\} + w^T E\{u_k u_k^T\}\, w.$

The optimal filter is found by setting the derivative with respect to $w$ equal to zero. Using the expressions from Appendix A, we obtain the Wiener-Hopf equations []

$\frac{\partial J_{MSE}(w)}{\partial w} = -2\, E\{u_k d_k\} + 2\, E\{u_k u_k^T\}\, w = 0.$

The optimal filter $w_{WF}$ is the well-known Wiener filter:

$w_{WF} = E\{u_k u_k^T\}^{-1} E\{u_k d_k\}.$

It is also possible to consider multiple right-hand side problems, i.e. to work with a desired vector signal $d_k \in \mathbb{R}^N$ instead of a scalar $d_k$ (Figure 2). The filter output vector $y_k \in \mathbb{R}^N$ is obtained as

$y_k^T = u_k^T W,$

with $W \in \mathbb{R}^{N \times N}$ the optimal filter. The $i$-th column of $W$ is then an optimal filter for the $i$-th component of $d_k$.
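As a small side illustration (not part of the original report), the following sketch shows the classical situation in which the desired response is observable: the correlation quantities are estimated from synthetic data with numpy and the Wiener-Hopf equations are solved directly. All signals and parameter values are hypothetical.

```python
# Minimal sketch of the scalar Wiener filter w_WF = E{u u^T}^{-1} E{u d}
# (synthetic, hypothetical data; not taken from the report).
import numpy as np

rng = np.random.default_rng(0)
N, L = 8, 10000                       # filter length, number of samples

d = rng.standard_normal(L)            # "desired response" (observable in this classical setting)
u = np.convolve(d, [1.0, 0.5, 0.25], mode="same") + 0.3 * rng.standard_normal(L)

# Build input vectors u_k = [u(k), u(k-1), ..., u(k-N+1)]
U = np.stack([u[k - N + 1:k + 1][::-1] for k in range(N - 1, L)])
dk = d[N - 1:L]

Ruu = U.T @ U / len(U)                # estimate of E{u_k u_k^T}
rud = U.T @ dk / len(U)               # estimate of E{u_k d_k}
w_wf = np.linalg.solve(Ruu, rud)      # solve the Wiener-Hopf equations

y = U @ w_wf                          # filter output y_k = u_k^T w
print("residual MSE:", np.mean((dk - y) ** 2))
```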

[Figure 2: Optimal filtering problem with desired response vector $d_k$.]

The corresponding formulae are:

$J_{MSE}(W) = E\{\|e_k\|^2\} = E\{\|d_k - y_k\|^2\} = E\{\|d_k - W^T u_k\|^2\} = E\{(d_k - W^T u_k)^T (d_k - W^T u_k)\} = E\{d_k^T d_k\} - 2\, E\{u_k^T W d_k\} + E\{u_k^T W W^T u_k\}.$

The optimal filter is found by setting the derivative with respect to $W$ equal to zero. Using the expressions from Appendix A, we obtain

$\frac{\partial J_{MSE}(W)}{\partial W} = -2\, E\{u_k d_k^T\} + 2\, E\{u_k u_k^T\}\, W = 0.$

The optimal Wiener filter $W_{WF}$ is:

$W_{WF} = E\{u_k u_k^T\}^{-1} E\{u_k d_k^T\}.$

If $E\{u_k u_k^T\}$ and $E\{u_k d_k^T\}$ are known, the problem is solved conceptually. In the following, we consider problems where only observations of $u_k$ are available, and the observed signal $u_k$ contains a signal-of-interest $s_k$ (e.g. a speech signal) plus additive noise $n_k$,

$u_k = s_k + n_k.$

If we consider speech applications and use a robust speech/noise detection algorithm [][], noise-only observations can be made during speech pauses,

$u_k = 0 + n_k,$

which allows us to estimate the spatial and temporal colour of the noise. Our goal is to reconstruct the signal-of-interest $s_k$ (during speech activity) from $u_k$ by means of a linear filter $W$. In the optimal filter context this means that the desired signal $d_k$ is in fact equal to the signal-of-interest $s_k$,

$d_k = s_k,$

but that the desired signal $d_k$ is now an unobservable signal. The optimal solution (Wiener filter) is still given by

$W_{WF} = E\{u_k u_k^T\}^{-1} E\{u_k s_k^T\},$

but obtaining an estimate for $E\{u_k s_k^T\}$ is not straightforward.

2.2 SVD-based filtering

If we assume that we observe $u_k = n_k$ during speech pauses, then we can use such observations to estimate the noise correlation matrix, since during speech pauses

$E\{n_k n_k^T\} = E\{u_k u_k^T\}.$

If we assume (short-term) noise stationarity, the noise statistics during speech pauses equal those during speech activity, which means that we are able to estimate $E\{n_k n_k^T\}$ during speech activity as well.

During speech activity, we observe both the signal-of-interest and the noise signal,

$u_k = s_k + n_k,$

and we can use such observations to estimate $E\{u_k u_k^T\}$. If we assume that $s_k$ and $n_k$ are statistically independent ($E\{s_k n_k^T\} = 0$), then

$E\{u_k u_k^T\} = E\{s_k s_k^T\} + E\{s_k n_k^T\} + E\{n_k s_k^T\} + E\{n_k n_k^T\} = E\{s_k s_k^T\} + E\{n_k n_k^T\}.$

Given $E\{u_k u_k^T\}$ and $E\{n_k n_k^T\}$, we can thus compute $E\{s_k s_k^T\}$. Finally, from the assumed independence of $s_k$ and $n_k$ it also follows that

$E\{u_k s_k^T\} = E\{s_k s_k^T\} + E\{n_k s_k^T\} = E\{s_k s_k^T\},$

so that the optimal filter $W_{WF}$ is given by

$W_{WF} = E\{u_k u_k^T\}^{-1} E\{s_k s_k^T\} = E\{u_k u_k^T\}^{-1} \left( E\{u_k u_k^T\} - E\{n_k n_k^T\} \right).$

PS 1: Note that if the desired response vector $d_k$ were $n_k$ instead of $s_k$, then the optimal estimator $W_{WF}^n$ for $n_k$ would be

$W_{WF}^n = E\{u_k u_k^T\}^{-1} E\{u_k n_k^T\} = E\{u_k u_k^T\}^{-1} E\{n_k n_k^T\} = I - W_{WF}.$

This means that an optimal estimate for $n_k$ is obtained by subtracting the optimal estimate for $s_k$ from $u_k$, and vice versa.

PS 2: Note that if the additive noise is zero ($E\{n_k n_k^T\} = 0$), then $W_{WF} = I$.

An interesting and useful simplification of the above expression for $W_{WF}$ is derived from the joint diagonalization (generalized eigenvalue decomposition) [] of the symmetric matrices $E\{u_k u_k^T\}$ and $E\{n_k n_k^T\}$,

$E\{u_k u_k^T\} = X\, \mathrm{diag}\{\sigma_i^2\}\, X^T, \qquad E\{n_k n_k^T\} = X\, \mathrm{diag}\{\eta_i^2\}\, X^T,$
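The construction above translates almost directly into code. The following sketch (my own illustration with synthetic data, not the report's implementation) estimates $E\{u_k u_k^T\}$ from a noisy frame and $E\{n_k n_k^T\}$ from a noise-only frame, and then forms $W_{WF} = E\{u_k u_k^T\}^{-1}(E\{u_k u_k^T\} - E\{n_k n_k^T\})$.

```python
# Sketch of the "unobservable desired response" Wiener filter, estimated from
# a noisy-speech frame and a noise-only frame (all data here is synthetic/hypothetical).
import numpy as np

def data_matrix(x, N):
    """Stack the delay vectors x_k = [x(k), x(k-1), ..., x(k-N+1)] as rows."""
    return np.stack([x[k - N + 1:k + 1][::-1] for k in range(N - 1, len(x))])

rng = np.random.default_rng(1)
N, L = 10, 20000
s = np.convolve(rng.standard_normal(L), np.ones(5) / 5, mode="same")   # coloured "speech"
n = 0.5 * rng.standard_normal(L)                                       # additive white noise

U = data_matrix(s + n, N)        # observations during "speech activity"
Nmat = data_matrix(n, N)         # observations during "speech pauses" (noise only)

Ruu = U.T @ U / len(U)           # estimate of E{u_k u_k^T}
Rnn = Nmat.T @ Nmat / len(Nmat)  # estimate of E{n_k n_k^T}
W_wf = np.linalg.solve(Ruu, Ruu - Rnn)   # W_WF = Ruu^{-1} (Ruu - Rnn)

s_hat = U @ W_wf[:, 0]           # first column of W_WF estimates s(k)
print("noisy MSE   :", np.mean((s[N - 1:] - (s + n)[N - 1:]) ** 2))
print("filtered MSE:", np.mean((s[N - 1:] - s_hat) ** 2))
```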

with $X$ an invertible, but not necessarily orthogonal, matrix. Note that $\mathrm{diag}\{\sigma_i^2\}$ represents a diagonal matrix with diagonal elements $\sigma_i^2$, $i = 1 \ldots N$, and that $\mathrm{diag}\{\eta_i^2\}$ is defined similarly. In practice, $X$, $\sigma_i$ and $\eta_i$ are computed by means of a generalized singular value decomposition of the data matrices $U_k \in \mathbb{R}^{p \times N}$ and $N_k \in \mathbb{R}^{q \times N}$ (with $p$ and $q$ typically larger than $N$),

$U_k = \begin{bmatrix} u_k^T \\ u_{k+1}^T \\ \vdots \\ u_{k+p-1}^T \end{bmatrix}, \qquad N_k = \begin{bmatrix} n_{k'}^T \\ n_{k'+1}^T \\ \vdots \\ n_{k'+q-1}^T \end{bmatrix},$

such that $E\{u_k u_k^T\} \simeq \frac{1}{p} U_k^T U_k$ and $E\{n_k n_k^T\} \simeq \frac{1}{q} N_k^T N_k$. The generalized singular value decomposition of the matrices $U_k$ and $N_k$ is defined as

$U_k = U\, \mathrm{diag}\{\sigma_i\}\, X^T, \qquad N_k = V\, \mathrm{diag}\{\eta_i\}\, X^T,$

with $U \in \mathbb{R}^{p \times N}$ and $V \in \mathbb{R}^{q \times N}$ orthogonal matrices, $X \in \mathbb{R}^{N \times N}$ an invertible matrix and $\sigma_i / \eta_i$ the generalized singular values. By substituting the above formulas into the expression for $W_{WF}$, one obtains

$W_{WF} = X^{-T}\, \mathrm{diag}\left\{\frac{\sigma_i^2 - \eta_i^2}{\sigma_i^2}\right\}\, X^T.$

In fact, the filter $W_{WF}$ belongs to a more general class of estimators, which can be described by

$W = X^{-T}\, \mathrm{diag}\{f(\sigma_i, \eta_i)\}\, X^T.$

This formula can be interpreted as follows: $X^{-T}$ is an analysis filterbank which performs a transformation from the time domain to a transform domain; $f(\sigma_i, \eta_i)$ is a function which modifies the transform domain parameters; $X^T$ is a synthesis filterbank which performs a transformation from the transform domain back to the time domain. By using the function $f(\sigma_i, \eta_i) = (\sigma_i^2 - \eta_i^2)/\sigma_i^2$ and using the generalized eigenvectors $X$, one obtains the optimal filter defined above.

2.3 Error covariance matrix

The estimation error $e_k$ is defined as

$e_k = s_k - y_k = s_k - W_{WF}^T u_k.$
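In software, the same joint diagonalization can also be obtained from the two covariance matrices with a symmetric-definite generalized eigendecomposition rather than an explicit GSVD of the data matrices; the sketch below (an assumption on my part, using scipy.linalg.eigh and synthetic matrices) implements the general estimator class $W = X^{-T}\,\mathrm{diag}\{f\}\,X^T$ in that way and checks the Wiener weighting against the direct formula. The GSVD route, which works on the data matrices themselves, is generally preferred numerically because it avoids explicitly squaring the data.

```python
import numpy as np
from scipy.linalg import eigh

def gevd_estimator(Ruu, Rnn, f=None):
    """General estimator W = X^{-T} diag{f(sigma_i^2, eta_i^2)} X^T.

    eigh(Ruu, Rnn) returns Q with Q^T Ruu Q = diag(lam) and Q^T Rnn Q = I,
    i.e. X = Q^{-T}, sigma_i^2 = lam_i and eta_i^2 = 1 in this normalization."""
    lam, Q = eigh(Ruu, Rnn)
    if f is None:                              # Wiener weighting (sigma^2 - eta^2)/sigma^2
        g = (lam - 1.0) / lam
    else:
        g = f(lam, np.ones_like(lam))
    return Q @ np.diag(g) @ np.linalg.inv(Q)   # X^{-T} diag{g} X^T

# Consistency check against the direct formula Ruu^{-1} (Ruu - Rnn), synthetic matrices
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)); Rnn = A @ A.T + 6 * np.eye(6)
B = rng.standard_normal((6, 6)); Ruu = Rnn + B @ B.T
W1 = gevd_estimator(Ruu, Rnn)
W2 = np.linalg.solve(Ruu, Ruu - Rnn)
print(np.allclose(W1, W2))                     # True
```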

The expected value of the estimation error is

$E\{e_k\} = E\{s_k - W_{WF}^T u_k\} = (I - W_{WF}^T)\, E\{s_k\} - W_{WF}^T\, E\{n_k\},$

which is zero when both $s_k$ and $n_k$ are zero-mean. The error covariance matrix is computed as

$E\{e_k e_k^T\} = E\{(s_k - W_{WF}^T u_k)(s_k - W_{WF}^T u_k)^T\}$
$= E\{s_k s_k^T\} - W_{WF}^T E\{u_k s_k^T\} - E\{s_k u_k^T\} W_{WF} + W_{WF}^T E\{u_k u_k^T\} W_{WF}$
$= E\{s_k s_k^T\} - W_{WF}^T E\{u_k s_k^T\} - E\{s_k u_k^T\} W_{WF} + W_{WF}^T E\{u_k s_k^T\}$
$= E\{s_k s_k^T\} - E\{s_k s_k^T\} W_{WF}$
$= \left( E\{u_k u_k^T\} - E\{n_k n_k^T\} \right) (I - W_{WF})$
$= E\{u_k u_k^T\} - E\{n_k n_k^T\} - \left( E\{u_k u_k^T\} - E\{n_k n_k^T\} \right) + E\{n_k n_k^T\}\, W_{WF} = E\{n_k n_k^T\}\, W_{WF},$

where we have used $E\{u_k u_k^T\} W_{WF} = E\{u_k s_k^T\} = E\{s_k s_k^T\} = E\{u_k u_k^T\} - E\{n_k n_k^T\}$. A similar formula is obtained in []. In particular, we are interested in the diagonal elements $\{E\{n_k n_k^T\} W_{WF}\}_{ii}$ of the error covariance matrix, since these elements indicate how well $\{s_k\}_i$ (the $i$-th component of $s_k$) is estimated.

2.4 White noise case

In the white noise case, we have

$E\{n_k n_k^T\} = \eta^2 I,$

with $\eta^2$ the power of the white noise process. Obviously this simplifies the formulas considerably. The joint diagonalization reduces to an eigenvalue decomposition of the form

$E\{u_k u_k^T\} = X\, \mathrm{diag}\{\sigma_i^2\}\, X^T, \qquad E\{n_k n_k^T\} = \eta^2 I = X\, (\eta^2 I)\, X^T,$

with $X$ an orthogonal matrix. By using $X^{-T} = X$ and $\eta_i^2 = \eta^2$, the optimal filter becomes

$W_{WF} = X\, \mathrm{diag}\left\{\frac{\sigma_i^2 - \eta^2}{\sigma_i^2}\right\}\, X^T.$

Often the noise power $\eta^2$ can be estimated from the smallest singular values of $E\{u_k u_k^T\}$ (e.g. after assuming a low-rank model for $E\{s_k s_k^T\}$, which is approximately valid for speech signals []). This means that speech detection is no longer necessary, and that the method also applies to non-speech applications. In the white noise case, the error covariance matrix $E\{e_k e_k^T\}$ reduces to

$E\{e_k e_k^T\} = E\{(s_k - W_{WF}^T u_k)(s_k - W_{WF}^T u_k)^T\} = \eta^2\, W_{WF}.$
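A compact numerical sketch of the white noise case is given below (a synthetic low-rank signal model and hypothetical parameter values, not taken from the report); it estimates the noise power from the smallest eigenvalues and checks that the resulting filter is symmetric with diagonal elements between 0 and 1, two properties discussed next.

```python
import numpy as np

rng = np.random.default_rng(3)
N, r, eta2 = 12, 4, 0.2
H = rng.standard_normal((N, r))
Rss = H @ H.T                          # rank-r "speech" covariance (hypothetical model)
Ruu = Rss + eta2 * np.eye(N)           # white noise of power eta2 added

lam, X = np.linalg.eigh(Ruu)           # X orthogonal, eigenvalues ascending
eta2_hat = lam[:N - r].mean()          # noise power from the N-r smallest eigenvalues
W_wf = X @ np.diag((lam - eta2_hat) / lam) @ X.T

print("estimated noise power:", round(float(eta2_hat), 4))
print("W_WF symmetric       :", np.allclose(W_wf, W_wf.T))
print("diag(W_WF) range     :", round(float(np.diag(W_wf).min()), 4),
      "to", round(float(np.diag(W_wf).max()), 4))
```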

PS 1: In the white noise case, from the orthogonality of $X$, it follows that every diagonal element of $W_{WF}$ lies between 0 and 1:

$\{W_{WF}\}_{ii} = X(i,:)\; \mathrm{diag}\left\{\frac{\sigma_j^2 - \eta^2}{\sigma_j^2}\right\}\; X(i,:)^T = \sum_{j=1}^{N} \frac{\sigma_j^2 - \eta^2}{\sigma_j^2}\, X(i,j)^2 \;\le\; \sum_{j=1}^{N} X(i,j)^2 = 1 \qquad (\text{since } \sigma_j^2 \ge \eta^2),$

which means that the estimate for $\{s_k\}_i$ contains a contribution $\alpha \{u_k\}_i$ with $0 \le \alpha \le 1$, and that $\alpha = 1$ in the noiseless case ($\eta^2 = 0$).

PS 2: In the white noise case, $W_{WF}$ is a symmetric matrix. This means that if the estimate for $\{s_k\}_i$ contains a contribution $\alpha \{u_k\}_j$, then the estimate for $\{s_k\}_j$ contains a contribution $\alpha \{u_k\}_i$ ('reciprocity').

2.5 Time series filtering

Let us now assume the vector $u_k$ is taken from a time series $u(k)$, i.e.

$u_k = \begin{bmatrix} u(k) & u(k-1) & u(k-2) & \cdots & u(k-N+1) \end{bmatrix}^T$

and similarly

$s_k = \begin{bmatrix} s(k) & s(k-1) & s(k-2) & \cdots & s(k-N+1) \end{bmatrix}^T.$

The data matrices $U_k \in \mathbb{R}^{p \times N}$ and $N_k \in \mathbb{R}^{q \times N}$, as defined above, are now Toeplitz matrices, e.g.

$U_k = \begin{bmatrix} u(k) & u(k-1) & u(k-2) & \cdots & u(k-N+1) \\ u(k+1) & u(k) & u(k-1) & \cdots & u(k-N+2) \\ u(k+2) & u(k+1) & u(k) & \cdots & u(k-N+3) \\ \vdots & & & & \vdots \\ u(k+p-1) & u(k+p-2) & u(k+p-3) & \cdots & u(k+p-N) \end{bmatrix}.$

For wide-sense stationary (WSS) processes $s(k)$, the autocorrelation function depends only on the time difference and is a symmetric function,

$\phi(\tau) = E\{s(k)\, s(k-\tau)\}, \qquad \phi(\tau) = \phi(-\tau),$

such that the correlation matrices $E\{u_k u_k^T\}$ and $E\{s_k s_k^T\}$ are symmetric Toeplitz matrices, e.g.

$E\{s_k s_k^T\} = \begin{bmatrix} \phi(0) & \phi(1) & \phi(2) & \cdots & \phi(N-1) \\ \phi(1) & \phi(0) & \phi(1) & \cdots & \phi(N-2) \\ \phi(2) & \phi(1) & \phi(0) & \cdots & \phi(N-3) \\ \vdots & & & & \vdots \\ \phi(N-1) & \phi(N-2) & \phi(N-3) & \cdots & \phi(0) \end{bmatrix}.$

Symmetric Toeplitz matrices belong to the class of double symmetric matrices, which are symmetric about both the main diagonal and the secondary diagonal. The eigenvectors of such matrices are known to have special symmetry properties [][8]. For the specific notation and properties, we refer to Appendix B.

Theorem 1. If the filter $W_{WF}$ is constructed according to the Wiener filter formulas above, then $W_{WF}$ satisfies

$W_{WF} = J\, W_{WF}\, J, \qquad W_{WF}^T = J\, W_{WF}^T\, J,$

with $J$ the reversal matrix with all ones along its secondary diagonal and zeros everywhere else (as defined in Appendix B). These properties hold in the white noise case as well as in the coloured noise case.

Proof: Since $E\{u_k u_k^T\}$ and $E\{n_k n_k^T\}$ are symmetric Toeplitz, they satisfy

$J\, E\{u_k u_k^T\}\, J = E\{u_k u_k^T\}, \qquad J\, E\{n_k n_k^T\}\, J = E\{n_k n_k^T\}.$

According to the lemmas in Appendix B, it follows that

$J\, E\{u_k u_k^T\}^{-1}\, J = E\{u_k u_k^T\}^{-1}, \qquad J\, E\{u_k u_k^T\}^{-1} E\{n_k n_k^T\}\, J = E\{u_k u_k^T\}^{-1} E\{n_k n_k^T\}.$

The optimal filter $W_{WF}$ defined above is

$W_{WF} = I - E\{u_k u_k^T\}^{-1} E\{n_k n_k^T\}.$

From this, it follows that

$J\, W_{WF}\, J = J \left( I - E\{u_k u_k^T\}^{-1} E\{n_k n_k^T\} \right) J = I - E\{u_k u_k^T\}^{-1} E\{n_k n_k^T\} = W_{WF}.$

According to the lemma in Appendix B, it then also follows that $J\, W_{WF}^T\, J = W_{WF}^T$. $\Box$
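The property of Theorem 1 is easy to verify numerically. The sketch below (synthetic symmetric Toeplitz covariances with hypothetical autocorrelation sequences, not the report's data) checks $J W_{WF} J = W_{WF}$ for a coloured-noise example.

```python
import numpy as np
from scipy.linalg import toeplitz

N = 7
rho_s = 0.9 ** np.arange(N)            # hypothetical speech autocorrelation phi_s(tau)
rho_n = 0.3 ** np.arange(N)            # hypothetical (coloured) noise autocorrelation
Rss, Rnn = toeplitz(rho_s), toeplitz(rho_n)
Ruu = Rss + Rnn

W = np.eye(N) - np.linalg.solve(Ruu, Rnn)      # W_WF = I - Ruu^{-1} Rnn
J = np.fliplr(np.eye(N))                        # reversal matrix

print(np.allclose(J @ W @ J, W))                # True (coloured noise case)
print(np.allclose(J @ W.T @ J, W.T))            # True
```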

The properties $J W_{WF} J = W_{WF}$ and $J W_{WF}^T J = W_{WF}^T$ mean that the $i$-th row/column of $W_{WF}$ is equal to the $(N+1-i)$-th row/column in reverse order. In the white noise case $W_{WF}$ is a symmetric matrix. From the property $J W_{WF} J = W_{WF}$ it then follows that $W_{WF}$ is a double-symmetric matrix in the white noise case.

Theorem 2. If the filter $W_{WF}$ belongs to the more general class of estimators defined above,

$W_{WF} = X^{-T}\, \mathrm{diag}\{f(\sigma_i, \eta_i)\}\, X^T,$

the properties of Theorem 1 still hold, in the white noise case as well as in the coloured noise case.

Proof: The joint diagonalization of $E\{u_k u_k^T\}$ and $E\{n_k n_k^T\}$ is

$E\{u_k u_k^T\} = X\, \mathrm{diag}\{\sigma_i^2\}\, X^T, \qquad E\{n_k n_k^T\} = X\, \mathrm{diag}\{\eta_i^2\}\, X^T.$

Therefore

$E\{u_k u_k^T\}^{-1} E\{n_k n_k^T\} = X^{-T}\, \mathrm{diag}\left\{\frac{\eta_i^2}{\sigma_i^2}\right\}\, X^T$

is the eigenvector decomposition of $E\{u_k u_k^T\}^{-1} E\{n_k n_k^T\}$, with $X$ an invertible, but not necessarily orthogonal matrix (orthogonal only in the white noise case). Because

$J\, E\{u_k u_k^T\}^{-1} E\{n_k n_k^T\}\, J = E\{u_k u_k^T\}^{-1} E\{n_k n_k^T\},$

the eigenvectors (columns of $X^{-T}$) are known to have symmetry properties, in particular (see Appendix B)

$J\, X^{-T} = X^{-T}\, \mathrm{diag}\{\pm 1\},$

i.e. every eigenvector is either symmetric or skew-symmetric. With this, one obtains

$J\, W_{WF}\, J = J\, X^{-T}\, \mathrm{diag}\{f(\sigma_i, \eta_i)\}\, X^T\, J = X^{-T}\, \mathrm{diag}\{\pm 1\}\, \mathrm{diag}\{f(\sigma_i, \eta_i)\}\, \mathrm{diag}\{\pm 1\}\, X^T = X^{-T}\, \mathrm{diag}\{f(\sigma_i, \eta_i)\}\, X^T = W_{WF}. \;\Box$

Rank truncation, for instance, is the basis for a popular estimation procedure in the white noise case [], where the components above the noise level are retained and the others discarded,

$f(\sigma_i, \eta) = \begin{cases} 1 & \text{if } \sigma_i^2 > \eta^2 \\ 0 & \text{if } \sigma_i^2 \le \eta^2. \end{cases}$

If we consider only the first generalized eigenvector, corresponding to the maximum generalized eigenvalue, i.e.

$f(\sigma_i, \eta_i) = \begin{cases} 1 & \text{if } \sigma_i/\eta_i \text{ is maximal} \\ 0 & \text{otherwise,} \end{cases}$

then the estimate $\hat{s}_k = W_{WF}^T u_k$ will have maximal signal-to-noise ratio (SNR) [9], but the signal will be distorted (for some applications this distortion can however be tolerated). This means that the optimal filter $W_{WF}$ defined above with $f(\sigma_i, \eta_i) = (\sigma_i^2 - \eta_i^2)/\sigma_i^2$ will not produce maximal signal-to-noise ratio. Instead, by minimizing the mean squared error (MSE), this filter also takes signal distortion into account.

Note that an estimate $\hat{s}_k$ for $s_k$ is obtained as

$\hat{s}_k = \begin{bmatrix} \hat{s}(k) & \hat{s}(k-1) & \hat{s}(k-2) & \cdots & \hat{s}(k-N+1) \end{bmatrix}^T = W_{WF}^T\, u_k.$

We will use a more explicit notation as follows:

$\begin{bmatrix} \hat{s}_{k:k-N+1}(k) \\ \hat{s}_{k:k-N+1}(k-1) \\ \hat{s}_{k:k-N+1}(k-2) \\ \vdots \\ \hat{s}_{k:k-N+1}(k-N+1) \end{bmatrix} = W_{WF}^T \begin{bmatrix} u(k) \\ u(k-1) \\ u(k-2) \\ \vdots \\ u(k-N+1) \end{bmatrix},$

where $\hat{s}_{k:k-N+1}(l)$ means that an estimate for $s(l)$ is obtained as a linear combination of $u(k), u(k-1), \ldots, u(k-N+1)$. For $N$ odd, the middle row of $W_{WF}^T$ produces the estimate $\hat{s}_{k:k-N+1}(k - \frac{N-1}{2})$, where $s(k - \frac{N-1}{2})$ is estimated from $u(k - \frac{N-1}{2})$ together with $\frac{N-1}{2}$ earlier samples and $\frac{N-1}{2}$ later samples of $u$. The property $J W_{WF}^T J = W_{WF}^T$ then indicates that for $N$ odd, the middle row of $W_{WF}^T$ is symmetric, and hence represents a linear phase filter. Note that a zero phase property has been attributed to an SVD and rank truncation based estimator for the white noise case, if an additional averaging step (see also Section 2.6) is included []. For the coloured noise case [][][], a similar linear phase property had apparently not been derived yet.

2.6 Time series filtering and averaging

From

$\begin{bmatrix} \hat{s}_{k:k-N+1}(k) & \hat{s}_{k:k-N+1}(k-1) & \cdots & \hat{s}_{k:k-N+1}(k-N+1) \end{bmatrix}^T = W_{WF}^T\, u_k$

it follows that

$\begin{bmatrix} \hat{s}_{k:k-N+1}(k) & \hat{s}_{k+1:k-N+2}(k+1) & \cdots & \hat{s}_{k+N-1:k}(k+N-1) \\ \hat{s}_{k:k-N+1}(k-1) & \hat{s}_{k+1:k-N+2}(k) & \cdots & \hat{s}_{k+N-1:k}(k+N-2) \\ \vdots & \vdots & & \vdots \\ \hat{s}_{k:k-N+1}(k-N+1) & \hat{s}_{k+1:k-N+2}(k-N+2) & \cdots & \hat{s}_{k+N-1:k}(k) \end{bmatrix} = W_{WF}^T \begin{bmatrix} u_k & u_{k+1} & \cdots & u_{k+N-1} \end{bmatrix}.$

It is seen that several (at most $N$) estimates are obtained for one and the same sample $s(l)$. As an example, $N$ estimates for $s(k)$ are available on the main diagonal. If $w(i,j)$ denotes the $(i,j)$-element of $W_{WF}$, one can obtain an explicit formula for all these estimates together:

$\begin{bmatrix} \hat{s}_{k:k-N+1}(k) \\ \hat{s}_{k+1:k-N+2}(k) \\ \vdots \\ \hat{s}_{k+N-2:k-1}(k) \\ \hat{s}_{k+N-1:k}(k) \end{bmatrix} = \mathcal{W}_{WF}^T \begin{bmatrix} u(k+N-1) \\ u(k+N-2) \\ \vdots \\ u(k+1) \\ u(k) \\ u(k-1) \\ \vdots \\ u(k-N+1) \end{bmatrix},$

where $\mathcal{W}_{WF}^T$ is the $N \times (2N-1)$ banded matrix whose $i$-th row contains the coefficients $w(1,i), w(2,i), \ldots, w(N,i)$, preceded by $N-i$ zeros and followed by $i-1$ zeros:

$\mathcal{W}_{WF}^T = \begin{bmatrix} 0 & \cdots & 0 & w(1,1) & \cdots & w(N-1,1) & w(N,1) \\ 0 & \cdots & w(1,2) & w(2,2) & \cdots & w(N,2) & 0 \\ \vdots & & & & & & \vdots \\ w(1,N) & w(2,N) & \cdots & w(N,N) & 0 & \cdots & 0 \end{bmatrix}.$

From $J\, W_{WF}^T\, J = W_{WF}^T$ it immediately follows that

$\mathcal{W}_{WF}^T = J\, \mathcal{W}_{WF}^T\, J,$

with reversal matrices $J$ of the appropriate dimensions ($N \times N$ on the left and $(2N-1) \times (2N-1)$ on the right). The question now arises which estimate, out of the $N$ available estimates for $s(k)$, is the best. The answer is given by the error covariance matrix (see Section 2.3):

$E\{e_k e_k^T\} = E\{n_k n_k^T\}\, W_{WF}.$

The smallest element on the main diagonal of the error covariance matrix corresponds to the best estimator. From here on the best estimator, which is the corresponding row of $W_{WF}^T$, will be denoted as $w_{WF}^{\min}$. The question remains whether an even better estimate for $s(k)$ can be obtained by linearly combining the $N$ available estimates. This question is apparently not easily answered. An obvious choice could be to average over all available estimates, a technique which is often applied to rank truncation based estimation [][][][][]:

$\tilde{s}_{k+N-1:k-N+1}(k) = \underbrace{\begin{bmatrix} \frac{1}{N} & \frac{1}{N} & \cdots & \frac{1}{N} \end{bmatrix} \mathcal{W}_{WF}^T}_{\tilde{w}^T} \begin{bmatrix} u(k+N-1) \\ u(k+N-2) \\ \vdots \\ u(k+1) \\ u(k) \\ u(k-1) \\ \vdots \\ u(k-N+1) \end{bmatrix}.$

Here $\tilde{s}_{k+N-1:k-N+1}(k)$ is estimated from $u(k)$ together with $N-1$ earlier samples and $N-1$ later samples. The $(2N-1)$-taps filter $\tilde{w}$ is obtained by averaging over the available $N$-taps filters $W_{WF}^T(i,:)$, each embedded at its own alignment. From the symmetry property of $W_{WF}$ it is readily seen that $\tilde{w}$ is symmetric, and hence represents a linear phase filter. A crucial question is whether the $(2N-1)$-taps estimator $\tilde{w}$ is better than the individual $N$-taps estimators $W_{WF}^T(i,:)$ it is computed from. Specifically, $\tilde{w}$ should be compared with the symmetric middle row of $W_{WF}^T$ (if $N$ is odd), which represents a linear phase filter that uses $\frac{N-1}{2}$ earlier samples and $\frac{N-1}{2}$ later samples.

First, it can be verified that $\tilde{w}$ is not an optimal filter, i.e.

$\tilde{s}_{k+N-1:k-N+1}(k) \neq \hat{s}_{k+N-1:k-N+1}(k).$

Note that $\tilde{s}_{k+N-1:k-N+1}(k)$ corresponds to a linear-phase $(2N-1)$-taps estimator $\tilde{w}$, obtained by averaging over a collection of $N$-taps estimators. On the other hand, $\hat{s}_{k+N-1:k-N+1}(k)$ corresponds to a linear-phase $(2N-1)$-taps estimator $\hat{w}$, which is obtained by applying the usual Wiener filter formulas to a $(2N-1)$-dimensional vector $u_k$. So, in general, $\hat{w}$ is a function of $\phi(0), \phi(1), \ldots, \phi(2N-2)$, whereas $\tilde{w}$ is only a function of $\phi(0), \phi(1), \ldots, \phi(N-1)$. This means that $\hat{s}_{k+N-1:k-N+1}(k)$ and $\tilde{s}_{k+N-1:k-N+1}(k)$ are not the same, except for contrived examples. The following example further illustrates this.

Example: Consider a white noise case in which the noise dominates, i.e. the noise power $\eta^2$ is large compared to the signal correlation values $\phi(\tau)$. Then

$W_{WF} = E\{u_k u_k^T\}^{-1} E\{s_k s_k^T\} \;\propto\; E\{s_k s_k^T\},$

where the correlation matrix $E\{s_k s_k^T\}$ is the symmetric Toeplitz matrix built from $\phi(0), \phi(1), \ldots, \phi(N-1)$. The matrix $\mathcal{W}_{WF}^T$ then has the banded form described above, with $w(i,j) \propto \phi(|i-j|)$. It is readily verified that the $(2N-1)$-taps estimator $\tilde{w}$, obtained through averaging, is

$\tilde{w} \;\propto\; \begin{bmatrix} \frac{1}{N}\phi(N-1) & \frac{2}{N}\phi(N-2) & \cdots & \frac{N-1}{N}\phi(1) & \phi(0) & \frac{N-1}{N}\phi(1) & \cdots & \frac{2}{N}\phi(N-2) & \frac{1}{N}\phi(N-1) \end{bmatrix},$

whereas the corresponding $(2N-1)$-taps optimal linear-phase filter $\hat{w}$ (the middle row of the Wiener filter applied to a $(2N-1)$-dimensional vector) is

$\hat{w} \;\propto\; \begin{bmatrix} \phi(N-1) & \phi(N-2) & \cdots & \phi(1) & \phi(0) & \phi(1) & \cdots & \phi(N-2) & \phi(N-1) \end{bmatrix}.$

Secondly, simulations indicate that the obtained error variance for the $(2N-1)$-taps estimator $\tilde{w}$ is mostly comparable to the error variances of the original $N$-taps estimators $W_{WF}^T(i,:)$, and always larger than the error variance of the best $N$-taps estimator $w_{WF}^{\min}$.

Simulation example: Consider two (stationary) unit-variance white noise processes $s(k)$ and $n(k)$, $k = 1 \ldots L$. The input signal $u(k)$ is constructed as the sum of the useful signal $s(k)$ and the scaled noise $n(k)$,

$u(k) = s(k) + \eta\, n(k), \qquad k = 1 \ldots L,$

with $\eta^2$ the power of the noise component, which is assumed to be known. The correlation matrix $E\{u_k u_k^T\} \in \mathbb{R}^{N \times N}$ is computed as

$E\{u_k u_k^T\} = \frac{1}{L}\, U^T U,$

with $L$ the length of the signals and $U \in \mathbb{R}^{L \times N}$ the data matrix defined as above. Since the noise is white, the noise correlation matrix $E\{n_k n_k^T\} \in \mathbb{R}^{N \times N}$ is

$E\{n_k n_k^T\} = \eta^2 I.$

Both the optimal filter $W_{WF}$, which consists of $N$ $N$-taps estimators $W_{WF}^T(i,:)$, and the $(2N-1)$-taps estimator $\tilde{w}$, obtained through averaging, are computed from these correlation matrices. Also $\hat{s}(k) = U\, W_{WF}$, which consists of $N$ estimates $\hat{s}_i(k) = U\, W_{WF}(:,i)$, and $\tilde{s}(k)$, obtained through averaging, are computed. The error variances $\hat{\sigma}_i^2$, $i = 1 \ldots N$, and $\tilde{\sigma}^2$ are defined as

$\hat{\sigma}_i^2 = \frac{1}{L} \sum_{k=1}^{L} \left( s(k) - \hat{s}_i(k) \right)^2, \qquad i = 1 \ldots N,$

$\tilde{\sigma}^2 = \frac{1}{L} \sum_{k=1}^{L} \left( s(k) - \tilde{s}(k) \right)^2.$

For $N = 9$, the error variances $\hat{\sigma}_i^2$, $i = 1 \ldots N$, and $\tilde{\sigma}^2$ are compared for two different noise powers in Figure 3. As can be seen from the simulations, the $(2N-1)$-taps estimator $\tilde{w}$ is not always better than the individual $N$-taps estimators $W_{WF}(:,i)$ it is computed from. Moreover, there always seem to exist $N$-taps estimators $W_{WF}(:,i)$ which give rise to a lower error variance than the $(2N-1)$-taps estimator $\tilde{w}$.

[Figure 3: Error variance comparison between the $(2N-1)$-taps estimator $\tilde{w}$ and the original $N$-taps estimators $W_{WF}(:,i)$ for two different noise powers ($N = 9$).]

Hence averaging does not seem to be a well-founded operation, while on the other hand it certainly increases computational complexity, since it requires $(2N-1)$-taps filtering instead of $N$-taps filtering. If minimal error variance is sought, we therefore suggest picking the $N$-taps estimator $w_{WF}^{\min}$ corresponding to the smallest diagonal element of the error covariance matrix. If the linear phase property is desirable, we suggest picking the $N$-taps estimator given by the middle row of $W_{WF}^T$ (for $N$ odd).
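The comparison described in this simulation example can be reproduced along the following lines. The sketch below uses a coloured synthetic signal and illustrative parameter values (not the report's exact experiment): it computes the $N$ individual estimators, selects the best one from the error covariance diagonal, builds the averaged $(2N-1)$-taps estimator and compares the error variances.

```python
import numpy as np

def data_matrix(x, N):
    """Stack the delay vectors x_k = [x(k), x(k-1), ..., x(k-N+1)] as rows."""
    return np.stack([x[k - N + 1:k + 1][::-1] for k in range(N - 1, len(x))])

rng = np.random.default_rng(4)
N, L, eta2 = 9, 20000, 1.0
s = np.convolve(rng.standard_normal(L), np.ones(4) / 2, mode="same")   # coloured signal
u = s + np.sqrt(eta2) * rng.standard_normal(L)                          # noisy observation

U = data_matrix(u, N)
Ruu = U.T @ U / len(U)                       # estimate of E{u_k u_k^T}
Rnn = eta2 * np.eye(N)                       # white noise with known power
W = np.linalg.solve(Ruu, Ruu - Rnn)          # W_WF; column i estimates s(k-i)

# Error variances of the N individual N-taps estimators
S = data_matrix(s, N)                        # column i holds s(k-i)
err_ind = np.mean((S - U @ W) ** 2, axis=0)
best = int(np.argmin(np.diag(Rnn @ W)))      # smallest diagonal of E{n n^T} W_WF

# Averaged (2N-1)-taps estimator: embed each column at its own alignment
w_avg = np.zeros(2 * N - 1)
for i in range(N):
    w_avg[N - 1 - i:2 * N - 1 - i] += W[:, i] / N

U2 = data_matrix(u, 2 * N - 1)               # rows are [u(k+N-1), ..., u(k-N+1)]
err_avg = np.mean((s[N - 1:L - N + 1] - U2 @ w_avg) ** 2)

print("individual error variances:", np.round(err_ind, 3))
print("best individual estimator :", best, round(err_ind[best], 3))
print("averaged estimator        :", round(err_avg, 3))
```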

2.7 Multichannel time series filtering

Consider $M$ channels where each channel $m_j(k)$, $j = 1 \ldots M$, consists of a filtered version of the desired signal $s(k)$ and an additive noise term $n_j(k)$,

$m_j(k) = h_j(k) * s(k) + n_j(k),$

with $h_j(k)$ the impulse response of the $j$-th channel. This situation arises e.g. when a microphone array records both a desired signal and background noise in a room, as depicted in Figure 4.

[Figure 4: Microphone array recording a desired signal (direct path and reflections) and background noise (localized and interfering noise sources), followed by SVD-based multi-microphone signal enhancement.]

The vector $u_k \in \mathbb{R}^{MN}$ now takes the form

$u_k = \begin{bmatrix} m_{1,k} \\ m_{2,k} \\ \vdots \\ m_{M,k} \end{bmatrix}, \qquad \text{with} \qquad m_{j,k} = \begin{bmatrix} m_j(k) & m_j(k-1) & \cdots & m_j(k-N+1) \end{bmatrix}^T.$

The vectors $s_k$ and $n_k$ are defined similarly. The data matrix $U_k \in \mathbb{R}^{p \times MN}$ as defined above then takes the form

$U_k = \begin{bmatrix} U_{1,k} & U_{2,k} & \cdots & U_{M,k} \end{bmatrix},$

with

$U_{j,k} = \begin{bmatrix} m_j(k) & m_j(k-1) & \cdots & m_j(k-N+1) \\ m_j(k+1) & m_j(k) & \cdots & m_j(k-N+2) \\ \vdots & & & \vdots \\ m_j(k+p-1) & m_j(k+p-2) & \cdots & m_j(k+p-N) \end{bmatrix}.$

Using the same formulas as in the one-channel case, the optimal filter $W_{WF}$ and the best $(MN)$-taps estimator $w_{WF}^{\min}$ can be computed. For stationary signals we can use the same correlation matrices for all samples. Therefore the estimated signal $\hat{s}(k)$ can be computed as

$\begin{bmatrix} \hat{s}(k) & \hat{s}(k+1) & \cdots & \hat{s}(k+p-1) \end{bmatrix}^T = \begin{bmatrix} U_{1,k} & U_{2,k} & \cdots & U_{M,k} \end{bmatrix} \left( w_{WF}^{\min} \right)^T.$

This filter operation can be considered as a multichannel filter, where each of the $M$ channels is filtered with an $N$-taps filter $A_j$, where

$w_{WF}^{\min} = \begin{bmatrix} A_1 & A_2 & \cdots & A_M \end{bmatrix}.$

This is depicted in Figure 5.

[Figure 5: Multichannel filtering: each microphone signal $m_j$ is filtered with $A_j$ and the filter outputs are summed.]

If we consider no multipath effects, i.e. $h_j(k) = \delta(k)$ for each channel $j$, then we can prove some additional symmetry properties for the optimal filter $W_{WF}$. In this case the desired signal in each channel is $s(k)$,

$m_j(k) = s(k) + n_j(k).$

In the following we only consider symmetry properties for the 2-channel case; however, these properties can easily be extended to more than 2 channels. For 2 channels, we have the following data model,

$m_1(k) = s(k) + n_1(k), \qquad m_2(k) = s(k) + n_2(k),$

such that the vectors $u_k$ and $n_k$ can be written as

$u_k = \begin{bmatrix} m_{1,k} \\ m_{2,k} \end{bmatrix} = \begin{bmatrix} s_k \\ s_k \end{bmatrix} + \begin{bmatrix} n_{1,k} \\ n_{2,k} \end{bmatrix}, \qquad n_k = \begin{bmatrix} n_{1,k} \\ n_{2,k} \end{bmatrix}.$
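A minimal multichannel sketch (synthetic two-channel data and hypothetical parameters, not the report's simulation code) that stacks the per-channel data matrices, computes $W_{WF}$, selects $w_{WF}^{\min}$ from the error covariance diagonal and splits it into the per-channel filters $A_j$ could look as follows.

```python
import numpy as np

def data_matrix(x, N):
    """Stack the delay vectors x_k = [x(k), x(k-1), ..., x(k-N+1)] as rows."""
    return np.stack([x[k - N + 1:k + 1][::-1] for k in range(N - 1, len(x))])

rng = np.random.default_rng(5)
M, N, L = 2, 8, 20000
s = np.convolve(rng.standard_normal(L), np.ones(5) / 5, mode="same")    # "speech"
noise = [0.6 * rng.standard_normal(L) for _ in range(M)]
mics = [s + noise[j] for j in range(M)]                                  # m_j(k) = s(k) + n_j(k)

U = np.hstack([data_matrix(m, N) for m in mics])      # speech+noise data matrix, p x MN
Nmat = np.hstack([data_matrix(n, N) for n in noise])  # noise data matrix

Ruu = U.T @ U / len(U)
Rnn = Nmat.T @ Nmat / len(Nmat)
W = np.linalg.solve(Ruu, Ruu - Rnn)

best = int(np.argmin(np.diag(Rnn @ W)))               # best (MN)-taps estimator w_min
w_min = W[:, best]
A = w_min.reshape(M, N)                               # per-channel N-taps filters A_j
delay = best % N                                      # which delayed sample of s it estimates

s_hat = U @ w_min                                     # filter-and-sum output
s_ref = s[N - 1 - delay:L - delay]                    # the delayed sample being estimated
print("noisy-mic MSE:", round(float(np.mean((s[N - 1:] - mics[0][N - 1:]) ** 2)), 4))
print("enhanced MSE :", round(float(np.mean((s_ref - s_hat) ** 2)), 4))
```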

Consider the following notations for the correlation matrices:

$R_{uu} = E\{u_k u_k^T\}, \qquad R_{nn} = E\{n_k n_k^T\}, \qquad R_s = E\{s_k s_k^T\},$
$R_{n_1} = E\{n_{1,k} n_{1,k}^T\}, \qquad R_{n_2} = E\{n_{2,k} n_{2,k}^T\},$
$R_{n_{12}} = E\{n_{1,k} n_{2,k}^T\} = R_{n_{21}}^T, \qquad R_{n_{21}} = E\{n_{2,k} n_{1,k}^T\} = R_{n_{12}}^T.$

If we assume that the desired signal and the noise are uncorrelated, then the correlation matrix $R_{uu}$ can be written as

$R_{uu} = E\left\{ \begin{bmatrix} s_k \\ s_k \end{bmatrix} \begin{bmatrix} s_k^T & s_k^T \end{bmatrix} \right\} + E\left\{ \begin{bmatrix} n_{1,k} \\ n_{2,k} \end{bmatrix} \begin{bmatrix} n_{1,k}^T & n_{2,k}^T \end{bmatrix} \right\} = R_{ss} + R_{nn},$

with

$R_{ss} = \begin{bmatrix} R_s & R_s \\ R_s & R_s \end{bmatrix}, \qquad R_{nn} = \begin{bmatrix} R_{n_1} & R_{n_{12}} \\ R_{n_{21}} & R_{n_2} \end{bmatrix} = \begin{bmatrix} R_{n_1} & R_{n_{12}} \\ R_{n_{12}}^T & R_{n_2} \end{bmatrix}.$

First we discuss the symmetry properties of the correlation matrix $R_{ss}$, and the conditions under which the correlation matrix $R_{nn}$ exhibits these symmetry properties.

Property 1. Because of its specific form, the correlation matrix $R_{ss}$ exhibits the following properties (for notation, see Appendix B): $R_{ss}$ is a symmetric block-Toeplitz matrix, $J\, R_{ss}\, J = R_{ss}$ and $S\, R_{ss}\, S = R_{ss}$, with $J$ the $2N \times 2N$ reversal matrix and $S = \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix}$ the block-swap matrix.

Proof: Since $R_s$ is a symmetric Toeplitz matrix, it is easily verified that $R_{ss}$ is block-Toeplitz and that

$R_{ss}^T = \begin{bmatrix} R_s^T & R_s^T \\ R_s^T & R_s^T \end{bmatrix} = \begin{bmatrix} R_s & R_s \\ R_s & R_s \end{bmatrix} = R_{ss}.$

Since $R_s$ is a symmetric Toeplitz matrix, $J_N R_s J_N = R_s$, and

$J\, R_{ss}\, J = \begin{bmatrix} J_N R_s J_N & J_N R_s J_N \\ J_N R_s J_N & J_N R_s J_N \end{bmatrix} = R_{ss}.$

Since $R_{ss}$ is block-Toeplitz and block-symmetric,

$S\, R_{ss}\, S = \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix} \begin{bmatrix} R_s & R_s \\ R_s & R_s \end{bmatrix} \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix} = R_{ss}. \;\Box$

For the noise correlation matrix $R_{nn}$ we now discuss the conditions under which $R_{nn}$ exhibits these symmetry properties.

Property 2. The noise correlation matrix $R_{nn}$ has the property $J\, R_{nn}\, J = R_{nn}$ if and only if

$J_N R_{n_1} J_N = R_{n_2} \qquad \text{and} \qquad J_N R_{n_{12}} J_N = R_{n_{12}}^T \quad (\text{i.e. } R_{n_{12}} \text{ is symmetric about its secondary diagonal}).$

Proof: Trivial, by equating the following expressions:

$R_{nn} = \begin{bmatrix} R_{n_1} & R_{n_{12}} \\ R_{n_{12}}^T & R_{n_2} \end{bmatrix}, \qquad J\, R_{nn}\, J = \begin{bmatrix} J_N R_{n_2} J_N & J_N R_{n_{12}}^T J_N \\ J_N R_{n_{12}} J_N & J_N R_{n_1} J_N \end{bmatrix}. \;\Box$

A sufficient condition for $J_N R_{n_{12}} J_N = R_{n_{12}}^T$ is $R_{n_{12}}$ being Toeplitz. For stationary noise sources, $R_{n_1}$ and $R_{n_2}$ are symmetric Toeplitz matrices, such that the condition $J_N R_{n_1} J_N = R_{n_2}$ implies that $R_{n_1} = R_{n_2}$.

Property 3. The noise correlation matrix $R_{nn}$ has the property $S\, R_{nn}\, S = R_{nn}$ if and only if

$R_{n_1} = R_{n_2} \qquad \text{and} \qquad R_{n_{12}} = R_{n_{12}}^T \quad (R_{n_{12}} \text{ is symmetric}).$

Proof: Trivial, by equating the following expressions:

$R_{nn} = \begin{bmatrix} R_{n_1} & R_{n_{12}} \\ R_{n_{12}}^T & R_{n_2} \end{bmatrix}, \qquad S\, R_{nn}\, S = \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix} \begin{bmatrix} R_{n_1} & R_{n_{12}} \\ R_{n_{12}}^T & R_{n_2} \end{bmatrix} \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix} = \begin{bmatrix} R_{n_2} & R_{n_{12}}^T \\ R_{n_{12}} & R_{n_1} \end{bmatrix}. \;\Box$

For different types of noise correlation matrices $R_{nn}$ we now discuss the symmetry properties of the optimal Wiener filter $W_{WF}$, which can be written as

$W_{WF} = X^{-T}\, \mathrm{diag}\left\{\frac{\sigma_i^2 - \eta_i^2}{\sigma_i^2}\right\}\, X^T = R_{uu}^{-1} (R_{uu} - R_{nn}) = R_{uu}^{-1}\, R_{ss} = R_{uu}^{-1} \begin{bmatrix} R_s & R_s \\ R_s & R_s \end{bmatrix},$

and the symmetry properties of the more general class of estimators

$W = X^{-T}\, \mathrm{diag}\{f(\sigma_i, \eta_i)\}\, X^T.$

For convenience, we partition the matrix $W_{WF}$ into four $N \times N$ blocks:

$W_{WF} = \begin{bmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{bmatrix}.$

Property 4. Because of the specific form of the optimal Wiener filter $W_{WF}$ above, this filter exhibits the properties

$W_{11} = W_{12}, \qquad W_{21} = W_{22}.$

Proof: As can easily be verified, the two block columns of $R_{ss}$ are identical, so that

$W_{WF} = R_{uu}^{-1} \begin{bmatrix} R_s & R_s \\ R_s & R_s \end{bmatrix} = \begin{bmatrix} R_{uu}^{-1} \begin{bmatrix} R_s \\ R_s \end{bmatrix} & R_{uu}^{-1} \begin{bmatrix} R_s \\ R_s \end{bmatrix} \end{bmatrix},$

i.e. the two block columns of $W_{WF}$ are identical as well. $\Box$

Case 1: The noise correlation matrix $R_{nn}$ has the form

$R_{nn} = \begin{bmatrix} R_{n_1} & R_{n_{12}} \\ R_{n_{12}}^T & R_{n_1} \end{bmatrix},$

with $R_{n_1}$ symmetric Toeplitz and $R_{n_{12}}$ Toeplitz (but not necessarily symmetric).

Since $J R_{ss} J = R_{ss}$ and $J R_{nn} J = R_{nn}$ (see Property 2), it follows that $J R_{uu} J = R_{uu}$, $J R_{uu}^{-1} J = R_{uu}^{-1}$ and $J R_{uu}^{-1} R_{nn} J = R_{uu}^{-1} R_{nn}$. From this last property it follows, similarly to the proof of Theorem 1, that the optimal filter satisfies $J W_{WF} J = W_{WF}$. Therefore the matrix $W_{WF}$ has the form

$W_{WF} = \begin{bmatrix} W_{11} & W_{12} \\ J_N W_{12} J_N & J_N W_{11} J_N \end{bmatrix}.$

Similarly to the proof of Theorem 2, one would expect that the general class of estimators exhibits the same symmetry properties as the optimal filter. However, not all eigenvectors of the matrix $R_{uu}^{-1} R_{nn}$ are symmetric or skew-symmetric, such that for $X^{-T}$ (whose columns are the eigenvectors)

$J\, X^{-T} \neq X^{-T}\, \mathrm{diag}\{\pm 1\}.$

This can be explained by the fact that $R_{uu}^{-1} R_{nn}$ has an eigenvalue with multiplicity $N$ ($R_{uu}$ and $R_{nn}$ have $N$ generalized eigenvalues which coincide, since $R_{ss} = R_{uu} - R_{nn}$ has rank at most $N$). The eigenspace corresponding to this eigenvalue consists of $N$ eigenvectors which are in general a linear combination of symmetric and skew-symmetric vectors, and hence are neither symmetric nor skew-symmetric []. Therefore the general class of estimators exhibits no such symmetry properties at all. However, if we only retain the $N$ eigenvectors $X_1$ which are symmetric or skew-symmetric and discard the $N$ eigenvectors $X_2$ which are neither symmetric nor skew-symmetric, then we can prove the same symmetry properties for the general class of estimators. If we assume that the matrix $X^{-T}$ has the form

$X^{-T} = \begin{bmatrix} X_1 & X_2 \end{bmatrix},$

and the diagonal matrix is of the form

$\mathrm{diag}\{f(\sigma_i, \eta_i)\} = \begin{bmatrix} \Delta & 0 \\ 0 & 0 \end{bmatrix},$

with $\Delta \in \mathbb{R}^{N \times N}$ a diagonal matrix, then the general class of estimators exhibits the same symmetry properties as the optimal filter.

Case 2: The noise correlation matrix $R_{nn}$ has the form

$R_{nn} = \begin{bmatrix} R_{n_1} & R_{n_{12}} \\ R_{n_{12}} & R_{n_1} \end{bmatrix},$

with $R_{n_1}$ and $R_{n_{12}}$ symmetric Toeplitz. Because this is a special case of Case 1, the same symmetry property $J W_{WF} J = W_{WF}$ holds for the optimal filter. Since $S R_{ss} S = R_{ss}$ and $S R_{nn} S = R_{nn}$ (see Property 3), it follows that $S R_{uu} S = R_{uu}$,

$S R_{uu}^{-1} S = R_{uu}^{-1}$ and $S R_{uu}^{-1} R_{nn} S = R_{uu}^{-1} R_{nn}$. From this last property it follows, similarly to the proof of Theorem 1, that for the optimal filter

$S\, W_{WF}\, S = S \left( I - R_{uu}^{-1} R_{nn} \right) S = I - R_{uu}^{-1} R_{nn} = W_{WF}.$

Therefore the matrix $W_{WF}$ has the form

$W_{WF} = \begin{bmatrix} W_{11} & W_{12} \\ W_{12} & W_{11} \end{bmatrix}, \qquad \text{with} \quad J_N W_{11} J_N = W_{11}, \quad J_N W_{12} J_N = W_{12}.$

Together with Property 4 ($W_{11} = W_{12}$), this means that the filters for the 2 channels are equal.

For the general class of estimators, these symmetry properties do not hold in all cases. The reason is the same as for Case 1, i.e.

$J\, X^{-T} \neq X^{-T}\, \mathrm{diag}\{\pm 1\},$

with $X^{-T}$ the matrix containing the eigenvectors of $R_{uu}^{-1} R_{nn}$. The property which does hold in all cases is

$S\, X^{-T} = X^{-T}\, \mathrm{diag}\{\pm 1\}.$

Therefore the general class of estimators always satisfies $S W S = W$ and has the form

$W = \begin{bmatrix} W_{11} & W_{12} \\ W_{12} & W_{11} \end{bmatrix},$

but it only exhibits the additional symmetry properties of the optimal filter ($J W J = W$, with centro-symmetric blocks) if the diagonal matrix is of the form

$\mathrm{diag}\{f(\sigma_i, \eta_i)\} = \begin{bmatrix} \Delta & 0 \\ 0 & 0 \end{bmatrix},$

with $\Delta \in \mathbb{R}^{N \times N}$ a diagonal matrix, under the same conditions as in Case 1.

Case 3: If in Case 2 the noise sources $n_1(k)$ and $n_2(k)$ are uncorrelated ($R_{n_{12}} = 0$), then the noise correlation matrix $R_{nn}$ has the form

$R_{nn} = \begin{bmatrix} R_{n_1} & 0 \\ 0 & R_{n_1} \end{bmatrix},$

with $R_{n_1}$ symmetric Toeplitz. The conclusions regarding the symmetry properties are the same as in Case 2, for the optimal filter as well as for the general class of estimators.

Case 4: If in Case 3 the uncorrelated noise sources $n_1(k)$ and $n_2(k)$ are white noise sources with the same noise power, then the noise correlation matrix $R_{nn}$ has the form

$R_{nn} = \begin{bmatrix} \eta^2 I & 0 \\ 0 & \eta^2 I \end{bmatrix} = \eta^2 I.$

The conclusions regarding the symmetry properties are the same as in Case 3, except for the additional property that $W_{WF}$ is symmetric, for the optimal filter as well as for the general class of estimators. In this case the optimal filter $W_{WF}$ has the form

$W_{WF} = \begin{bmatrix} W_{11} & W_{12} \\ W_{12} & W_{11} \end{bmatrix}, \qquad \text{with} \quad W_{11} = J_N W_{11} J_N = W_{11}^T, \quad W_{12} = J_N W_{12} J_N = W_{12}^T.$

2.8 Conclusion

In this section we have described a class of SVD-based signal enhancement procedures, which amount to a specific optimal filtering technique for the case where the so-called 'desired response' signal $d_k = s_k$ cannot be observed. It has been shown that this optimal filter $W_{WF}$ can be written as a function of the generalized singular vectors and singular values of a so-called speech data matrix $U_k$ and noise data matrix $N_k$. When applying this filtering technique to time series, a number of simple symmetry properties have been derived, which prove to be valid for the white noise case as well as for the coloured noise case. Also the averaging step of the standard one-microphone SVD-based noise reduction techniques has been investigated, leading to serious doubts about the necessity of this averaging step, which increases computational complexity but does not improve performance. When applying the SVD-based optimal filtering technique to multiple channels, a number of additional symmetry properties can be derived, depending on the structure of the noise covariance matrix.

3 Beamforming behaviour of multichannel filtering

In this section we discuss the frequency and spatial filtering properties of the SVD-based estimators described in Section 2, when applied to multichannel noise reduction in speech signals. As already mentioned in Section 2, applying the SVD-based optimal filtering technique to multiple channels can be considered as a multichannel filtering operation, for which a beamforming interpretation can be given. These beamforming properties will be examined for different simulated situations. We consider a desired signal (broadband/smallband) arriving at a microphone array from different directions, and a diffuse or localized noise source. We also examine the performance of this noise reduction technique in a real-world situation. In the ideal case, the spatial beamforming pattern should amplify in the direction of the signal source and attenuate in the direction of the localized noise. We will see that the SVD-based optimal filtering technique exhibits such behaviour. In the following sections we further compare the SVD-based optimal filtering technique with standard beamforming algorithms. First we discuss the room configuration, the speech and noise signals used, and the SVD-based noise reduction technique in full detail.

3.1 Preliminaries

Consider a linear equi-spaced microphone array consisting of $M$ microphones. The microphone array records both a desired speech signal $s(k)$ and background noise $n(k)$, as depicted in Figure 6. The $j$-th microphone signal can be written as

$m_j(k) = s_j(k) + n_j(k), \qquad j = 1 \ldots M,$

where $s_j(k)$ is the speech component and $n_j(k)$ the noise component in the $j$-th microphone signal.

[Figure 6: Microphone array recording a desired signal (direct path and reflections) and background noise (localized and interfering noise sources), followed by SVD-based multi-microphone signal enhancement.]

In order to improve the signal-to-noise ratio of the microphone signals $m_j(k)$ and hence reduce the background noise, we use the multichannel filter structure depicted in Figure 7, which filters and sums the different microphone signals. The main

difficulty lies in finding the optimal filters $A_j$. For finding these filters we use the SVD-based optimal filtering technique described in Section 2.

[Figure 7: Multichannel filtering: each microphone signal $m_j$ is filtered with $A_j$ and the filter outputs are summed.]

The clean speech signal $s(k)$ is an 8 kHz speech signal, which is depicted in Figure 8. The dotted line indicates the region of speech activity (speech/silence detection). The signal $s_j(k)$ is the speech component in the $j$-th microphone signal, which is the clean speech signal $s(k)$ filtered with the acoustic impulse response of the room. In this section we only consider a pure-delay environment without multipath effects, where the speech signals $s_j(k)$ are delayed versions of each other. If the desired signal impinges on the microphone array at an angle $\theta$, the delay (number of samples) between two adjacent microphones is

$\tau = \frac{d \cos\theta\; f_s}{c},$

with $d$ the distance between the microphones, $c$ the speed of sound ($c \approx 340$ m/s) and $f_s$ the sampling frequency, such that

$s_{j+m}(k) = s_j(k - m\tau).$

If $\tau \notin \mathbb{Z}$, the different speech signals $s_j(k)$ can be constructed by filtering the clean speech signal $s(k)$ with an interpolation filter. In this section we consider two kinds of noise sources:

- spatio-temporal white noise (diffuse noise), where the noise $n_j(k)$ in the $j$-th microphone signal is temporally white and is uncorrelated with the noise $n_l(k)$ in the $l$-th microphone signal, $E\{n_j(k)\, n_l(k)\} = 0$ for $j \neq l$;
- a localized white noise source $n(k)$ which impinges on the microphone array at an angle $\varphi$, such that the noise signals $n_j(k)$ are delayed versions of each other (analogous to the speech signal) and are correlated with each other. The different noise signals $n_j(k)$ are constructed by filtering the white noise signal $n(k)$ with an interpolation filter.
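The pure-delay signal model can be generated as sketched below (my own illustration; the microphone spacing, the incidence angle and the windowed-sinc interpolation filter are assumptions, chosen only to show the delay formula and the interpolation step).

```python
import numpy as np

def fractional_delay(x, tau, n_taps=81):
    """Delay x by tau samples with a windowed-sinc interpolation filter."""
    n = np.arange(n_taps) - (n_taps - 1) / 2
    h = np.sinc(n - tau) * np.hamming(n_taps)
    h /= h.sum()
    return np.convolve(x, h, mode="same")

d, c, fs = 0.05, 340.0, 8000.0            # mic spacing [m], speed of sound [m/s], sampling rate [Hz]
theta = np.deg2rad(60.0)                  # direction of arrival (illustrative)
tau = d * np.cos(theta) * fs / c          # delay in samples between adjacent microphones
print("delay per adjacent microphone pair:", round(tau, 3), "samples")

rng = np.random.default_rng(6)
s = rng.standard_normal(4000)             # stand-in for the clean source signal
mics = [fractional_delay(s, j * tau) for j in range(4)]   # s_{j+1}(k) = s_j(k - tau)
```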

As already indicated above, the $j$-th microphone signal $m_j(k)$ is a noisy speech signal, which is the sum of $s_j(k)$ and $n_j(k)$. Such a signal is depicted in Figure 8, where the dotted line indicates the region of speech activity (speech/noise detection).

[Figure 8: Clean speech signal and noisy microphone signal; the dotted line marks the region of speech activity.]

Using the signals $m_j(k)$ and $n_j(k)$ we construct the speech data matrix $U_k \in \mathbb{R}^{p \times MN}$ and the noise data matrix $N_k \in \mathbb{R}^{p \times MN}$, as defined in Section 2, where $N$ denotes the length of the filters $A_j$:

$U_k = \begin{bmatrix} U_{1,k} & U_{2,k} & \cdots & U_{M,k} \end{bmatrix}, \qquad N_k = \begin{bmatrix} N_{1,k} & N_{2,k} & \cdots & N_{M,k} \end{bmatrix},$

with

$U_{j,k} = \begin{bmatrix} m_j(k) & m_j(k-1) & \cdots & m_j(k-N+1) \\ m_j(k+1) & m_j(k) & \cdots & m_j(k-N+2) \\ \vdots & & & \vdots \\ m_j(k+p-1) & m_j(k+p-2) & \cdots & m_j(k+p-N) \end{bmatrix}, \qquad N_{j,k} = \begin{bmatrix} n_j(k) & n_j(k-1) & \cdots & n_j(k-N+1) \\ n_j(k+1) & n_j(k) & \cdots & n_j(k-N+2) \\ \vdots & & & \vdots \\ n_j(k+p-1) & n_j(k+p-2) & \cdots & n_j(k+p-N) \end{bmatrix}.$

For constructing the speech data matrix $U_k$ a frame of the microphone signals $m_j(k)$ is used; for constructing the noise data matrix $N_k$ the same frame of the noise signals $n_j(k)$ is used. In practice this is never possible, since the noise data matrix can only be constructed during periods where no speech is present. However, for simulated situations the total noise signal $n_j(k)$ is known. In Section 4 it will be seen that, for stationary noise, constructing the noise data matrix $N_k$ from a different frame than the speech data matrix $U_k$ has no influence on the performance.

Using the generalized singular value decomposition of $U_k$ and $N_k$,

$U_k = U\, \mathrm{diag}\{\sigma_i\}\, X^T, \qquad N_k = V\, \mathrm{diag}\{\eta_i\}\, X^T,$

we can compute the optimal Wiener filter $W_{WF}$,

$W_{WF} = X^{-T}\, \mathrm{diag}\left\{\frac{\sigma_i^2 - \eta_i^2}{\sigma_i^2}\right\}\, X^T.$

By choosing the column of $W_{WF}$ corresponding to the smallest element on the diagonal of $N_k^T N_k\, W_{WF}$ (the sample estimate of the error covariance matrix), we obtain the best estimator. The filter $w_{WF}^{\min}$ consists of the $M$ filters of length $N$,

$w_{WF}^{\min} = \begin{bmatrix} A_1 & A_2 & \cdots & A_M \end{bmatrix}.$

The resulting estimated (enhanced) signal $\hat{s}(k)$ is computed by filtering and summing the microphone signals $m_j(k)$ with the filters $A_j$ over their total length,

$\hat{s}(k) = \sum_{j=1}^{M} A_j(k) * m_j(k).$

For comparison purposes the signal-to-noise ratio (SNR) will be used. The SNR of a signal $x(k)$ is defined as

$\mathrm{SNR}(x) = 10 \log_{10} \frac{\sum_{k \in \text{speech}} x^2(k)}{\sum_{k \in \text{noise}} x^2(k)},$

which is the energy of the signal $x(k)$ during speech periods divided by the energy of the signal $x(k)$ during noise-only periods. Therefore a speech/noise detection is necessary, as indicated by the dotted line in Figure 8.

3.2 Spatio-temporal white noise

In this case the noise $n_j(k)$ in the $j$-th microphone signal is temporally white and is uncorrelated with the noise $n_l(k)$ in the $l$-th microphone signal,

$E\{n_j(k)\, n_l(k)\} = 0, \qquad j \neq l.$
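The filter-and-sum enhancement and the SNR measure above can be expressed compactly as in the following sketch (hypothetical signals and placeholder filters $A_j$, not the report's implementation; the speech/noise mask is assumed to be given by the detection step).

```python
import numpy as np

def enhance(mics, A):
    """Filter each microphone signal m_j with its N-taps filter A_j and sum the outputs."""
    return sum(np.convolve(m, a, mode="same") for m, a in zip(mics, A))

def snr_db(x, speech_mask):
    """10*log10(energy during speech activity / energy during noise-only periods)."""
    return 10.0 * np.log10(np.sum(x[speech_mask] ** 2) / np.sum(x[~speech_mask] ** 2))

# Illustrative usage with hypothetical data and placeholder filters
rng = np.random.default_rng(7)
L = 8000
mask = np.zeros(L, dtype=bool); mask[2000:6000] = True    # assumed speech/noise detection
s = np.where(mask, np.sin(0.1 * np.arange(L)), 0.0)       # toy "speech" signal
mics = [s + 0.3 * rng.standard_normal(L) for _ in range(4)]
A = [np.array([0.25])] * 4                                 # placeholder A_j: plain averaging of the mics
print("SNR noisy mic:", round(snr_db(mics[0], mask), 1), "dB")
print("SNR enhanced :", round(snr_db(enhance(mics, A), mask), 1), "dB")
```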

3.2.1 Broadband source

We discuss the SNR improvement and the frequency behaviour when the broadband speech source is located in front of the microphone array ($\theta = 90°$), and we discuss the spatial filtering properties (beamforming behaviour) for $\theta = 90°$ and for an oblique angle of incidence. The distance between adjacent microphones is $d$. When the speech source is in front of the microphone array ($\theta = 90°$), the microphone signals $m_j(k)$ are

$m_j(k) = s(k) + n_j(k),$

with $n_j(k)$ temporally white noise. The noise power is chosen such that the SNR of the first (noisy) microphone signal $m_1(k)$ is fixed at a prescribed level. We varied the number of channels $M$ and the filter length $N$. Figure 9 shows the SNR of the enhanced signal $\hat{s}(k)$ for different values of $M$ and $N$. As can clearly be seen, the SNR of the enhanced signal $\hat{s}(k)$ improves when the number of channels $M$ increases and when the filter length $N$ increases. However, from a certain filter length on, the SNR improvements are marginal.

Figures 10, 11 and 12 depict the noisy speech signal (first microphone signal $m_1(k)$), the enhanced signals $\hat{s}(k)$ and the amplitude of the frequency response $|H_j(f)|$ of the $M$ filters $A_j$, for an increasing number of channels $M$ and for three filter lengths $N$ per figure, with

$H_j(f) = \sum_{k=1}^{N} A_j(k)\, \exp\left( -j\, \frac{2\pi f (k-1)}{f_s} \right).$

In each case the SNR of the enhanced signal increases with the filter length $N$ and with the number of channels $M$. As already indicated in Section 2 for uncorrelated white noise sources with the same noise power (Case 4), theoretically the filters $A_j$ for the $M$ different channels should be the same. This can be verified from the frequency responses in Figures 10, 11 and 12.

We now compare the spatial filtering properties (beamforming behaviour) when the speech source impinges on the microphone array at $\theta = 90°$ and at an oblique angle. Ideally the spatial beamforming pattern should amplify in the direction of the desired signal. The spatial beamforming pattern $H(f, \theta)$ is a function of both frequency $f$ and angle $\theta$ and can be calculated as

$H(f, \theta) = \sum_{l=1}^{M} H_l(f)\, \exp\left( j\, \frac{2\pi f (l-1) d \cos\theta}{c} \right),$

with $H_l(f)$ defined above. For the spatial patterns below, the number of channels $M$ and the filter length $N$ are kept fixed.
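The spatial beamforming pattern $|H(f,\theta)|$ can be evaluated directly from the per-channel filters; the sketch below (with simple unit-impulse filters as a placeholder, i.e. a plain sum of the microphones, and illustrative array parameters) implements the formula above.

```python
import numpy as np

def spatial_pattern(A, d, c, fs, freqs, thetas):
    """|H(f,theta)| = |sum_l H_l(f) exp(j*2*pi*f*(l-1)*d*cos(theta)/c)| for per-channel filters A (M x N)."""
    M, N = A.shape
    out = np.zeros((len(freqs), len(thetas)))
    for fi, f in enumerate(freqs):
        Hl = A @ np.exp(-2j * np.pi * f * np.arange(N) / fs)                    # per-channel H_l(f)
        steer = np.exp(2j * np.pi * f * np.outer(np.arange(M), d * np.cos(thetas)) / c)
        out[fi] = np.abs(Hl @ steer)
    return out

d, c, fs = 0.05, 340.0, 8000.0
M, N = 4, 16
A = np.zeros((M, N)); A[:, 0] = 1.0 / M          # unit-impulse filters: a plain sum of the microphones
freqs = np.linspace(100.0, 4000.0, 40)
thetas = np.deg2rad(np.linspace(0.0, 180.0, 181))
P = spatial_pattern(A, d, c, fs, freqs, thetas)

fi = int(np.argmin(np.abs(freqs - 1000.0)))
print("peak direction near 1 kHz:", float(np.rad2deg(thetas[np.argmax(P[fi])])), "deg")
```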

[Figure 9: SNR of the enhanced signal $\hat{s}(k)$ for spatio-temporal white noise with the speech source in front of the microphone array ($\theta = 90°$), as a function of the number of channels $M$ and the filter length $N$.]

[Figure 10: Noisy and enhanced signals and frequency responses $|H_j(f)|$ of the filters $A_j$ for spatio-temporal white noise and the speech source in front of the microphone array ($\theta = 90°$), for the smallest number of channels $M$ considered and three filter lengths $N$.]

[Figure 11: Noisy and enhanced signals and frequency responses $|H_j(f)|$ of the filters $A_j$ for spatio-temporal white noise and the speech source in front of the microphone array ($\theta = 90°$), for a larger number of channels $M$ and three filter lengths $N$.]

[Figure 12: Noisy and enhanced signals and frequency responses $|H_j(f)|$ of the filters $A_j$ for spatio-temporal white noise and the speech source in front of the microphone array ($\theta = 90°$), for the largest number of channels $M$ considered and three filter lengths $N$.]

For $\theta = 90°$, Figure 13 depicts the noisy speech signal (first microphone signal $m_1(k)$), the enhanced signal $\hat{s}(k)$, the amplitude of the frequency response $|H_j(f)|$ of the $M$ filters $A_j$, and the amplitude of the spatial beamforming pattern $|H(f, \theta)|$, both over all frequencies $f$ and for one specific frequency. As can be seen from the spatial beamforming pattern at that frequency, the directivity gain is maximal for the direction $\theta = 90°$. This is illustrated even better in Figure 14, where the spatial beamforming pattern is plotted for a set of discrete frequencies. For every frequency the directivity gain is maximal for the direction $\theta = 90°$. However, for low frequencies the spatial selectivity is very poor.

In Figure 15 the speech source impinges on the array at an oblique angle. As can be seen from the spatial beamforming pattern at the selected frequency, the directivity gain is maximal in the direction of the source. This is illustrated even better in Figure 16, where the spatial beamforming pattern is plotted for a set of discrete frequencies. For most frequencies the directivity gain is maximal in the direction of the source. However, for low frequencies the spatial selectivity is again very poor.

[Figure 13: Noisy and enhanced signal, frequency responses $|H_j(f)|$ of the filters $A_j$ and spatial beamforming pattern $|H(f, \theta)|$ for spatio-temporal white noise and the speech source in front of the microphone array ($\theta = 90°$).]


More information

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St Structured Lower Rank Approximation by Moody T. Chu (NCSU) joint with Robert E. Funderlic (NCSU) and Robert J. Plemmons (Wake Forest) March 5, 1998 Outline Introduction: Problem Description Diculties Algebraic

More information

Institute for Advanced Computer Studies. Department of Computer Science. Two Algorithms for the The Ecient Computation of

Institute for Advanced Computer Studies. Department of Computer Science. Two Algorithms for the The Ecient Computation of University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR{98{12 TR{3875 Two Algorithms for the The Ecient Computation of Truncated Pivoted QR Approximations

More information

SEPARATION OF ACOUSTIC SIGNALS USING SELF-ORGANIZING NEURAL NETWORKS. Temujin Gautama & Marc M. Van Hulle

SEPARATION OF ACOUSTIC SIGNALS USING SELF-ORGANIZING NEURAL NETWORKS. Temujin Gautama & Marc M. Van Hulle SEPARATION OF ACOUSTIC SIGNALS USING SELF-ORGANIZING NEURAL NETWORKS Temujin Gautama & Marc M. Van Hulle K.U.Leuven, Laboratorium voor Neuro- en Psychofysiologie Campus Gasthuisberg, Herestraat 49, B-3000

More information

Enhancement of Noisy Speech. State-of-the-Art and Perspectives

Enhancement of Noisy Speech. State-of-the-Art and Perspectives Enhancement of Noisy Speech State-of-the-Art and Perspectives Rainer Martin Institute of Communications Technology (IFN) Technical University of Braunschweig July, 2003 Applications of Noise Reduction

More information

12.4 Known Channel (Water-Filling Solution)

12.4 Known Channel (Water-Filling Solution) ECEn 665: Antennas and Propagation for Wireless Communications 54 2.4 Known Channel (Water-Filling Solution) The channel scenarios we have looed at above represent special cases for which the capacity

More information

Beamforming Techniques Applied in EEG Source Analysis

Beamforming Techniques Applied in EEG Source Analysis Beamforming Techniques Applied in EEG Source Analysis G. Van Hoey 1,, R. Van de Walle 1, B. Vanrumste 1,, M. D Havé,I.Lemahieu 1 and P. Boon 1 Department of Electronics and Information Systems, University

More information

Generalized Sidelobe Canceller and MVDR Power Spectrum Estimation. Bhaskar D Rao University of California, San Diego

Generalized Sidelobe Canceller and MVDR Power Spectrum Estimation. Bhaskar D Rao University of California, San Diego Generalized Sidelobe Canceller and MVDR Power Spectrum Estimation Bhaskar D Rao University of California, San Diego Email: brao@ucsd.edu Reference Books 1. Optimum Array Processing, H. L. Van Trees 2.

More information

(a)

(a) Chapter 8 Subspace Methods 8. Introduction Principal Component Analysis (PCA) is applied to the analysis of time series data. In this context we discuss measures of complexity and subspace methods for

More information

Chapter 9. Linear Predictive Analysis of Speech Signals 语音信号的线性预测分析

Chapter 9. Linear Predictive Analysis of Speech Signals 语音信号的线性预测分析 Chapter 9 Linear Predictive Analysis of Speech Signals 语音信号的线性预测分析 1 LPC Methods LPC methods are the most widely used in speech coding, speech synthesis, speech recognition, speaker recognition and verification

More information

BLIND SOURCE EXTRACTION FOR A COMBINED FIXED AND WIRELESS SENSOR NETWORK

BLIND SOURCE EXTRACTION FOR A COMBINED FIXED AND WIRELESS SENSOR NETWORK 2th European Signal Processing Conference (EUSIPCO 212) Bucharest, Romania, August 27-31, 212 BLIND SOURCE EXTRACTION FOR A COMBINED FIXED AND WIRELESS SENSOR NETWORK Brian Bloemendal Jakob van de Laar

More information

On max-algebraic models for transportation networks

On max-algebraic models for transportation networks K.U.Leuven Department of Electrical Engineering (ESAT) SISTA Technical report 98-00 On max-algebraic models for transportation networks R. de Vries, B. De Schutter, and B. De Moor If you want to cite this

More information

Structured weighted low rank approximation 1

Structured weighted low rank approximation 1 Departement Elektrotechniek ESAT-SISTA/TR 03-04 Structured weighted low rank approximation 1 Mieke Schuermans, Philippe Lemmerling and Sabine Van Huffel 2 January 2003 Accepted for publication in Numerical

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fourth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada Front ice Hall PRENTICE HALL Upper Saddle River, New Jersey 07458 Preface

More information

Spatial Smoothing and Broadband Beamforming. Bhaskar D Rao University of California, San Diego

Spatial Smoothing and Broadband Beamforming. Bhaskar D Rao University of California, San Diego Spatial Smoothing and Broadband Beamforming Bhaskar D Rao University of California, San Diego Email: brao@ucsd.edu Reference Books and Papers 1. Optimum Array Processing, H. L. Van Trees 2. Stoica, P.,

More information

3. ESTIMATION OF SIGNALS USING A LEAST SQUARES TECHNIQUE

3. ESTIMATION OF SIGNALS USING A LEAST SQUARES TECHNIQUE 3. ESTIMATION OF SIGNALS USING A LEAST SQUARES TECHNIQUE 3.0 INTRODUCTION The purpose of this chapter is to introduce estimators shortly. More elaborated courses on System Identification, which are given

More information

A Subspace Approach to Estimation of. Measurements 1. Carlos E. Davila. Electrical Engineering Department, Southern Methodist University

A Subspace Approach to Estimation of. Measurements 1. Carlos E. Davila. Electrical Engineering Department, Southern Methodist University EDICS category SP 1 A Subspace Approach to Estimation of Autoregressive Parameters From Noisy Measurements 1 Carlos E Davila Electrical Engineering Department, Southern Methodist University Dallas, Texas

More information

TinySR. Peter Schmidt-Nielsen. August 27, 2014

TinySR. Peter Schmidt-Nielsen. August 27, 2014 TinySR Peter Schmidt-Nielsen August 27, 2014 Abstract TinySR is a light weight real-time small vocabulary speech recognizer written entirely in portable C. The library fits in a single file (plus header),

More information

SPEECH ANALYSIS AND SYNTHESIS

SPEECH ANALYSIS AND SYNTHESIS 16 Chapter 2 SPEECH ANALYSIS AND SYNTHESIS 2.1 INTRODUCTION: Speech signal analysis is used to characterize the spectral information of an input speech signal. Speech signal analysis [52-53] techniques

More information

DETECTION theory deals primarily with techniques for

DETECTION theory deals primarily with techniques for ADVANCED SIGNAL PROCESSING SE Optimum Detection of Deterministic and Random Signals Stefan Tertinek Graz University of Technology turtle@sbox.tugraz.at Abstract This paper introduces various methods for

More information

Time-domain representations

Time-domain representations Time-domain representations Speech Processing Tom Bäckström Aalto University Fall 2016 Basics of Signal Processing in the Time-domain Time-domain signals Before we can describe speech signals or modelling

More information

Lesson 1. Optimal signalbehandling LTH. September Statistical Digital Signal Processing and Modeling, Hayes, M:

Lesson 1. Optimal signalbehandling LTH. September Statistical Digital Signal Processing and Modeling, Hayes, M: Lesson 1 Optimal Signal Processing Optimal signalbehandling LTH September 2013 Statistical Digital Signal Processing and Modeling, Hayes, M: John Wiley & Sons, 1996. ISBN 0471594318 Nedelko Grbic Mtrl

More information

Comparison of DDE and ETDGE for. Time-Varying Delay Estimation. H. C. So. Department of Electronic Engineering, City University of Hong Kong

Comparison of DDE and ETDGE for. Time-Varying Delay Estimation. H. C. So. Department of Electronic Engineering, City University of Hong Kong Comparison of DDE and ETDGE for Time-Varying Delay Estimation H. C. So Department of Electronic Engineering, City University of Hong Kong Tat Chee Avenue, Kowloon, Hong Kong Email : hcso@ee.cityu.edu.hk

More information

Linear Algebra (Review) Volker Tresp 2018

Linear Algebra (Review) Volker Tresp 2018 Linear Algebra (Review) Volker Tresp 2018 1 Vectors k, M, N are scalars A one-dimensional array c is a column vector. Thus in two dimensions, ( ) c1 c = c 2 c i is the i-th component of c c T = (c 1, c

More information

Direction of Arrival Estimation: Subspace Methods. Bhaskar D Rao University of California, San Diego

Direction of Arrival Estimation: Subspace Methods. Bhaskar D Rao University of California, San Diego Direction of Arrival Estimation: Subspace Methods Bhaskar D Rao University of California, San Diego Email: brao@ucsdedu Reference Books and Papers 1 Optimum Array Processing, H L Van Trees 2 Stoica, P,

More information

Microphone-Array Signal Processing

Microphone-Array Signal Processing Microphone-Array Signal Processing, c Apolinárioi & Campos p. 1/30 Microphone-Array Signal Processing José A. Apolinário Jr. and Marcello L. R. de Campos {apolin},{mcampos}@ieee.org IME Lab. Processamento

More information

Block-row Hankel Weighted Low Rank Approximation 1

Block-row Hankel Weighted Low Rank Approximation 1 Katholieke Universiteit Leuven Departement Elektrotechniek ESAT-SISTA/TR 03-105 Block-row Hankel Weighted Low Rank Approximation 1 Mieke Schuermans, Philippe Lemmerling and Sabine Van Huffel 2 July 2003

More information

POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS

POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS Russell H. Lambert RF and Advanced Mixed Signal Unit Broadcom Pasadena, CA USA russ@broadcom.com Marcel

More information

FORSCHUNGSZENTRUM JÜLICH GmbH Zentralinstitut für Angewandte Mathematik D Jülich, Tel. (02461)

FORSCHUNGSZENTRUM JÜLICH GmbH Zentralinstitut für Angewandte Mathematik D Jülich, Tel. (02461) FORSCHUNGSZENTRUM JÜLICH GmbH Zentralinstitut für Angewandte Mathematik D-52425 Jülich, Tel. (2461) 61-642 Interner Bericht Temporal and Spatial Prewhitening of Multi-Channel MEG Data Roland Beucker, Heidi

More information

5 Linear Algebra and Inverse Problem

5 Linear Algebra and Inverse Problem 5 Linear Algebra and Inverse Problem 5.1 Introduction Direct problem ( Forward problem) is to find field quantities satisfying Governing equations, Boundary conditions, Initial conditions. The direct problem

More information

Dominant Pole Localization of FxLMS Adaptation Process in Active Noise Control

Dominant Pole Localization of FxLMS Adaptation Process in Active Noise Control APSIPA ASC 20 Xi an Dominant Pole Localization of FxLMS Adaptation Process in Active Noise Control Iman Tabatabaei Ardekani, Waleed H. Abdulla The University of Auckland, Private Bag 9209, Auckland, New

More information

Acoustic Source Separation with Microphone Arrays CCNY

Acoustic Source Separation with Microphone Arrays CCNY Acoustic Source Separation with Microphone Arrays Lucas C. Parra Biomedical Engineering Department City College of New York CCNY Craig Fancourt Clay Spence Chris Alvino Montreal Workshop, Nov 6, 2004 Blind

More information

Signal Modeling Techniques in Speech Recognition. Hassan A. Kingravi

Signal Modeling Techniques in Speech Recognition. Hassan A. Kingravi Signal Modeling Techniques in Speech Recognition Hassan A. Kingravi Outline Introduction Spectral Shaping Spectral Analysis Parameter Transforms Statistical Modeling Discussion Conclusions 1: Introduction

More information

Matrix factorization and minimal state space realization in the max-plus algebra

Matrix factorization and minimal state space realization in the max-plus algebra KULeuven Department of Electrical Engineering (ESAT) SISTA Technical report 96-69 Matrix factorization and minimal state space realization in the max-plus algebra B De Schutter and B De Moor If you want

More information

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr The discrete algebraic Riccati equation and linear matrix inequality nton. Stoorvogel y Department of Mathematics and Computing Science Eindhoven Univ. of Technology P.O. ox 53, 56 M Eindhoven The Netherlands

More information

ENGINEERING TRIPOS PART IIB: Technical Milestone Report

ENGINEERING TRIPOS PART IIB: Technical Milestone Report ENGINEERING TRIPOS PART IIB: Technical Milestone Report Statistical enhancement of multichannel audio from transcription turntables Yinhong Liu Supervisor: Prof. Simon Godsill 1 Abstract This milestone

More information

Principal Component Analysis

Principal Component Analysis Principal Component Analysis November 24, 2015 From data to operators Given is data set X consisting of N vectors x n R D. Without loss of generality, assume x n = 0 (subtract mean). Let P be D N matrix

More information

Source localization and separation for binaural hearing aids

Source localization and separation for binaural hearing aids Source localization and separation for binaural hearing aids Mehdi Zohourian, Gerald Enzner, Rainer Martin Listen Workshop, July 218 Institute of Communication Acoustics Outline 1 Introduction 2 Binaural

More information

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice 3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is

More information

Filter Banks with Variable System Delay. Georgia Institute of Technology. Atlanta, GA Abstract

Filter Banks with Variable System Delay. Georgia Institute of Technology. Atlanta, GA Abstract A General Formulation for Modulated Perfect Reconstruction Filter Banks with Variable System Delay Gerald Schuller and Mark J T Smith Digital Signal Processing Laboratory School of Electrical Engineering

More information

Katholieke Universiteit Leuven

Katholieke Universiteit Leuven Katholieke Universiteit Leuven Departement Elektrotechniek ESA-SISA/R 24-167 An instrumental variable method for adaptive feedback cancellation in hearing aids 1 Ann Spriet 2, Ian Proudler 3,Marc Moonen

More information

Robust extraction of specific signals with temporal structure

Robust extraction of specific signals with temporal structure Robust extraction of specific signals with temporal structure Zhi-Lin Zhang, Zhang Yi Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science

More information

Co-prime Arrays with Reduced Sensors (CARS) for Direction-of-Arrival Estimation

Co-prime Arrays with Reduced Sensors (CARS) for Direction-of-Arrival Estimation Co-prime Arrays with Reduced Sensors (CARS) for Direction-of-Arrival Estimation Mingyang Chen 1,LuGan and Wenwu Wang 1 1 Department of Electrical and Electronic Engineering, University of Surrey, U.K.

More information

Tutorial on Blind Source Separation and Independent Component Analysis

Tutorial on Blind Source Separation and Independent Component Analysis Tutorial on Blind Source Separation and Independent Component Analysis Lucas Parra Adaptive Image & Signal Processing Group Sarnoff Corporation February 09, 2002 Linear Mixtures... problem statement...

More information

BLOCK-BASED MULTICHANNEL TRANSFORM-DOMAIN ADAPTIVE FILTERING

BLOCK-BASED MULTICHANNEL TRANSFORM-DOMAIN ADAPTIVE FILTERING BLOCK-BASED MULTICHANNEL TRANSFORM-DOMAIN ADAPTIVE FILTERING Sascha Spors, Herbert Buchner, and Karim Helwani Deutsche Telekom Laboratories, Technische Universität Berlin, Ernst-Reuter-Platz 7, 10587 Berlin,

More information

Parallel Singular Value Decomposition. Jiaxing Tan

Parallel Singular Value Decomposition. Jiaxing Tan Parallel Singular Value Decomposition Jiaxing Tan Outline What is SVD? How to calculate SVD? How to parallelize SVD? Future Work What is SVD? Matrix Decomposition Eigen Decomposition A (non-zero) vector

More information

Economics 472. Lecture 10. where we will refer to y t as a m-vector of endogenous variables, x t as a q-vector of exogenous variables,

Economics 472. Lecture 10. where we will refer to y t as a m-vector of endogenous variables, x t as a q-vector of exogenous variables, University of Illinois Fall 998 Department of Economics Roger Koenker Economics 472 Lecture Introduction to Dynamic Simultaneous Equation Models In this lecture we will introduce some simple dynamic simultaneous

More information

926 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 3, MARCH Monica Nicoli, Member, IEEE, and Umberto Spagnolini, Senior Member, IEEE (1)

926 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 3, MARCH Monica Nicoli, Member, IEEE, and Umberto Spagnolini, Senior Member, IEEE (1) 926 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 3, MARCH 2005 Reduced-Rank Channel Estimation for Time-Slotted Mobile Communication Systems Monica Nicoli, Member, IEEE, and Umberto Spagnolini,

More information

HST.582J/6.555J/16.456J

HST.582J/6.555J/16.456J Blind Source Separation: PCA & ICA HST.582J/6.555J/16.456J Gari D. Clifford gari [at] mit. edu http://www.mit.edu/~gari G. D. Clifford 2005-2009 What is BSS? Assume an observation (signal) is a linear

More information

EIGENFILTERS FOR SIGNAL CANCELLATION. Sunil Bharitkar and Chris Kyriakakis

EIGENFILTERS FOR SIGNAL CANCELLATION. Sunil Bharitkar and Chris Kyriakakis EIGENFILTERS FOR SIGNAL CANCELLATION Sunil Bharitkar and Chris Kyriakakis Immersive Audio Laboratory University of Southern California Los Angeles. CA 9. USA Phone:+1-13-7- Fax:+1-13-7-51, Email:ckyriak@imsc.edu.edu,bharitka@sipi.usc.edu

More information

Robust Implementation of the MUSIC algorithm Zhang, Johan Xi; Christensen, Mads Græsbøll; Dahl, Joachim; Jensen, Søren Holdt; Moonen, Marc

Robust Implementation of the MUSIC algorithm Zhang, Johan Xi; Christensen, Mads Græsbøll; Dahl, Joachim; Jensen, Søren Holdt; Moonen, Marc Aalborg Universitet Robust Implementation of the MUSIC algorithm Zhang, Johan Xi; Christensen, Mads Græsbøll; Dahl, Joachim; Jensen, Søren Holdt; Moonen, Marc Published in: I E E E International Conference

More information

c 21 w 2 c 22 s 2 c 12

c 21 w 2 c 22 s 2 c 12 Blind Adaptive Cross-Pole Interference Cancellation Using Fractionally-Spaced CMA Wonzoo Chung, John Treichler y, and C. Richard Johnson, Jr. wonzoo(johnson)@ee.cornell.edu y jrt@appsig.com School of Elec.

More information

"Robust Automatic Speech Recognition through on-line Semi Blind Source Extraction"

Robust Automatic Speech Recognition through on-line Semi Blind Source Extraction "Robust Automatic Speech Recognition through on-line Semi Blind Source Extraction" Francesco Nesta, Marco Matassoni {nesta, matassoni}@fbk.eu Fondazione Bruno Kessler-Irst, Trento (ITALY) For contacts:

More information

APPLICATION OF MVDR BEAMFORMING TO SPHERICAL ARRAYS

APPLICATION OF MVDR BEAMFORMING TO SPHERICAL ARRAYS AMBISONICS SYMPOSIUM 29 June 2-27, Graz APPLICATION OF MVDR BEAMFORMING TO SPHERICAL ARRAYS Anton Schlesinger 1, Marinus M. Boone 2 1 University of Technology Delft, The Netherlands (a.schlesinger@tudelft.nl)

More information

Statistical signal processing

Statistical signal processing Statistical signal processing Short overview of the fundamentals Outline Random variables Random processes Stationarity Ergodicity Spectral analysis Random variable and processes Intuition: A random variable

More information

Elec4621: Advanced Digital Signal Processing Chapter 8: Wiener and Adaptive Filtering

Elec4621: Advanced Digital Signal Processing Chapter 8: Wiener and Adaptive Filtering Elec462: Advanced Digital Signal Processing Chapter 8: Wiener and Adaptive Filtering Dr D S Taubman May 2, 2 Wiener Filtering A Wiener lter is one which provides the Minimum Mean Squared Error (MMSE) prediction,

More information

3.2 Complex Sinusoids and Frequency Response of LTI Systems

3.2 Complex Sinusoids and Frequency Response of LTI Systems 3. Introduction. A signal can be represented as a weighted superposition of complex sinusoids. x(t) or x[n]. LTI system: LTI System Output = A weighted superposition of the system response to each complex

More information

No. of dimensions 1. No. of centers

No. of dimensions 1. No. of centers Contents 8.6 Course of dimensionality............................ 15 8.7 Computational aspects of linear estimators.................. 15 8.7.1 Diagonalization of circulant andblock-circulant matrices......

More information

Advanced Digital Signal Processing -Introduction

Advanced Digital Signal Processing -Introduction Advanced Digital Signal Processing -Introduction LECTURE-2 1 AP9211- ADVANCED DIGITAL SIGNAL PROCESSING UNIT I DISCRETE RANDOM SIGNAL PROCESSING Discrete Random Processes- Ensemble Averages, Stationary

More information

Estimation of the Optimum Rotational Parameter for the Fractional Fourier Transform Using Domain Decomposition

Estimation of the Optimum Rotational Parameter for the Fractional Fourier Transform Using Domain Decomposition Estimation of the Optimum Rotational Parameter for the Fractional Fourier Transform Using Domain Decomposition Seema Sud 1 1 The Aerospace Corporation, 4851 Stonecroft Blvd. Chantilly, VA 20151 Abstract

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Elementary Linear Algebra

Elementary Linear Algebra Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We

More information

EFFECTS OF ILL-CONDITIONED DATA ON LEAST SQUARES ADAPTIVE FILTERS. Gary A. Ybarra and S.T. Alexander

EFFECTS OF ILL-CONDITIONED DATA ON LEAST SQUARES ADAPTIVE FILTERS. Gary A. Ybarra and S.T. Alexander EFFECTS OF ILL-CONDITIONED DATA ON LEAST SQUARES ADAPTIVE FILTERS Gary A. Ybarra and S.T. Alexander Center for Communications and Signal Processing Electrical and Computer Engineering Department North

More information

Adaptive Systems Homework Assignment 1

Adaptive Systems Homework Assignment 1 Signal Processing and Speech Communication Lab. Graz University of Technology Adaptive Systems Homework Assignment 1 Name(s) Matr.No(s). The analytical part of your homework (your calculation sheets) as

More information

These outputs can be written in a more convenient form: with y(i) = Hc m (i) n(i) y(i) = (y(i); ; y K (i)) T ; c m (i) = (c m (i); ; c m K(i)) T and n

These outputs can be written in a more convenient form: with y(i) = Hc m (i) n(i) y(i) = (y(i); ; y K (i)) T ; c m (i) = (c m (i); ; c m K(i)) T and n Binary Codes for synchronous DS-CDMA Stefan Bruck, Ulrich Sorger Institute for Network- and Signal Theory Darmstadt University of Technology Merckstr. 25, 6428 Darmstadt, Germany Tel.: 49 65 629, Fax:

More information

Introduction Reduced-rank ltering and estimation have been proposed for numerous signal processing applications such as array processing, radar, model

Introduction Reduced-rank ltering and estimation have been proposed for numerous signal processing applications such as array processing, radar, model Performance of Reduced-Rank Linear Interference Suppression Michael L. Honig and Weimin Xiao Dept. of Electrical & Computer Engineering Northwestern University Evanston, IL 6008 January 3, 00 Abstract

More information

Covariance smoothing and consistent Wiener filtering for artifact reduction in audio source separation

Covariance smoothing and consistent Wiener filtering for artifact reduction in audio source separation Covariance smoothing and consistent Wiener filtering for artifact reduction in audio source separation Emmanuel Vincent METISS Team Inria Rennes - Bretagne Atlantique E. Vincent (Inria) Artifact reduction

More information

Joint Optimum Bitwise Decomposition of any. Memoryless Source to be Sent over a BSC. Ecole Nationale Superieure des Telecommunications URA CNRS 820

Joint Optimum Bitwise Decomposition of any. Memoryless Source to be Sent over a BSC. Ecole Nationale Superieure des Telecommunications URA CNRS 820 Joint Optimum Bitwise Decomposition of any Memoryless Source to be Sent over a BSC Seyed Bahram Zahir Azami, Pierre Duhamel 2 and Olivier Rioul 3 cole Nationale Superieure des Telecommunications URA CNRS

More information

Error correcting least-squares Subspace algorithm for blind identication and equalization

Error correcting least-squares Subspace algorithm for blind identication and equalization Signal Processing 81 (2001) 2069 2087 www.elsevier.com/locate/sigpro Error correcting least-squares Subspace algorithm for blind identication and equalization Balaji Sampath, K.J. Ray Liu, Y. Goerey Li

More information

MMSE Equalizer Design

MMSE Equalizer Design MMSE Equalizer Design Phil Schniter March 6, 2008 [k] a[m] P a [k] g[k] m[k] h[k] + ṽ[k] q[k] y [k] P y[m] For a trivial channel (i.e., h[k] = δ[k]), e kno that the use of square-root raisedcosine (SRRC)

More information

ECE534, Spring 2018: Solutions for Problem Set #5

ECE534, Spring 2018: Solutions for Problem Set #5 ECE534, Spring 08: s for Problem Set #5 Mean Value and Autocorrelation Functions Consider a random process X(t) such that (i) X(t) ± (ii) The number of zero crossings, N(t), in the interval (0, t) is described

More information

DOA Estimation using MUSIC and Root MUSIC Methods

DOA Estimation using MUSIC and Root MUSIC Methods DOA Estimation using MUSIC and Root MUSIC Methods EE602 Statistical signal Processing 4/13/2009 Presented By: Chhavipreet Singh(Y515) Siddharth Sahoo(Y5827447) 2 Table of Contents 1 Introduction... 3 2

More information

Basic Principles of Video Coding

Basic Principles of Video Coding Basic Principles of Video Coding Introduction Categories of Video Coding Schemes Information Theory Overview of Video Coding Techniques Predictive coding Transform coding Quantization Entropy coding Motion

More information

Chapter 7 Interconnected Systems and Feedback: Well-Posedness, Stability, and Performance 7. Introduction Feedback control is a powerful approach to o

Chapter 7 Interconnected Systems and Feedback: Well-Posedness, Stability, and Performance 7. Introduction Feedback control is a powerful approach to o Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology c Chapter 7 Interconnected

More information

L26: Advanced dimensionality reduction

L26: Advanced dimensionality reduction L26: Advanced dimensionality reduction The snapshot CA approach Oriented rincipal Components Analysis Non-linear dimensionality reduction (manifold learning) ISOMA Locally Linear Embedding CSCE 666 attern

More information

Master-Slave Synchronization using. Dynamic Output Feedback. Kardinaal Mercierlaan 94, B-3001 Leuven (Heverlee), Belgium

Master-Slave Synchronization using. Dynamic Output Feedback. Kardinaal Mercierlaan 94, B-3001 Leuven (Heverlee), Belgium Master-Slave Synchronization using Dynamic Output Feedback J.A.K. Suykens 1, P.F. Curran and L.O. Chua 1 Katholieke Universiteit Leuven, Dept. of Electr. Eng., ESAT-SISTA Kardinaal Mercierlaan 94, B-1

More information

Speech Signal Representations

Speech Signal Representations Speech Signal Representations Berlin Chen 2003 References: 1. X. Huang et. al., Spoken Language Processing, Chapters 5, 6 2. J. R. Deller et. al., Discrete-Time Processing of Speech Signals, Chapters 4-6

More information