Assessment of the efficiency of the LMS algorithm based on spectral information
(Invited Paper)

Aaron Flores and Bernard Widrow
ISL, Department of Electrical Engineering, Stanford University, Stanford CA, USA

Abstract: The LMS algorithm is used to find optimal minimum mean-squared error (MSE) solutions for a wide variety of problems. Unfortunately, its convergence speed depends heavily on its initial conditions when the autocorrelation matrix $R$ of its input vector has a high eigenvalue spread. In many applications, such as system identification or channel equalization, $R$ is Toeplitz. In this paper we exploit the Toeplitz structure of $R$ to show that when the weight vector is initialized to zero, the convergence speed of LMS is related to the similarity between the input PSD and the power spectrum of the optimum solution.

I. INTRODUCTION

The LMS algorithm is widely used to find minimum mean-squared error (MSE) solutions for a variety of linear estimation problems, and its efficiency has been studied extensively over the last decades [1]-[18]. The convergence speed of LMS depends on two factors: the eigenvalue distribution of the input autocorrelation matrix $R$, and the chosen initial condition. Modes corresponding to small eigenvalues converge much more slowly than those corresponding to large eigenvalues, and the initialization of the algorithm determines how much excitation each of these modes receives. In practice, it is not known how the chosen initial conditions excite each of the modes, making the speed of convergence difficult to predict. Our analysis of the convergence speed is concerned with applications where the input vector comes from a tapped delay line, which induces a Toeplitz structure in the $R$ matrix. Using the Toeplitz nature of $R$, we show that the speed of convergence of the LMS algorithm can be estimated from its input power spectral density and the spectrum of the difference between the initial weight vector and the optimum solution.
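To make the role of the eigenvalue spread concrete, the following sketch (our illustration, not part of the paper; the AR(1)-style autocorrelation and filter length are arbitrary choices) builds a Toeplitz autocorrelation matrix and computes its eigenvalue spread with NumPy:

```python
import numpy as np

def toeplitz_autocorr(phi):
    """Build the L x L Toeplitz matrix R[k, l] = phi_xx[|k - l|]."""
    L = len(phi)
    idx = np.abs(np.subtract.outer(np.arange(L), np.arange(L)))
    return phi[idx]

# AR(1)-style autocorrelation phi_xx[n] = a**|n|; a close to 1 makes the
# input strongly low-pass and the eigenvalue spread of R large.
a = 0.9
phi = a ** np.arange(8)
R = toeplitz_autocorr(phi)
eigs = np.linalg.eigvalsh(R)
spread = eigs.max() / eigs.min()
```

A large spread means the slowest LMS mode decays far more slowly than the fastest, which is exactly the regime in which the initialization matters.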
The speed of convergence of LMS is assessed qualitatively by comparing it to its ideal counterpart, the LMS/Newton algorithm, which is often used as a benchmark for adaptive algorithms [7], [8]. In Section II we describe both the LMS and LMS/Newton algorithms and derive approximations to their learning curves on which our transient analysis is based. In Section III we define the performance metric used to evaluate speed of convergence, and in Section IV we show how that metric can be estimated from spectral information. We illustrate our result with simulations in Section V and summarize our conclusions and future work in Section VI.

II. THE LMS AND LMS/NEWTON ALGORITHMS

Adaptive algorithms such as LMS and LMS/Newton are used to solve linear estimation problems like the one depicted in Fig. 1, where the input vector $x_k = [x_{1k}\ x_{2k}\ \ldots\ x_{Lk}]^T$ and the desired response $d_k$ are jointly stationary random processes, $w_k = [w_{1k}\ w_{2k}\ \ldots\ w_{Lk}]^T$ is the weight vector, $y_k = x_k^T w_k$ is the output, and $\varepsilon_k = d_k - y_k$ is the error. The Mean Square Error (MSE) is defined as $\xi_k = E[\varepsilon_k^2]$ and is a quadratic function of the weight vector. The optimal weight vector that minimizes $\xi_k$ is given by $w^* = R^{-1} p$, where $R = E[x_k x_k^T]$ is the input autocorrelation matrix (assumed to be full rank) and $p = E[d_k x_k]$ is the cross-correlation vector. The minimum MSE obtained using $w^*$ is denoted by $\xi_{\min}$.

Fig. 1. Adaptive linear combiner (input signal vector $x_k$, weights $w_{1k} \ldots w_{Lk}$, desired response $d_k$, output $y_k$, error $\varepsilon_k$).

A. The LMS algorithm

Often in practice $R^{-1} p$ cannot be calculated due to lack of knowledge of the statistics $R$ and $p$. However, when samples of $x_k$ and $d_k$ are available, they can be used to iteratively adjust the weight vector to obtain an approximation of $w^*$. The simplest and most widely used algorithm for this is LMS [1]. It performs instantaneous gradient-descent adaptation of the weight vector:

$$w_{k+1} = w_k + 2\mu\,\varepsilon_k x_k. \qquad (1)$$

The step-size parameter is $\mu$, and the initial weight vector $w_0$ is arbitrarily set by the user. The MSE sequence $\xi_k$ corresponding
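As a concrete reference for update (1), here is a minimal LMS sketch in NumPy (our own illustration; the plant, noise level, and parameters are assumptions chosen only to show the update converging on a system-identification task):

```python
import numpy as np

def lms(x, d, L, mu):
    """Plain LMS, w_{k+1} = w_k + 2*mu*eps_k*x_k, with x_k from a tapped delay line."""
    w = np.zeros(L)                  # zero initial weight vector, as in the paper
    xk = np.zeros(L)
    eps = np.zeros(len(x))
    for k in range(len(x)):
        xk = np.roll(xk, 1)          # shift the delay line
        xk[0] = x[k]
        eps[k] = d[k] - xk @ w       # error: desired response minus filter output
        w = w + 2 * mu * eps[k] * xk # instantaneous gradient-descent step
    return w, eps

# Illustrative use: identify a short FIR plant from its noisy output.
rng = np.random.default_rng(0)
plant = np.array([1.0, 0.5, -0.25])
x = rng.standard_normal(5000)
d = np.convolve(x, plant)[: len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, eps = lms(x, d, L=3, mu=0.01)
```

With this white input the eigenvalue spread of $R$ is one, so the converged `w_hat` approaches the plant regardless of initialization; the colored-input case is where the analysis of the following sections becomes relevant.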
to the sequence of adapted weight vectors $w_k$ is commonly known as the learning curve. Next we derive an approximation for $\xi_k$ that is the basis of our speed-of-convergence analysis.

Since $R$ is symmetric, we can diagonalize it as $R = Q \Lambda Q^T$, where $\Lambda$ is a diagonal matrix of the eigenvalues $\lambda_i$ of $R$, and $Q$ is a real orthonormal matrix with the corresponding eigenvectors of $R$ as columns. Let $v_k = w_k - w^*$ be the weight-vector deviation from the optimal solution and $v'_k = Q^T v_k$. Let $F_k$ be the vector obtained from the diagonal of $E[v'_k v'^T_k]$, and $\lambda = [\lambda_1\ \lambda_2\ \ldots\ \lambda_L]^T$. Assuming the random processes $\{x_k, d_k\}$ to be independent over time, it can be shown [3] that the MSE at time $k$ can be expressed as

$$\xi_k = \xi_{\min} + \lambda^T F_k. \qquad (2)$$

If we further assume $x_k$ to be Gaussian, it was shown in [5] and [3] that $F_k$ obeys the following recursion:

$$F_{k+1} = \left(I - 4\mu\Lambda + 8\mu^2\Lambda^2 + 4\mu^2\lambda\lambda^T\right) F_k + 4\mu^2 \xi_{\min} \lambda, \qquad (3)$$

where $F_k$ is shown in [5] to converge if $\mu < 1/\left(3\,\mathrm{Tr}(R)\right)$. This condition on $\mu$ allows us to approximate (3) by

$$F_{k+1} \approx (I - 4\mu\Lambda) F_k + 4\mu^2 \xi_{\min} \lambda. \qquad (4)$$

Denoting by $\mathbf{1} \in \mathbb{R}^L$ a vector of ones and iterating (4),

$$F_k \approx (I - 4\mu\Lambda)^k (F_0 - \mu\xi_{\min}\mathbf{1}) + \mu\xi_{\min}\mathbf{1}. \qquad (5)$$

Substituting (5) in (2), we obtain the following approximation for the learning curve:

$$\xi_k \approx \xi_\infty + \lambda^T (I - 4\mu\Lambda)^k (F_0 - \mu\xi_{\min}\mathbf{1}), \qquad (6)$$

where $\xi_\infty = \lim_{k\to\infty} \xi_k \approx \xi_{\min}\left(1 + \mu\,\mathrm{Tr}(R)\right)$.

B. The LMS/Newton algorithm

The LMS/Newton algorithm [6] is an ideal variant of the LMS algorithm that uses $R^{-1}$ to whiten its input. Although most of the time it cannot be implemented in practice due to lack of knowledge of $R$, it is of theoretical importance as a benchmark for adaptive algorithms [7], [8]. The LMS/Newton algorithm is

$$w_{k+1} = w_k + 2\mu\lambda_{avg} R^{-1} \varepsilon_k x_k, \qquad (7)$$

where $\lambda_{avg} = \frac{1}{L}\sum_{i=1}^{L} \lambda_i$. It is well known [6] that the LMS/Newton algorithm is equivalent to the LMS algorithm with learning rate $\mu\lambda_{avg}$ and an input previously whitened and normalized such that $\Lambda = I$.
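To see how (6) and (8) behave, the sketch below (our own illustration; the $2 \times 2$ matrix, initial deviation, and parameters are arbitrary assumptions) evaluates both approximate learning curves directly from the eigendecomposition of $R$:

```python
import numpy as np

def learning_curves(R, v0, mu, xi_min, n_iter):
    """Approximate LMS (6) and LMS/Newton (8) learning curves."""
    lam, Q = np.linalg.eigh(R)
    F0 = (Q.T @ v0) ** 2                       # diagonal of v0' v0'^T
    xi_inf = xi_min * (1 + mu * np.trace(R))   # common asymptotic MSE
    k = np.arange(n_iter)[:, None]
    # LMS: a sum of geometric modes with ratios (1 - 4*mu*lam_i).
    xi_lms = xi_inf + ((1 - 4 * mu * lam) ** k) @ (lam * (F0 - mu * xi_min))
    # LMS/Newton: a single geometric mode with ratio (1 - 4*mu*lam_avg).
    lam_avg = lam.mean()
    xi_newt = xi_inf + ((1 - 4 * mu * lam_avg) ** k[:, 0]) * (lam @ (F0 - mu * xi_min))
    return xi_lms, xi_newt

R = np.array([[1.0, 0.9], [0.9, 1.0]])         # eigenvalues 0.1 and 1.9
v0 = np.array([1.0, 0.0])                      # initial deviation from w*
xi_lms, xi_newt = learning_curves(R, v0, mu=0.02, xi_min=1e-3, n_iter=2000)
```

Both curves start from the same $\xi_0$ and settle at the same $\xi_\infty$; they differ only in how the transient is distributed across modes, which is what the $J$ metric of the next section quantifies.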
Therefore, assuming the same independence and Gaussian statistics of $x_k$ and $d_k$ as with LMS, we can replace $\mu$ by $\mu\lambda_{avg}$ and $\Lambda$ by $I$ in (6) to obtain the following approximation for the LMS/Newton learning curve:

$$\xi_k \approx \xi_\infty + (1 - 4\mu\lambda_{avg})^k\, \lambda^T (F_0 - \mu\xi_{\min}\mathbf{1}), \qquad (8)$$

where the asymptotic MSE $\xi_\infty$ is the same as for LMS.

III. TRANSIENT EFFICIENCY

Examining (6) and (8), we see that the learning curve for LMS is a sum of geometric sequences (modes) with ratios $1 - 4\mu\lambda_i$, whereas for LMS/Newton it consists of a single geometric sequence with ratio $1 - 4\mu\lambda_{avg}$. If all the eigenvalues of $R$ are equal, the learning curves for LMS and LMS/Newton are the same. Generally, the eigenvalues of $R$ are not equal, and LMS exhibits multiple modes, some faster than LMS/Newton and some slower, as depicted in Fig. 2.

Fig. 2. LMS and LMS/Newton modes (LMS fast mode, LMS slow mode, and LMS/Newton average mode).

To evaluate the speed of convergence of a learning curve we use the natural measure suggested in [5],

$$J = \sum_{k=0}^{\infty} (\xi_k - \xi_\infty). \qquad (9)$$

Small values of $J$ indicate fast convergence and large values indicate slow convergence. Using (6) and (8) to obtain $J$ for LMS and LMS/Newton respectively, we get

$$J_{LMS} \approx \frac{\mathbf{1}^T F_0}{4\mu} - \frac{L\xi_{\min}}{4} = \frac{v_0^T v_0}{4\mu} - \frac{L\xi_{\min}}{4}, \qquad (10)$$

$$J_{LMS/Newton} \approx \frac{\lambda^T F_0}{4\mu\lambda_{avg}} - \frac{L\xi_{\min}}{4} = \frac{v_0^T R v_0}{4\mu\lambda_{avg}} - \frac{L\xi_{\min}}{4}. \qquad (11)$$

To compare the speed of convergence of LMS to that of LMS/Newton, we define what we call the Transient Efficiency:

$$\text{Transient Efficiency} = \frac{J_{LMS/Newton}}{J_{LMS}}. \qquad (12)$$

If the Transient Efficiency is bigger/smaller than one, then LMS performs better/worse than LMS/Newton in the sense of the $J$ metric. Assuming that the initial weight vector $w_0$ is far enough from $w^*$ that $\frac{v_0^T v_0}{4\mu} \gg \frac{L\xi_{\min}}{4}$ and $\frac{v_0^T R v_0}{4\mu\lambda_{avg}} \gg \frac{L\xi_{\min}}{4}$, we obtain from (10), (11) and (12)

$$\text{Transient Efficiency} \approx \frac{v_0^T R v_0}{\lambda_{avg}\, v_0^T v_0}. \qquad (13)$$
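Expression (13) is easy to evaluate directly. The sketch below (illustrative, with an arbitrary positive-definite Toeplitz $R$) also checks the intuition that a deviation aligned with a large-eigenvalue direction gives an efficiency above one, while one aligned with a small-eigenvalue direction gives an efficiency below one:

```python
import numpy as np

def transient_efficiency(R, v0):
    """Transient efficiency from (13): v0^T R v0 / (lam_avg * v0^T v0)."""
    lam_avg = np.trace(R) / R.shape[0]
    return float(v0 @ R @ v0) / (lam_avg * float(v0 @ v0))

R = np.array([[1.0, 0.8, 0.6],
              [0.8, 1.0, 0.8],
              [0.6, 0.8, 1.0]])
lam, Q = np.linalg.eigh(R)                    # eigenvalues in ascending order
te_fast = transient_efficiency(R, Q[:, -1])   # v0 along the largest eigenvalue
te_slow = transient_efficiency(R, Q[:, 0])    # v0 along the smallest eigenvalue
```

For a unit-norm $v_0$ along an eigenvector, (13) reduces to $\lambda_i / \lambda_{avg}$, so the efficiency is simply the ratio of the excited eigenvalue to the average one.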
Since (13) is invariant to scaling of $v_0$ and $R$, we can assume without loss of generality $\|v_0\| = 1$ and $\lambda_{avg} = 1$, obtaining the following compact expression:

$$\text{Transient Efficiency} \approx v_0^T R v_0. \qquad (14)$$

This is the first contribution of this paper, and it will be used to obtain an approximation of the Transient Efficiency in terms of spectra in the next section.

IV. TRANSIENT EFFICIENCY IN TERMS OF SPECTRAL INFORMATION

In this section we show that, when $R$ is Toeplitz, the Transient Efficiency can be expressed in terms of the Fourier spectrum of $v_0$ and the input power spectral density. The Toeplitz structure of $R$ arises in applications where the input vector comes from a Tapped Delay Line (TDL) as shown in Fig. 3, i.e. $x_k = [x_k\ x_{k-1}\ \ldots\ x_{k-L+1}]^T$, where $x_k$ is a stationary scalar random process with autocorrelation sequence $\phi_{xx}[n] = E[x_k x_{k+n}]$. The input autocorrelation matrix is therefore given by $R_{k,l} = \phi_{xx}[k-l]$.

Fig. 3. Tapped delay line (delays $z^{-1}$, weights $w_1 \ldots w_L$, desired response, output, error).

We define the Input Power Spectral Density vector $\Phi_{xx} \in \mathbb{R}^M$ as the uniformly spaced samples of the DTFT of the $N$-point truncation of the input autocorrelation function $\phi_{xx}[n]$, i.e.

$$\Phi_{xx}[m] = \sum_{n=-N}^{N} \phi_{xx}[n]\, e^{-j\frac{2\pi}{M}mn}, \qquad m = 0, 1, \ldots, M-1, \qquad (15)$$

where $M \geq N + L$ and $N \geq L$. In order to show how $\Phi_{xx}$ is related to the Transient Efficiency, the following lemma will be needed.

Lemma 1 (Spectral Factorization of R): Let $\Psi$ be an $M \times M$ diagonal matrix with diagonal entries $\Psi_{m,m} = \Phi_{xx}[m]$, and let $U$ be an $L \times M$ matrix with elements $U_{l,m} = \frac{1}{\sqrt{M}} e^{j\frac{2\pi}{M}ml}$, $0 \leq l \leq L-1$, $0 \leq m \leq M-1$. Then $R$ can be factored in the following way:

$$R = U \Psi U^*. \qquad (16)$$

Proof: Let $T = U \Psi U^*$. Then

$$T_{k,l} = \sum_{m=0}^{M-1} \frac{1}{\sqrt{M}} e^{j\frac{2\pi}{M}km}\, \Psi_{m,m}\, \frac{1}{\sqrt{M}} e^{-j\frac{2\pi}{M}lm} \qquad (17)$$

$$= \frac{1}{M} \sum_{m=0}^{M-1} \Phi_{xx}[m]\, e^{j\frac{2\pi}{M}m(k-l)} \qquad (18)$$

$$= \frac{1}{M} \sum_{m=0}^{M-1} \sum_{n=-N}^{N} \phi_{xx}[n]\, e^{-j\frac{2\pi}{M}mn}\, e^{j\frac{2\pi}{M}m(k-l)} \qquad (19)$$

$$= \sum_{n=-N}^{N} \phi_{xx}[n]\, \frac{1}{M} \sum_{m=0}^{M-1} e^{j\frac{2\pi}{M}m(k-l-n)} \qquad (20)$$

$$= \sum_{n=-N}^{N} \phi_{xx}[n]\, \delta[k-l-n] \qquad (21)$$

$$= \phi_{xx}[k-l] = R_{k,l}, \qquad (22)$$

where the step from (20) to (21) follows from the inequalities $-(L-1) \leq k-l \leq L-1$, $-N \leq n \leq N$, and $M \geq N + L$, which guarantee $|k-l-n| < M$.
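Lemma 1 can be checked numerically. The sketch below (our own verification, with an arbitrary truncated autocorrelation and sizes satisfying $M \geq N + L$) builds $\Phi_{xx}$ and $U$ as defined above and compares $U\Psi U^*$ against the direct Toeplitz construction of $R$:

```python
import numpy as np

# Autocorrelation phi_xx[n] = 0.8**|n| truncated to |n| <= N (illustrative).
N, L, M = 5, 4, 16                   # M >= N + L, as the lemma requires
phi = 0.8 ** np.arange(N + 1)

# Phi_xx[m]: DTFT samples of the truncated autocorrelation, eq. (15).
n = np.arange(-N, N + 1)
phi_full = np.concatenate([phi[:0:-1], phi])   # phi_xx[-N..N], even sequence
m = np.arange(M)
Phi = (phi_full * np.exp(-2j * np.pi * np.outer(m, n) / M)).sum(axis=1)

# U_{l,m} = exp(j*2*pi*m*l/M)/sqrt(M), and R_hat = U Psi U^*, eq. (16).
U = np.exp(2j * np.pi * np.outer(np.arange(L), m) / M) / np.sqrt(M)
R_hat = (U * Phi) @ U.conj().T

# Direct Toeplitz construction R[k, l] = phi_xx[k - l].
idx = np.abs(np.subtract.outer(np.arange(L), np.arange(L)))
R = phi[idx]
```

Because $\phi_{xx}$ is even, every $\Phi_{xx}[m]$ comes out real, and the factorization matches $R$ to machine precision whenever $M \geq N + L$ holds.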
Let $V_0 = U^* v_0$ be the $M$-point DFT of $v_0$, i.e.

$$V_0[m] = \frac{1}{\sqrt{M}} \sum_{n=0}^{L-1} v_0[n]\, e^{-j\frac{2\pi}{M}mn}, \qquad m = 0, 1, \ldots, M-1. \qquad (23)$$

Using Lemma 1 and (23) we obtain

$$v_0^T R v_0 = v_0^T U \Psi U^* v_0 = \sum_{m=0}^{M-1} \Phi_{xx}[m]\, |V_0[m]|^2; \qquad (24)$$

furthermore, inserting (24) into (14), we arrive at the following approximation for the Transient Efficiency in terms of the input PSD and the spectrum of $v_0$:

$$\text{Transient Efficiency} \approx \sum_{m=0}^{M-1} \Phi_{xx}[m]\, |V_0[m]|^2. \qquad (25)$$

We should note that, although the independence assumption under which the approximation in (14) was derived is violated when a tapped delay line is used, it has been observed to hold in simulations. Since $v_0$ and $R$ are scaled such that $\|v_0\| = \lambda_{avg} = 1$, we can show that

$$\frac{1}{M}\sum_{m=0}^{M-1} \Phi_{xx}[m] = 1, \qquad \sum_{m=0}^{M-1} |V_0[m]|^2 = 1. \qquad (26)$$

These normalizations on $\Phi_{xx}[m]$ and $|V_0[m]|^2$ allow us to interpret the right-hand side of (25) as a weighted average of $\Phi_{xx}[m]$, where the weights are given by $|V_0[m]|^2$. Therefore, if $\Phi_{xx}[m]$ is large in the frequency ranges where $|V_0[m]|^2$ is also large, the Transient Efficiency will likely be bigger than one, implying that LMS will outperform LMS/Newton under the $J$ metric. On the other hand, if $\Phi_{xx}[m]$ tends to be small in the frequency ranges where $|V_0[m]|^2$ is large, the Transient Efficiency will likely be less than one, indicating that LMS will perform worse than LMS/Newton in the $J$-metric sense.
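The weighted-average reading of (25) can be sketched numerically. In the example below (an illustration with assumed values: an AR(1)-style low-pass input and two hand-picked deviation vectors), a low-pass $v_0$ yields an efficiency above one, a high-pass $v_0$ yields one below it, and both agree with the direct quadratic form (13):

```python
import numpy as np

def spectral_te(phi, v0, M):
    """Transient efficiency via (25): weighted average of the normalized input PSD."""
    L, N = len(v0), len(phi) - 1
    n = np.arange(-N, N + 1)
    m = np.arange(M)
    phi_full = np.concatenate([phi[:0:-1], phi])
    Phi = (phi_full * np.exp(-2j * np.pi * np.outer(m, n) / M)).sum(axis=1).real
    Phi = Phi / phi[0]                 # lam_avg = phi_xx[0]; now (1/M) sum Phi = 1
    v = v0 / np.linalg.norm(v0)        # now sum |V0[m]|^2 = 1
    V = np.exp(-2j * np.pi * np.outer(m, np.arange(L)) / M) @ v / np.sqrt(M)
    return float(Phi @ np.abs(V) ** 2)

def direct_te(phi, v0):
    """Transient efficiency via the quadratic form (13)."""
    L = len(v0)
    R = phi[np.abs(np.subtract.outer(np.arange(L), np.arange(L)))]
    return float(v0 @ R @ v0) * L / (np.trace(R) * float(v0 @ v0))

phi = 0.9 ** np.arange(9)                   # low-pass input, phi_xx[n] = 0.9**|n|
v_low = np.ones(4)                          # low-pass deviation (flat weights)
v_high = np.array([1.0, -1.0, 1.0, -1.0])   # high-pass deviation (alternating)
M = 16                                      # M >= N + L = 8 + 4
te_low = spectral_te(phi, v_low, M)
te_high = spectral_te(phi, v_high, M)
```

This mirrors the interpretation above: the low-pass input concentrates $\Phi_{xx}$ where the flat (low-pass) deviation has its spectral weight, and starves the alternating (high-pass) one.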
An important application of (25) is the special case in which the weight vector is initialized to zero, $w_0 = 0$, so that $v_0 = -w^*$, which results in

$$|V_0[m]|^2 = |W^*[m]|^2, \qquad (27)$$

where $W^*[m]$ is the $M$-point Discrete Fourier Transform of the optimal (Wiener) solution $w^*$, as defined in (23). Therefore, when the weight vector is initialized to zero, the speed of convergence of the LMS algorithm can be estimated by the weighted average of the input PSD with the weights given by the spectrum of the Wiener solution. This observation can be very useful in system identification applications, where prior approximate knowledge of the input PSD and of the frequency response of the plant to be identified may be used to estimate LMS performance before implementing it. For example, if the plant has a low-pass frequency response and the input PSD has very little power above the cut-off frequency, LMS will converge very fast. On the other hand, if the plant has a high-pass nature and the input PSD is small at high frequencies, LMS will be very slow. In channel equalization applications the spectrum of the optimum solution is the opposite of the channel frequency response; hence, if we initialize the weight vector to zero, (25) will be small, implying slow convergence of LMS, a phenomenon often observed when using LMS in equalization or adaptive inverse control problems. We illustrate these ideas with two examples in the next section.

V. SIMULATIONS

Consider the system identification problem depicted in Fig. 4. The additive plant noise $n_k$ is independent of the input. Both the adaptive filter and the plant consist of 6 taps, so the spectrum of the optimum solution coincides with the plant frequency response. The weight vector is initialized to zero and adapted for 8 iterations. The learning rate is set so that $\mu\,\mathrm{Tr}(R) = .5$. The power spectrum of the input and the frequency response of the plant to be identified are shown in Fig. 5; the DFT length $M$ of Section IV was used to compute $\Phi_{xx}[m]$ and $|W^*[m]|^2$.
The eigenvalue spread for this exercise was 579. By inspection of the input PSD and the frequency response of the plant, we expect the Transient Efficiency to be bigger than one, indicating that LMS should perform better than LMS/Newton; this is confirmed by the learning curves shown in Fig. 6. The estimate of the Transient Efficiency obtained from the simulation was .6, very close to the approximation of .69 given by (25).

Fig. 4. System identification example (input, unknown FIR plant, adaptive filter, plant noise $n_k$, error, output).

Fig. 5. Input power spectrum $\Phi_{xx}[m]$ and plant response $|W^*[m]|^2$.

Fig. 6. LMS and LMS/Newton learning curves for the system identification example (eigenvalue spread = 579).

To illustrate what happens if we use zero initial conditions when the input PSD and the Wiener-solution spectrum are very distinct, consider the simple equalization problem depicted in Fig. 7. The input to the channel is a white signal; hence the input PSD seen by the adaptive filter is given by the magnitude squared of the channel frequency response. The channel frequency response and the spectrum of the Wiener solution for the adaptive equalizer are shown in Fig. 8, computed with the same DFT length $M$. The eigenvalue spread of the input to the adaptive filter was 75. As intuitively expected, the input
PSD and Wiener spectrum have opposite shapes, resulting in a small Transient Efficiency and indicating that LMS should perform worse than LMS/Newton; this was corroborated by the learning curves shown in Fig. 9. The estimate of the Transient Efficiency obtained from the simulation was .3, very close to the approximation of .37 given by (25).

Fig. 7. Equalization example (channel, adaptive equalizer, error, output).

Fig. 8. Input PSD $\Phi_{xx}[m]$ and Wiener-solution spectrum $|W^*[m]|^2$ (responses of the channel and the optimum equalizer).

Fig. 9. LMS and LMS/Newton learning curves for the equalization example (eigenvalue spread = 75).

VI. CONCLUSIONS AND FUTURE WORK

We have shown that when the LMS algorithm is used to train a transversal adaptive filter with jointly stationary input and desired-response signals, its transient performance can be assessed by the inner product between the input PSD and the spectrum of the initial weight-vector deviation from the Wiener solution. This implies that when the initial weight vector is set to zero, the transient efficiency of the LMS algorithm can be predicted from approximate knowledge of the input PSD and the frequency response of the Wiener solution. We described how this theory can be applied to system identification and equalization tasks; however, our results can be useful for predicting LMS performance in any application where the input autocorrelation matrix is Toeplitz and spectral information about the input and the optimum solution is available. An extension of this analysis is being made to nonstationary conditions, where the spectral characteristics of the input and optimum solution keep a general shape, say both being always low-pass. A similar analysis based on the Mean Square Deviation (MSD) instead of the MSE is also currently under investigation.

REFERENCES

[1] B. Widrow and M. E. Hoff, Jr., "Adaptive switching circuits," in IRE WESCON Conv. Rec., pt. 4, 1960, pp. 96-104.
[2] B. Widrow, J. M. McCool, M. G. Larimore, and C. R.
Johnson, Jr., "Stationary and nonstationary learning characteristics of the adaptive LMS filter," Proc. IEEE, vol. 64, no. 8, pp. 1151-1162, Aug. 1976.
[3] L. L. Horowitz and K. D. Senne, "Performance advantage of complex LMS for controlling narrow-band adaptive arrays," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-29, no. 3, June 1981.
[4] B. Widrow and E. Walach, "On the statistical efficiency of the LMS algorithm with nonstationary inputs," IEEE Trans. Inf. Theory, vol. IT-30, no. 2, pp. 211-221, 1984.
[5] A. Feuer and E. Weinstein, "Convergence analysis of LMS filters with uncorrelated Gaussian data," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-33, no. 1, Feb. 1985.
[6] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Upper Saddle River, NJ: Prentice Hall, 1985.
[7] F. M. Boland and J. B. Foley, "Stochastic convergence of the LMS algorithm in adaptive systems," Signal Processing, vol. 13, Dec. 1987.
[8] B. Farhang-Boroujeny, "On statistical efficiency of the LMS algorithm in system modeling," IEEE Trans. Signal Process., vol. 41, no. 5, May 1993.
[9] D. R. Morgan, "Slow asymptotic convergence of LMS acoustic echo cancelers," IEEE Trans. Speech Audio Process., vol. 3, no. 2, pp. 126-136, Mar. 1995.
[10] S. C. Douglas and W. Pan, "Exact expectation analysis of the LMS adaptive filter," IEEE Trans. Signal Process., vol. 43, no. 12, Dec. 1995.
[11] B. Hassibi, A. H. Sayed, and T. Kailath, "H-infinity optimality of the LMS algorithm," IEEE Trans. Signal Process., vol. 44, no. 2, pp. 267-280, Feb. 1996.
[12] M. Reuter and J. R. Zeidler, "Nonlinear effects in LMS adaptive equalizers," IEEE Trans. Signal Process., vol. 47, no. 6, June 1999.
[13] K. J. Quirk, L. B. Milstein, and J. R. Zeidler, "A performance bound for the LMS estimator," IEEE Trans. Inf. Theory, vol. 46, no. 3, May 2000.
[14] N. R. Yousef and A. H.
Sayed, "A unified approach to the steady-state and tracking analyses of adaptive filters," IEEE Trans. Signal Process., vol. 49, no. 2, pp. 314-324, Feb. 2001.
[15] H. J. Butterweck, "A wave theory of long adaptive filters," IEEE Trans. Circuits Syst. I: Fundam. Theory Appl., vol. 48, no. 6, June 2001.
[16] O. Dabeer and E. Masry, "Analysis of mean-square error and transient speed of the LMS adaptive algorithm," IEEE Trans. Inf. Theory, vol. 48, no. 7, July 2002.
[17] B. Widrow and M. Kamenetsky, "Statistical efficiency of adaptive algorithms," Neural Networks, vol. 16, no. 5-6, pp. 735-744, June-July 2003.
[18] S. Haykin and B. Widrow, Eds., Least-Mean-Square Adaptive Filters. Hoboken, NJ: Wiley-Interscience, 2003.
IMPROVEMENTS IN ACTIVE NOISE CONTROL OF HELICOPTER NOISE IN A MOCK CABIN Jared K. Thomas Brigham Young University Department of Mechanical Engineering ABSTRACT The application of active noise control (ANC)
More informationBlind Source Separation with a Time-Varying Mixing Matrix
Blind Source Separation with a Time-Varying Mixing Matrix Marcus R DeYoung and Brian L Evans Department of Electrical and Computer Engineering The University of Texas at Austin 1 University Station, Austin,
More informationAdaptive Filters. un [ ] yn [ ] w. yn n wun k. - Adaptive filter (FIR): yn n n w nun k. (1) Identification. Unknown System + (2) Inverse modeling
Adaptive Filters - Statistical digital signal processing: in many problems of interest, the signals exhibit some inherent variability plus additive noise we use probabilistic laws to model the statistical
More informationA METHOD OF ADAPTATION BETWEEN STEEPEST- DESCENT AND NEWTON S ALGORITHM FOR MULTI- CHANNEL ACTIVE CONTROL OF TONAL NOISE AND VIBRATION
A METHOD OF ADAPTATION BETWEEN STEEPEST- DESCENT AND NEWTON S ALGORITHM FOR MULTI- CHANNEL ACTIVE CONTROL OF TONAL NOISE AND VIBRATION Jordan Cheer and Stephen Daley Institute of Sound and Vibration Research,
More informationError Vector Normalized Adaptive Algorithm Applied to Adaptive Noise Canceller and System Identification
American J. of Engineering and Applied Sciences 3 (4): 710-717, 010 ISSN 1941-700 010 Science Publications Error Vector Normalized Adaptive Algorithm Applied to Adaptive Noise Canceller and System Identification
More informationParametric Signal Modeling and Linear Prediction Theory 1. Discrete-time Stochastic Processes
Parametric Signal Modeling and Linear Prediction Theory 1. Discrete-time Stochastic Processes Electrical & Computer Engineering North Carolina State University Acknowledgment: ECE792-41 slides were adapted
More informationOn Information Maximization and Blind Signal Deconvolution
On Information Maximization and Blind Signal Deconvolution A Röbel Technical University of Berlin, Institute of Communication Sciences email: roebel@kgwtu-berlinde Abstract: In the following paper we investigate
More informationTHE concept of active sound control has been known
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 10, NO. 2, MARCH 1999 391 Improved Training of Neural Networks for the Nonlinear Active Control of Sound and Vibration Martin Bouchard, Member, IEEE, Bruno Paillard,
More informationA Fast Algorithm for. Nonstationary Delay Estimation. H. C. So. Department of Electronic Engineering, City University of Hong Kong
A Fast Algorithm for Nonstationary Delay Estimation H. C. So Department of Electronic Engineering, City University of Hong Kong Tat Chee Avenue, Kowloon, Hong Kong Email : hcso@ee.cityu.edu.hk June 19,
More informationarxiv: v1 [cs.sd] 28 Feb 2017
Nonlinear Volterra Model of a Loudspeaker Behavior Based on Laser Doppler Vibrometry Alessandro Loriga, Parvin Moyassari, and Daniele Bernardini Intranet Standard GmbH, Ottostrasse 3, 80333 Munich, Germany
More informationKNOWN approaches for improving the performance of
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS, VOL. 58, NO. 8, AUGUST 2011 537 Robust Quasi-Newton Adaptive Filtering Algorithms Md. Zulfiquar Ali Bhotto, Student Member, IEEE, and Andreas
More informationEIGENFILTERS FOR SIGNAL CANCELLATION. Sunil Bharitkar and Chris Kyriakakis
EIGENFILTERS FOR SIGNAL CANCELLATION Sunil Bharitkar and Chris Kyriakakis Immersive Audio Laboratory University of Southern California Los Angeles. CA 9. USA Phone:+1-13-7- Fax:+1-13-7-51, Email:ckyriak@imsc.edu.edu,bharitka@sipi.usc.edu
More informationThe Kalman filter is arguably one of the most notable algorithms
LECTURE E NOTES «Kalman Filtering with Newton s Method JEFFREY HUMPHERYS and JEREMY WEST The Kalman filter is arguably one of the most notable algorithms of the 0th century [1]. In this article, we derive
More informationAutomatic Stabilization of an Unmodeled Dynamical System Final Report
Automatic Stabilization of an Unmodeled Dynamical System: Final Report 1 Automatic Stabilization of an Unmodeled Dynamical System Final Report Gregory L. Plett and Clinton Eads May 2000. 1 Introduction
More informationA low intricacy variable step-size partial update adaptive algorithm for Acoustic Echo Cancellation USNRao
ISSN: 77-3754 International Journal of Engineering and Innovative echnology (IJEI Volume 1, Issue, February 1 A low intricacy variable step-size partial update adaptive algorithm for Acoustic Echo Cancellation
More informationComparative Performance Analysis of Three Algorithms for Principal Component Analysis
84 R. LANDQVIST, A. MOHAMMED, COMPARATIVE PERFORMANCE ANALYSIS OF THR ALGORITHMS Comparative Performance Analysis of Three Algorithms for Principal Component Analysis Ronnie LANDQVIST, Abbas MOHAMMED Dept.
More informationADAPTIVE INVERSE CONTROL BASED ON NONLINEAR ADAPTIVE FILTERING. Information Systems Lab., EE Dep., Stanford University
ADAPTIVE INVERSE CONTROL BASED ON NONLINEAR ADAPTIVE FILTERING Bernard Widrow 1, Gregory Plett, Edson Ferreira 3 and Marcelo Lamego 4 Information Systems Lab., EE Dep., Stanford University Abstract: Many
More informationTHE Back-Prop (Backpropagation) algorithm of Paul
COGNITIVE COMPUTATION The Back-Prop and No-Prop Training Algorithms Bernard Widrow, Youngsik Kim, Yiheng Liao, Dookun Park, and Aaron Greenblatt Abstract Back-Prop and No-Prop, two training algorithms
More informationSPEECH ENHANCEMENT USING PCA AND VARIANCE OF THE RECONSTRUCTION ERROR IN DISTRIBUTED SPEECH RECOGNITION
SPEECH ENHANCEMENT USING PCA AND VARIANCE OF THE RECONSTRUCTION ERROR IN DISTRIBUTED SPEECH RECOGNITION Amin Haji Abolhassani 1, Sid-Ahmed Selouani 2, Douglas O Shaughnessy 1 1 INRS-Energie-Matériaux-Télécommunications,
More informationA Minimum Error Entropy criterion with Self Adjusting. Step-size (MEE-SAS)
Signal Processing - EURASIP (SUBMIED) A Minimum Error Entropy criterion with Self Adjusting Step-size (MEE-SAS) Seungju Han *, Sudhir Rao *, Deniz Erdogmus, Kyu-Hwa Jeong *, Jose Principe * Corresponding
More informationPOLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS
POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS Russell H. Lambert RF and Advanced Mixed Signal Unit Broadcom Pasadena, CA USA russ@broadcom.com Marcel
More informationRemoving ECG-contamination from EMG signals
ecg_cleansing.sdw 4/4/03 1/12 Removing ECG-contamination from EMG signals 1. Introduction As is well known the heart also creates strong electromagnetic fields. The electric potential from the heart can
More informationUSING STATISTICAL ROOM ACOUSTICS FOR ANALYSING THE OUTPUT SNR OF THE MWF IN ACOUSTIC SENSOR NETWORKS. Toby Christian Lawin-Ore, Simon Doclo
th European Signal Processing Conference (EUSIPCO 1 Bucharest, Romania, August 7-31, 1 USING STATISTICAL ROOM ACOUSTICS FOR ANALYSING THE OUTPUT SNR OF THE MWF IN ACOUSTIC SENSOR NETWORKS Toby Christian
More informationPerformance analysis and design of FxLMS algorithm in broadband ANC system with online secondary-path modeling
Title Performance analysis design of FxLMS algorithm in broadb ANC system with online secondary-path modeling Author(s) Chan, SC; Chu, Y Citation IEEE Transactions on Audio, Speech Language Processing,
More informationLecture: Adaptive Filtering
ECE 830 Spring 2013 Statistical Signal Processing instructors: K. Jamieson and R. Nowak Lecture: Adaptive Filtering Adaptive filters are commonly used for online filtering of signals. The goal is to estimate
More informationStochastic Processes. M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno
Stochastic Processes M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno 1 Outline Stochastic (random) processes. Autocorrelation. Crosscorrelation. Spectral density function.
More informationExploring Granger Causality for Time series via Wald Test on Estimated Models with Guaranteed Stability
Exploring Granger Causality for Time series via Wald Test on Estimated Models with Guaranteed Stability Nuntanut Raksasri Jitkomut Songsiri Department of Electrical Engineering, Faculty of Engineering,
More informationGENERALIZED DEFLATION ALGORITHMS FOR THE BLIND SOURCE-FACTOR SEPARATION OF MIMO-FIR CHANNELS. Mitsuru Kawamoto 1,2 and Yujiro Inouye 1
GENERALIZED DEFLATION ALGORITHMS FOR THE BLIND SOURCE-FACTOR SEPARATION OF MIMO-FIR CHANNELS Mitsuru Kawamoto,2 and Yuiro Inouye. Dept. of Electronic and Control Systems Engineering, Shimane University,
More informationHirschman Optimal Transform Block LMS Adaptive
Provisional chapter Chapter 1 Hirschman Optimal Transform Block LMS Adaptive Hirschman Optimal Transform Block LMS Adaptive Filter Filter Osama Alkhouli, Osama Alkhouli, Victor DeBrunner and Victor DeBrunner
More informationProf. Dr.-Ing. Armin Dekorsy Department of Communications Engineering. Stochastic Processes and Linear Algebra Recap Slides
Prof. Dr.-Ing. Armin Dekorsy Department of Communications Engineering Stochastic Processes and Linear Algebra Recap Slides Stochastic processes and variables XX tt 0 = XX xx nn (tt) xx 2 (tt) XX tt XX
More informationLeast Mean Squares Regression. Machine Learning Fall 2018
Least Mean Squares Regression Machine Learning Fall 2018 1 Where are we? Least Squares Method for regression Examples The LMS objective Gradient descent Incremental/stochastic gradient descent Exercises
More information7.7 The Schottky Formula for Shot Noise
110CHAPTER 7. THE WIENER-KHINCHIN THEOREM AND APPLICATIONS 7.7 The Schottky Formula for Shot Noise On p. 51, we found that if one averages τ seconds of steady electron flow of constant current then the
More informationLeast Mean Squares Regression
Least Mean Squares Regression Machine Learning Spring 2018 The slides are mainly from Vivek Srikumar 1 Lecture Overview Linear classifiers What functions do linear classifiers express? Least Squares Method
More information1 Introduction Blind source separation (BSS) is a fundamental problem which is encountered in a variety of signal processing problems where multiple s
Blind Separation of Nonstationary Sources in Noisy Mixtures Seungjin CHOI x1 and Andrzej CICHOCKI y x Department of Electrical Engineering Chungbuk National University 48 Kaeshin-dong, Cheongju Chungbuk
More informationFAST AND ACCURATE DIRECTION-OF-ARRIVAL ESTIMATION FOR A SINGLE SOURCE
Progress In Electromagnetics Research C, Vol. 6, 13 20, 2009 FAST AND ACCURATE DIRECTION-OF-ARRIVAL ESTIMATION FOR A SINGLE SOURCE Y. Wu School of Computer Science and Engineering Wuhan Institute of Technology
More informationStochastic error whitening algorithm for linear filter estimation with noisy data
Neural Networks 16 (2003) 873 880 2003 Special issue Stochastic error whitening algorithm for linear filter estimation with noisy data Yadunandana N. Rao*, Deniz Erdogmus, Geetha Y. Rao, Jose C. Principe
More informationNONLINEAR ECHO CANCELLATION FOR HANDS-FREE SPEAKERPHONES. Bryan S. Nollett and Douglas L. Jones
NONLINEAR ECHO CANCELLATION FOR HANDS-FREE SPEAKERPHONES Bryan S. Nollett and Douglas L. Jones Coordinated Science Laboratory University of Illinois at Urbana-Champaign 1308 W. Main St. Urbana, IL 61801
More informationSpatial Smoothing and Broadband Beamforming. Bhaskar D Rao University of California, San Diego
Spatial Smoothing and Broadband Beamforming Bhaskar D Rao University of California, San Diego Email: brao@ucsd.edu Reference Books and Papers 1. Optimum Array Processing, H. L. Van Trees 2. Stoica, P.,
More informationOptimal and Adaptive Filtering
Optimal and Adaptive Filtering Murat Üney M.Uney@ed.ac.uk Institute for Digital Communications (IDCOM) 26/06/2017 Murat Üney (IDCOM) Optimal and Adaptive Filtering 26/06/2017 1 / 69 Table of Contents 1
More informationAnalysis of stochastic gradient tracking of time-varying polynomial Wiener systems
Analysis of stochastic gradient tracking of time-varying polynomial Wiener systems Author Celka, Patrick Published 2000 Journal Title IEEE Transaction on Signal Processing DOI https://doi.org/10.1109/78.845925
More informationwhere A 2 IR m n is the mixing matrix, s(t) is the n-dimensional source vector (n» m), and v(t) is additive white noise that is statistically independ
BLIND SEPARATION OF NONSTATIONARY AND TEMPORALLY CORRELATED SOURCES FROM NOISY MIXTURES Seungjin CHOI x and Andrzej CICHOCKI y x Department of Electrical Engineering Chungbuk National University, KOREA
More informationAN ALTERNATIVE FEEDBACK STRUCTURE FOR THE ADAPTIVE ACTIVE CONTROL OF PERIODIC AND TIME-VARYING PERIODIC DISTURBANCES
Journal of Sound and Vibration (1998) 21(4), 517527 AN ALTERNATIVE FEEDBACK STRUCTURE FOR THE ADAPTIVE ACTIVE CONTROL OF PERIODIC AND TIME-VARYING PERIODIC DISTURBANCES M. BOUCHARD Mechanical Engineering
More informationPerformance Analysis of an Adaptive Algorithm for DOA Estimation
Performance Analysis of an Adaptive Algorithm for DOA Estimation Assimakis K. Leros and Vassilios C. Moussas Abstract This paper presents an adaptive approach to the problem of estimating the direction
More informationLMS and eigenvalue spread 2. Lecture 3 1. LMS and eigenvalue spread 3. LMS and eigenvalue spread 4. χ(r) = λ max λ min. » 1 a. » b0 +b. b 0 a+b 1.
Lecture Lecture includes the following: Eigenvalue spread of R and its influence on the convergence speed for the LMS. Variants of the LMS: The Normalized LMS The Leaky LMS The Sign LMS The Echo Canceller
More informationRevision of Lecture 4
Revision of Lecture 4 We have discussed all basic components of MODEM Pulse shaping Tx/Rx filter pair Modulator/demodulator Bits map symbols Discussions assume ideal channel, and for dispersive channel
More informationReduced-cost combination of adaptive filters for acoustic echo cancellation
Reduced-cost combination of adaptive filters for acoustic echo cancellation Luis A. Azpicueta-Ruiz and Jerónimo Arenas-García Dept. Signal Theory and Communications, Universidad Carlos III de Madrid Leganés,
More information