Performance Comparison of Two Implementations of the Leaky LMS Adaptive Filter. Scott C. Douglas. University of Utah. Salt Lake City, Utah 84112


Performance Comparison of Two Implementations of the Leaky LMS Adaptive Filter

Scott C. Douglas
Department of Electrical Engineering
University of Utah
Salt Lake City, Utah 84112

Abstract: The leaky LMS adaptive filter can be implemented either directly or by adding random white noise to the input signal of the LMS adaptive filter. In this paper, we analyze and compare the mean-square performances of these two adaptive filter implementations for system identification tasks with zero-mean i.i.d. input signals. Our results indicate that the performance of the direct implementation is superior to that of the random noise implementation in all respects. However, for small leakage factors, these performance differences are negligible. Simulations verify the results of the analysis and the conclusions drawn.

Submitted to IEEE TRANSACTIONS ON SIGNAL PROCESSING

EDICS Category No. SP.6.4

Please address correspondence to: Scott C. Douglas, Department of Electrical Engineering, University of Utah, Salt Lake City, UT.

Permission of the IEEE to publish this abstract separately is granted. This research was supported in part by NSF Grant No. MIP

1 Introduction

The leaky least-mean-square (LMS) adaptive filter is a useful variant of the LMS adaptive filter for several communications and signal processing tasks [1, 2, 3, 4, 5]. The leaky LMS coefficient update is given by

W_{k+1} = (1 - \mu\gamma) W_k + \mu e_k X_k    (1)
e_k = d_k - W_k^T X_k,    (2)

where W_k = [w_{0,k} \; w_{1,k} \; \cdots \; w_{L-1,k}]^T is the L-dimensional coefficient vector, X_k = [x_k \; \cdots \; x_{k-L+1}]^T is the input signal vector, d_k is the desired response signal, e_k is the error signal, \mu is the step size, and \gamma is the leakage parameter. For \gamma = 0, equations (1)-(2) describe the coefficient updates for the standard LMS adaptive filter. An approximate analysis of the updates in (1)-(2) for \gamma > 0 shows that the coefficients slowly decay towards zero values if the desired response signal is uncorrelated with the input signal vector [2]. Recently, a more accurate analysis of the leaky LMS algorithm for jointly Gaussian input and desired response signals has enabled useful comparisons of the behaviors of the leaky LMS and LMS adaptive filters to be made [6].

It is well known that a stochastic variant of the leaky LMS adaptive filter can be implemented by adding zero-mean random white noise to the input signal prior to the application of the LMS adaptive filter [7]. The resulting coefficient updates are

W_{k+1} = W_k + \mu \tilde{e}_k \tilde{X}_k    (3)
\tilde{e}_k = d_k - W_k^T \tilde{X}_k,    (4)

where the noisy input signal vector \tilde{X}_k is defined as

\tilde{X}_k = X_k + M_k,    (5)

and M_k = [m_k \; \cdots \; m_{k-L+1}]^T is a noise vector with zero-mean uncorrelated elements. If the noise power \sigma_m^2 = E[m_k^2] is chosen as

\sigma_m^2 = \gamma,    (6)

then it can be shown that the mean behaviors of the algorithms in (1)-(2) and (3)-(4) are approximately the same. If the noise signal m_k can be easily generated (using a maximum-length shift register in VLSI hardware, for example), then this stochastic implementation of the leaky LMS adaptive filter avoids the L multiplies used to compute the scaled coefficient vector (1 - \mu\gamma) W_k in (1).
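To make the two updates concrete, the following NumPy sketch (our own illustration, not from the paper; the function names are hypothetical) implements one iteration of each. For simplicity it treats the input vector as freshly drawn per step, matching the independence assumptions used in the analysis rather than a tapped delay line:

```python
import numpy as np

def leaky_lms_step(w, x_vec, d, mu, gamma):
    """One direct leaky LMS iteration, eqs. (1)-(2):
    W_{k+1} = (1 - mu*gamma) W_k + mu e_k X_k."""
    e = d - w @ x_vec
    return (1.0 - mu * gamma) * w + mu * e * x_vec, e

def noisy_lms_step(w, x_vec, d, mu, gamma, rng):
    """One noisy-input LMS iteration, eqs. (3)-(5): perturb the input
    vector with white noise of power sigma_m^2 = gamma (eq. (6)),
    then apply the ordinary LMS recursion."""
    m = rng.normal(scale=np.sqrt(gamma), size=x_vec.shape)
    x_tilde = x_vec + m
    e = d - w @ x_tilde
    return w + mu * e * x_tilde, e
```

Note that `noisy_lms_step` returns the noisy error \tilde{e}_k of (4); recovering an error signal free of the injected noise would require the additional inner product discussed in the text.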
However, to obtain an error signal that is free of the noise introduced within the adaptation process, the error e_k in (2) must be computed, and thus the two algorithms in (1)-(2) and

(3)-(4) are of similar complexity. Note that the algorithm in (3)-(4) is also obtained in situations where the input signal is dithered prior to sampling, and thus this alternative implementation is of practical interest in its own right.

Even though the mean behaviors of the two algorithms are similar, it is not clear how the mean-square performances of the two algorithms differ in any particular situation. The mean-square behavior of an adaptive filter is a more accurate measure of its overall performance and stability characteristics than its mean behavior; thus, it is not clear which algorithm is to be preferred in any particular situation. In this paper, we compare the mean-square performances of the two algorithms in (1)-(2) and (3)-(4), respectively, assuming a system identification desired response signal model of the form

d_k = W_{opt}^T X_k + n_k,    (7)

where W_{opt} is the optimal coefficient vector, n_k is an uncorrelated noise signal, and x_k and m_k are assumed to be independent, identically-distributed (i.i.d.) signals with even-symmetric probability density functions p_X(x) and p_M(m), respectively. In addition, we shall also assume that vectors within the sequences \{X_k\} and \{M_k\} are independent of each other and of the noise sequence n_k. Such assumptions are similar to the independence assumptions often used in analyses of this sort [7, 8]. While never true for an FIR filter configuration, these assumptions lead to reasonably accurate descriptions of the adaptation behaviors of the algorithms, and they allow meaningful comparisons of different algorithms to be made. Through our analyses, we show the following:

- For any particular value of \gamma, the range of stable step sizes for the LMS adaptive filter with noisy inputs is smaller than that for the leaky LMS adaptive filter.

- For any stable values of \mu and \gamma, the LMS adaptive filter with noisy inputs converges no faster than the leaky LMS adaptive filter.
- For any stable values of \mu and \gamma, the LMS adaptive filter with noisy inputs has a higher steady-state excess mean-square error (MSE) than that of the leaky LMS adaptive filter.

Thus, from a performance standpoint, the leaky LMS adaptive filter is to be preferred in this situation. Simulations verify the analytical results and the above conclusions.

2 Analysis

2.1 Leaky LMS Adaptive Filter

For our analyses, we define the coefficient error vector as

V_k = W_k - W_{opt}.    (8)

With this definition, we can write the leaky LMS adaptive filter update in (1)-(2) as

V_{k+1} = ((1 - \mu\gamma) I - \mu X_k X_k^T) V_k + \mu n_k X_k - \mu\gamma W_{opt}.    (9)

Taking expectations of both sides of (9) and employing our assumptions, we find that

E[V_{k+1}] = (1 - \mu(\sigma_x^2 + \gamma)) E[V_k] - \mu\gamma W_{opt},    (10)

where \sigma_x^2 = E[x_k^2] for our input signal model.

To determine a description of the mean-square behavior of (1)-(2), we post-multiply both sides of (9) by their respective transposes and take expectations of both sides of the resulting equation. This operation results in

E[V_{k+1} V_{k+1}^T] = E[((1 - \mu\gamma) I - \mu X_k X_k^T) V_k V_k^T ((1 - \mu\gamma) I - \mu X_k X_k^T)] + \mu^2 \sigma_n^2 \sigma_x^2 I + \mu^2 \gamma^2 W_{opt} W_{opt}^T - \mu\gamma \left( E[((1 - \mu\gamma) I - \mu X_k X_k^T) V_k] W_{opt}^T + W_{opt} E[V_k^T ((1 - \mu\gamma) I - \mu X_k X_k^T)] \right),    (11)

where \sigma_n^2 = E[n_k^2]. Employing the assumptions described above and relying on results already available for the LMS adaptive filter with i.i.d. inputs [8], we can take the trace of both sides of (11). After simplifying the result, we find the update given by

\mathrm{tr}\,E[V_{k+1} V_{k+1}^T] = \left( 1 - 2\mu(\sigma_x^2 + \gamma) + \mu^2((L - 1 + \kappa_x)\sigma_x^4 + 2\gamma\sigma_x^2 + \gamma^2) \right) \mathrm{tr}\,E[V_k V_k^T] + \mu^2 (L\sigma_x^2\sigma_n^2 + \gamma^2 \|W_{opt}\|^2) - 2\mu\gamma (1 - \mu(\sigma_x^2 + \gamma)) W_{opt}^T E[V_k],    (12)

where \kappa_x = E[x_k^4]/\sigma_x^4. Note that this update depends on the mean coefficient error vector E[V_k]. We can define the (L+1)-dimensional vectors Y_k and B and the matrix A as

Y_k = \begin{bmatrix} \mathrm{tr}\,E[V_k V_k^T] \\ E[V_k] \end{bmatrix}, \quad B = \begin{bmatrix} \mu^2 (L\sigma_x^2\sigma_n^2 + \gamma^2 \|W_{opt}\|^2) \\ -\mu\gamma W_{opt} \end{bmatrix},    (13)

A = \begin{bmatrix} 1 - 2\mu(\sigma_x^2 + \gamma) + \mu^2((L - 1 + \kappa_x)\sigma_x^4 + 2\gamma\sigma_x^2 + \gamma^2) & -2\mu\gamma(1 - \mu(\sigma_x^2 + \gamma)) W_{opt}^T \\ 0 & (1 - \mu(\sigma_x^2 + \gamma)) I \end{bmatrix},    (14)

respectively. With these definitions, we can represent the updates in (10) and (12) as

Y_{k+1} = A Y_k + B.    (15)

Note that the excess MSE at time k is given by

\xi_{MSE,k} = E[(V_k^T X_k)^2] = \sigma_x^2 \,\mathrm{tr}\,E[V_k V_k^T],    (16)

where \mathrm{tr}\,E[V_k V_k^T] is the first entry of Y_k. Thus, the form of (15) allows us to determine both the transient and steady-state mean-square behaviors of the leaky LMS adaptive filter for a known initial state vector Y_0.
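The linear recursion (15) is easy to evaluate numerically. The following sketch (our own illustration under the i.i.d. assumptions above; the helper names are hypothetical) builds A and B from (13)-(14) and iterates (15) to predict the transient excess MSE via (16):

```python
import numpy as np

def leaky_state_space(L, w_opt, mu, gamma, var_x, var_n, kurt_x):
    """Transition matrix A and driving vector B of eqs. (13)-(14) for the
    direct leaky LMS filter. var_x = sigma_x^2, var_n = sigma_n^2,
    kurt_x = E[x^4]/sigma_x^4 (3.0 for Gaussian inputs)."""
    a11 = (1 - 2 * mu * (var_x + gamma)
           + mu**2 * ((L - 1 + kurt_x) * var_x**2
                      + 2 * gamma * var_x + gamma**2))
    A = np.zeros((L + 1, L + 1))
    A[0, 0] = a11
    A[0, 1:] = -2 * mu * gamma * (1 - mu * (var_x + gamma)) * w_opt
    A[1:, 1:] = (1 - mu * (var_x + gamma)) * np.eye(L)
    B = np.zeros(L + 1)
    B[0] = mu**2 * (L * var_x * var_n + gamma**2 * np.dot(w_opt, w_opt))
    B[1:] = -mu * gamma * w_opt
    return A, B

def excess_mse_trajectory(A, B, y0, var_x, n_steps):
    """Iterate Y_{k+1} = A Y_k + B (eq. (15)); the excess MSE of eq. (16)
    is var_x times the first state entry."""
    y = y0.copy()
    out = []
    for _ in range(n_steps):
        out.append(var_x * y[0])
        y = A @ y + B
    return np.array(out)
```

For stable parameters the trajectory decays toward the fixed point (I - A)^{-1} B, which can be checked directly with `np.linalg.solve`.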

For stability, we can determine the values of \mu and \gamma that guarantee that all of the eigenvalues of A are less than one in magnitude. Because of the triangular form of A in (14), this transition matrix has L eigenvalues equal to 1 - \mu(\sigma_x^2 + \gamma) and one eigenvalue equal to the first entry of the matrix. It can easily be shown that a necessary and sufficient condition to guarantee all of the eigenvalues of A to be less than one in magnitude is

|1 - 2\mu(\sigma_x^2 + \gamma) + \mu^2((L - 1 + \kappa_x)\sigma_x^4 + 2\gamma\sigma_x^2 + \gamma^2)| < 1,    (17)

from which we determine the mean-square stability conditions on \mu to be

0 < \mu < \frac{2(\sigma_x^2 + \gamma)}{(L - 1 + \kappa_x)\sigma_x^4 + 2\gamma\sigma_x^2 + \gamma^2}.    (18)

For step sizes that satisfy (18), we can solve for the steady-state excess MSE by first calculating

\lim_{k\to\infty} Y_k = (I - A)^{-1} B,    (19)

from which the excess MSE is simply the first entry of the steady-state value of Y_k scaled by \sigma_x^2. Using the forms of A and B in (13)-(14), we find after some algebra that

\lim_{k\to\infty} Y_k = \begin{bmatrix} \beta + \dfrac{\mu(L\sigma_x^2\sigma_n^2 + (L - 2 + \kappa_x)\sigma_x^4\beta)}{2(\sigma_x^2 + \gamma) - \mu((L - 1 + \kappa_x)\sigma_x^4 + 2\gamma\sigma_x^2 + \gamma^2)} \\ -\dfrac{\gamma}{\sigma_x^2 + \gamma} W_{opt} \end{bmatrix},    (20)

where we have defined

\beta = \frac{\gamma^2 \|W_{opt}\|^2}{(\sigma_x^2 + \gamma)^2}.    (21)

From this result, we find that the steady-state excess MSE for the leaky LMS adaptive filter is

\xi_{MSE,ss} = \sigma_x^2\beta + \frac{\mu\sigma_x^2 (L\sigma_x^2\sigma_n^2 + (L - 2 + \kappa_x)\sigma_x^4\beta)}{2(\sigma_x^2 + \gamma) - \mu((L - 1 + \kappa_x)\sigma_x^4 + 2\gamma\sigma_x^2 + \gamma^2)}.    (22)

The first term on the right-hand side of (22) is the excess MSE due to the bias in the filter coefficients in steady state. The second term is the excess MSE due to coefficient fluctuations caused by a nonzero adaptation speed.

2.2 LMS Adaptive Filter with Noisy Input Signals

We now analyze the behavior of the LMS adaptive filter with noisy input signals. As with the leaky LMS adaptive filter, we write the coefficient updates in (3)-(4) in terms of the coefficient error vector as

V_{k+1} = (I - \mu \tilde{X}_k \tilde{X}_k^T) V_k + \mu n_k \tilde{X}_k - \mu \tilde{X}_k M_k^T W_{opt},    (23)

where \tilde{X}_k is as defined in (5). Taking expectations of both sides and using our assumptions, we determine that the mean coefficient error vector evolves according to

E[V_{k+1}] = (1 - \mu(\sigma_x^2 + \sigma_m^2)) E[V_k] - \mu\sigma_m^2 W_{opt}.    (24)

Thus, if \sigma_m^2 = \gamma, the evolution equation in (24) is the same as that in (10). Therefore, we set \sigma_m^2 = \gamma in what follows.

To determine a mean-square description of the adaptation behavior of this algorithm, we post-multiply both sides of (23) by their respective transposes and take expectations of both sides. The resulting relation is

E[V_{k+1} V_{k+1}^T] = E[(I - \mu \tilde{X}_k \tilde{X}_k^T) V_k V_k^T (I - \mu \tilde{X}_k \tilde{X}_k^T)] + \mu^2 (\sigma_x^2 + \gamma) \sigma_n^2 I - \mu \left( E[(I - \mu \tilde{X}_k \tilde{X}_k^T) V_k W_{opt}^T M_k \tilde{X}_k^T] + E[\tilde{X}_k M_k^T W_{opt} V_k^T (I - \mu \tilde{X}_k \tilde{X}_k^T)] \right) + \mu^2 E[\tilde{X}_k M_k^T W_{opt} W_{opt}^T M_k \tilde{X}_k^T].    (25)

As in the first analysis, we take the trace of both sides of (25) and employ our analysis assumptions to simplify the resulting equation, as given by

\mathrm{tr}\,E[V_{k+1} V_{k+1}^T] = \left( 1 - 2\mu(\sigma_x^2 + \gamma) + \mu^2((L - 1 + \kappa_x)\sigma_x^4 + 2(L + 2)\gamma\sigma_x^2 + (L - 1 + \kappa_m)\gamma^2) \right) \mathrm{tr}\,E[V_k V_k^T] - 2\mu\gamma \left( 1 - \mu((L + 2)\sigma_x^2 + (L - 1 + \kappa_m)\gamma) \right) W_{opt}^T E[V_k] + \mu^2 \left( L(\sigma_x^2 + \gamma)\sigma_n^2 + \gamma(L\sigma_x^2 + (L - 1 + \kappa_m)\gamma) \|W_{opt}\|^2 \right),    (26)

where we have defined \kappa_m = E[m_k^4]/\sigma_m^4 = E[m_k^4]/\gamma^2.

As in the previous case, we can describe the mean-square evolution equation for the LMS adaptive filter with noisy input signals as

\tilde{Y}_{k+1} = \tilde{A} \tilde{Y}_k + \tilde{B},    (27)

where \tilde{A} and \tilde{B} are given by

\tilde{A} = A + \mu^2\gamma \begin{bmatrix} 2(L + 1)\sigma_x^2 + (L - 2 + \kappa_m)\gamma & 2((L + 1)\sigma_x^2 + (L - 2 + \kappa_m)\gamma) W_{opt}^T \\ 0 & 0 \end{bmatrix},    (28)

\tilde{B} = B + \mu^2\gamma \begin{bmatrix} L\sigma_n^2 + (L\sigma_x^2 + (L - 2 + \kappa_m)\gamma) \|W_{opt}\|^2 \\ 0 \end{bmatrix},    (29)

respectively. As in the case of the first analysis, the stability of the LMS adaptive filter with noisy input signals is guaranteed if the first element of \tilde{A} is less than one in magnitude. This criterion results in step size bounds of

0 < \mu < \frac{2(\sigma_x^2 + \gamma)}{(L - 1 + \kappa_x)\sigma_x^4 + 2(L + 2)\gamma\sigma_x^2 + (L - 1 + \kappa_m)\gamma^2}.    (30)
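As a numerical check of the two stability ranges, the following sketch (our own, with hypothetical function names) evaluates the upper step-size bounds of (18) and (30); for any \gamma > 0 the second denominator is strictly larger, so the noisy-input bound is always the smaller of the two:

```python
def leaky_mu_max(L, gamma, var_x, kurt_x):
    """Upper step-size bound of eq. (18) for the direct leaky LMS filter.
    var_x = sigma_x^2, kurt_x = E[x^4]/sigma_x^4."""
    return 2.0 * (var_x + gamma) / (
        (L - 1 + kurt_x) * var_x**2 + 2.0 * gamma * var_x + gamma**2)

def noisy_mu_max(L, gamma, var_x, kurt_x, kurt_m):
    """Upper step-size bound of eq. (30) for the LMS filter with noisy
    inputs; kurt_m = E[m^4]/gamma^2."""
    return 2.0 * (var_x + gamma) / (
        (L - 1 + kurt_x) * var_x**2 + 2.0 * (L + 2) * gamma * var_x
        + (L - 1 + kurt_m) * gamma**2)
```

The gap between the two bounds widens as L or \gamma grows, and closes as \gamma tends to zero, consistent with the comparison in Section 3.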

In addition, we can solve for the stationary point of (27) for stable step sizes in a similar manner as before; the resulting excess MSE for the LMS adaptive filter with noisy inputs is

\tilde{\xi}_{MSE,ss} = \sigma_x^2\beta + \frac{\mu\sigma_x^2 \left( L(\sigma_x^2 + \gamma)\sigma_n^2 + (L\sigma_x^4\gamma^{-1} + (2L - 6 + \kappa_x + \kappa_m)\sigma_x^2 + L\gamma)\sigma_x^2\beta \right)}{2(\sigma_x^2 + \gamma) - \mu((L - 1 + \kappa_x)\sigma_x^4 + 2(L + 2)\gamma\sigma_x^2 + (L - 1 + \kappa_m)\gamma^2)}.    (31)

Using these results, we can compare the mean-square behaviors of the two adaptive filters.

3 Performance Comparison

Examining the upper step size bounds in (18) and (30), we note that the denominator of the upper bound in (30) is always greater than that of (18). Thus, the range of stable step sizes for the LMS adaptive filter with noisy inputs is in general smaller than that for the leaky LMS adaptive filter, and the difference between these two ranges increases if either L, \gamma, or \kappa_m increases.

As for the convergence rates of the two systems, we note that the transition matrices A and \tilde{A} share L eigenvalues, and thus we can compare the eigenvalues represented by the first entries of both matrices directly. From (28), we see that the first entry of \tilde{A} is always larger than the corresponding entry of A for positive leakage factors. Since both of these entries are always positive, the convergence speed of the LMS adaptive filter with noisy inputs is always slower than the convergence speed of the leaky LMS adaptive filter with the same step size and leakage factor. The difference in convergence speeds becomes negligible as the step size and leakage factor are decreased, however, and thus we can conclude that the two adaptive systems converge at nearly the same rate if both \mu and \gamma are suitably small-valued.

We now compare the steady-state excess MSEs of the two adaptive systems. We can express \xi_{MSE,ss} in (22) as

\xi_{MSE,ss} = \sigma_x^2\beta + \frac{c_1}{c_2},    (32)

where c_1 and c_2 are the numerator and denominator, respectively, of the second term on the right-hand side of (22). Comparing this equation with (31), we see that

\tilde{\xi}_{MSE,ss} - \sigma_x^2\beta = \frac{c_1 + \mu\sigma_x^2 \left( L\gamma\sigma_n^2 + (L\sigma_x^4\gamma^{-1} + (L - 4 + \kappa_m)\sigma_x^2 + L\gamma)\sigma_x^2\beta \right)}{c_2 - \mu\gamma(2(L + 1)\sigma_x^2 + (L - 2 + \kappa_m)\gamma)}.    (33)

For any value of L and any distribution of m_k, the numerator and denominator of the right-hand side of (33) will be larger than c_1 and smaller than c_2, respectively. Thus, the steady-state excess MSE of the LMS adaptive filter with noisy inputs is always greater than that of the leaky LMS adaptive filter with the same values of \mu and \gamma.

Combining the above results, we see that the noisy LMS adaptive filter always performs more poorly than the leaky LMS adaptive filter for i.i.d. input signals. The difference in performance will in general increase if either \gamma, \kappa_m, and/or L is increased.
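The two closed-form steady-state expressions can be compared directly. The sketch below (our own, with hypothetical names; parameters as defined in the analysis) evaluates (22) and (31) under \sigma_m^2 = \gamma:

```python
def excess_mse_pair(L, w_norm_sq, mu, gamma, var_x, var_n, kurt_x, kurt_m):
    """Steady-state excess MSE of the leaky LMS filter (eq. (22)) and of
    the LMS filter with noisy inputs (eq. (31)).
    w_norm_sq = ||W_opt||^2, var_x = sigma_x^2, var_n = sigma_n^2."""
    # Coefficient bias term beta of eq. (21).
    beta = gamma**2 * w_norm_sq / (var_x + gamma)**2
    # Leaky LMS, eq. (22).
    num1 = mu * var_x * (L * var_x * var_n
                         + (L - 2 + kurt_x) * var_x**2 * beta)
    den1 = (2 * (var_x + gamma)
            - mu * ((L - 1 + kurt_x) * var_x**2
                    + 2 * gamma * var_x + gamma**2))
    xi_leaky = var_x * beta + num1 / den1
    # Noisy-input LMS, eq. (31).
    num2 = mu * var_x * (L * (var_x + gamma) * var_n
                         + (L * var_x**2 / gamma
                            + (2 * L - 6 + kurt_x + kurt_m) * var_x
                            + L * gamma) * var_x * beta)
    den2 = (2 * (var_x + gamma)
            - mu * ((L - 1 + kurt_x) * var_x**2
                    + 2 * (L + 2) * gamma * var_x
                    + (L - 1 + kurt_m) * gamma**2))
    xi_noisy = var_x * beta + num2 / den2
    return xi_leaky, xi_noisy
```

With the parameters used in the simulations of Section 4 (L = 50, \sigma_x^2 = 1, \sigma_n^2 = 10^{-5}, \mu = 0.01, \gamma = 10^{-5}, Gaussian kurtoses), the noisy-input filter's excess MSE comes out roughly fifty times larger, in line with Figure 1.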

4 Simulation Results

We now verify the above conclusions via simulation. An L = 50 coefficient system identification task was chosen for these simulations, where W_{opt} = [1 \; 1 \; \cdots \; 1]^T. The input and observation noise signals were both chosen to be zero-mean white Gaussian with \sigma_x^2 = 1 and \sigma_n^2 = 0.00001, respectively. For the noisy LMS adaptive filter, the noise signal m_k was chosen to be zero-mean white Gaussian-distributed. One hundred simulation runs were averaged in each case.

Figure 1 shows the convergence of the total coefficient error power \mathrm{tr}\,E[V_k V_k^T] for the leaky LMS, noisy LMS, and LMS adaptive filters in this situation, where \mu = 0.01 and \gamma = 10^{-5}. As can be seen, the leaky LMS adaptive filter's convergence rate is nearly the same as that for the LMS adaptive filter with noisy inputs; however, the steady-state coefficient error power is about 50 times larger for the noisy LMS adaptive filter. Also plotted are the convergence behaviors of the systems as obtained from the analyses, showing that the theory accurately predicts convergence behavior.

Figure 2 shows the total coefficient error powers in steady state for the three algorithms as a function of \mu for \gamma = 10^{-5}. As can be seen, the LMS adaptive filter with noisy inputs has a larger error in steady state as compared to that of the leaky LMS and LMS adaptive filters. Simulated behavior closely matches the theoretical predictions of performance.

Since both leaky LMS implementations converge at nearly the same rate for small step sizes, a useful quantity for comparison purposes is the fraction of the additional steady-state excess MSE due to coefficient fluctuations for the noisy LMS adaptive filter with respect to that of the leaky LMS adaptive filter, given by

\alpha = \frac{\tilde{\xi}_{MSE,ss} - \xi_{MSE,ss}}{\xi_{MSE,ss} - \sigma_x^2\beta}.    (34)

Figure 3 plots \alpha as a function of \gamma for the case described above, with \mu = 0.01. Clearly, the behavior of \alpha follows two different trends for small and large leakage factors, respectively.
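The penalty factor of (34) can be evaluated directly from the closed-form expressions (22) and (31). The sketch below (our own illustration; the function name and parameter names are hypothetical) does so under \sigma_m^2 = \gamma:

```python
def penalty_alpha(L, w_norm_sq, mu, gamma, var_x, var_n, kurt_x, kurt_m):
    """Penalty factor alpha of eq. (34): the additional fluctuation-induced
    excess MSE of the noisy-input LMS filter relative to that of the leaky
    LMS filter, computed from eqs. (22) and (31)."""
    beta = gamma**2 * w_norm_sq / (var_x + gamma)**2  # eq. (21)
    # Fluctuation term of the leaky LMS filter, eq. (22).
    num1 = mu * var_x * (L * var_x * var_n
                         + (L - 2 + kurt_x) * var_x**2 * beta)
    den1 = (2 * (var_x + gamma)
            - mu * ((L - 1 + kurt_x) * var_x**2
                    + 2 * gamma * var_x + gamma**2))
    # Fluctuation term of the noisy-input LMS filter, eq. (31).
    num2 = mu * var_x * (L * (var_x + gamma) * var_n
                         + (L * var_x**2 / gamma
                            + (2 * L - 6 + kurt_x + kurt_m) * var_x
                            + L * gamma) * var_x * beta)
    den2 = (2 * (var_x + gamma)
            - mu * ((L - 1 + kurt_x) * var_x**2
                    + 2 * (L + 2) * gamma * var_x
                    + (L - 1 + kurt_m) * gamma**2))
    return (num2 / den2 - num1 / den1) / (num1 / den1)
```

For small \mu this exact value tracks the high-SNR approximation developed in (35)-(36) closely, including the location and height of the peak in \gamma.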
It can be shown for large signal-to-noise ratios \sigma_x^2\|W_{opt}\|^2/\sigma_n^2 \gg 1 and for \gamma \ll \sigma_x^2 that

\alpha \approx \frac{\gamma\sigma_x^2\|W_{opt}\|^2}{\sigma_x^2\sigma_n^2 + \gamma^2\|W_{opt}\|^2}.    (35)

Moreover, the maximum value of \alpha occurs at \gamma = \sigma_x\sigma_n/\|W_{opt}\| and is approximately

\alpha_{max} \approx \frac{1}{2}\sqrt{SNR_d},    (36)

where SNR_d = \sigma_x^2\|W_{opt}\|^2/\sigma_n^2 is the signal-to-noise ratio of d_k. We can see that the analysis is accurate in predicting simulated behavior, as indicated by the simulation results. It should be noted from (35) that the value of \alpha tends to zero as the leakage factor is reduced. If one considers \alpha = 0.1 to be an acceptable fraction of excess MSE that can be tolerated in the

adaptation process, then for values of \gamma satisfying

\frac{\sigma_x^2}{\gamma} > 10 \, SNR_d,    (37)

both the leaky LMS and noisy LMS adaptive filters have similar behaviors. In other words, if the input-signal-power-to-leakage ratio \sigma_x^2/\gamma is large relative to the signal-to-noise ratio of d_k, either implementation gives satisfactory performance.

5 Conclusions

In this paper, we have provided an analysis of two competing implementations of the leaky LMS adaptive filter. We have shown through both theory and simulation that by adding noise to the input signal of the LMS adaptive filter, one obtains a system whose mean behavior is similar to that of the leaky LMS adaptive filter. However, for every mean-square performance criterion studied, the mean-square behavior of this adaptive filter is worse than that of the leaky LMS adaptive filter, and this performance difference is particularly large for large signal-to-noise ratios and moderate values of the leakage factor. We have also identified the range of leakage factors for which both implementations perform satisfactorily. Simulations verify the analyses and indicate the magnitude of the performance differences.

References

[1] D.L. Cohn and J.L. Melsa, "The residual encoder: An improved ADPCM system for speech digitization," IEEE Trans. Comm., vol. COM-23, no. 9, September 1975.

[2] R.D. Gitlin, H.C. Meadors, and S.B. Weinstein, "The tap-leakage algorithm: An algorithm for the stable operation of a digitally-implemented fractionally-spaced adaptive equalizer," Bell Sys. Tech. J., vol. 61, no. 10, October 1982.

[3] J.R. Treichler, C.R. Johnson, Jr., and M.G. Larimore, Theory and Design of Adaptive Filters (New York: Wiley-Interscience, 1987).

[4] J.R. Gonzal, R.R. Bitmead, and C.R. Johnson, Jr., "The dynamics of bursting in simple adaptive feedback systems with leakage," IEEE Trans. Circuits Systems, vol. CAS-38, no. 5, May 1991.

[5] J. Cioffi and Y.-S. Byun, "Adaptive filters," in Handbook of Digital Signal Processing, S.K. Mitra and J.F. Kaiser, eds.
(New York: Wiley, 1993).

[6] K.A. Mayyas and T. Aboulnasr, "The leaky LMS algorithm: MSE analysis for Gaussian data," accepted for publication in IEEE Trans. Signal Processing; to appear.

[7] S.S. Haykin, Adaptive Filter Theory, 3rd ed. (Englewood Cliffs, NJ: Prentice-Hall, 1995).

[8] W.A. Gardner, "Learning characteristics of stochastic-gradient-descent algorithms: A general study, analysis, and critique," Signal Processing, vol. 6, no. 2, pp. 113-133, April 1984.

List of Figures

Figure 1: Convergence of total coefficient error power, theory and simulation: leaky LMS, noisy LMS, and LMS adaptive filters, white Gaussian input signals, \mu = 0.01, \gamma = 10^{-5}.

Figure 2: Steady-state total coefficient error power as a function of \mu, theory and simulation: leaky LMS, noisy LMS, and LMS adaptive filters, white Gaussian input signals, \gamma = 10^{-5}.

Figure 3: Penalty factor \alpha as a function of the leakage factor \gamma, leaky and noisy LMS adaptive filters, \mu = 0.01.

[Figure 1: Convergence of total coefficient error power, theory and simulation: leaky LMS, noisy LMS, and LMS adaptive filters, white Gaussian input signals, \mu = 0.01, \gamma = 10^{-5}. Plot: total coefficient error power vs. number of iterations; curves: Theory, Leaky LMS (Sim.), Noisy LMS (Sim.), LMS (Sim.).]

[Figure 2: Steady-state total coefficient error power as a function of \mu, theory and simulation: leaky LMS, noisy LMS, and LMS adaptive filters, white Gaussian input signals, \gamma = 10^{-5}. Plot: steady-state coefficient error power vs. \mu; curves: Theory, Leaky LMS (Sim.), Noisy LMS (Sim.), LMS (Sim.).]

[Figure 3: Penalty factor \alpha as a function of the leakage factor \gamma, leaky and noisy LMS adaptive filters, \mu = 0.01. Plot: \alpha vs. \gamma; curves: Theory, Simulation, Eq. (35).]


Variable, Step-Size, Block Normalized, Least Mean, Square Adaptive Filter: A Unied Framework Scientia Iranica, Vol. 15, No. 2, pp 195{202 c Sharif University of Technology, April 2008 Research Note Variable, Step-Size, Block Normalized, Least Mean, Square Adaptive Filter: A Unied Framework M.

More information

Chapter 2 Wiener Filtering

Chapter 2 Wiener Filtering Chapter 2 Wiener Filtering Abstract Before moving to the actual adaptive filtering problem, we need to solve the optimum linear filtering problem (particularly, in the mean-square-error sense). We start

More information

MULTICHANNEL BLIND SEPARATION AND. Scott C. Douglas 1, Andrzej Cichocki 2, and Shun-ichi Amari 2

MULTICHANNEL BLIND SEPARATION AND. Scott C. Douglas 1, Andrzej Cichocki 2, and Shun-ichi Amari 2 MULTICHANNEL BLIND SEPARATION AND DECONVOLUTION OF SOURCES WITH ARBITRARY DISTRIBUTIONS Scott C. Douglas 1, Andrzej Cichoci, and Shun-ichi Amari 1 Department of Electrical Engineering, University of Utah

More information

Applications and fundamental results on random Vandermon

Applications and fundamental results on random Vandermon Applications and fundamental results on random Vandermonde matrices May 2008 Some important concepts from classical probability Random variables are functions (i.e. they commute w.r.t. multiplication)

More information

/97/$10.00 (c) 1997 AACC

/97/$10.00 (c) 1997 AACC Optimal Random Perturbations for Stochastic Approximation using a Simultaneous Perturbation Gradient Approximation 1 PAYMAN SADEGH, and JAMES C. SPALL y y Dept. of Mathematical Modeling, Technical University

More information

below, kernel PCA Eigenvectors, and linear combinations thereof. For the cases where the pre-image does exist, we can provide a means of constructing

below, kernel PCA Eigenvectors, and linear combinations thereof. For the cases where the pre-image does exist, we can provide a means of constructing Kernel PCA Pattern Reconstruction via Approximate Pre-Images Bernhard Scholkopf, Sebastian Mika, Alex Smola, Gunnar Ratsch, & Klaus-Robert Muller GMD FIRST, Rudower Chaussee 5, 12489 Berlin, Germany fbs,

More information

On the steady-state mean squared error of the fixed-point LMS algorithm

On the steady-state mean squared error of the fixed-point LMS algorithm Signal Processing 87 (2007) 3226 3233 Fast communication On the steady-state mean squared error of the fixed-point LMS algorithm Mohamed Ghanassi, Benoıˆt Champagne, Peter Kabal Department of Electrical

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Adaptive MMSE Equalizer with Optimum Tap-length and Decision Delay

Adaptive MMSE Equalizer with Optimum Tap-length and Decision Delay Adaptive MMSE Equalizer with Optimum Tap-length and Decision Delay Yu Gong, Xia Hong and Khalid F. Abu-Salim School of Systems Engineering The University of Reading, Reading RG6 6AY, UK E-mail: {y.gong,x.hong,k.f.abusalem}@reading.ac.uk

More information

Comparison of DDE and ETDGE for. Time-Varying Delay Estimation. H. C. So. Department of Electronic Engineering, City University of Hong Kong

Comparison of DDE and ETDGE for. Time-Varying Delay Estimation. H. C. So. Department of Electronic Engineering, City University of Hong Kong Comparison of DDE and ETDGE for Time-Varying Delay Estimation H. C. So Department of Electronic Engineering, City University of Hong Kong Tat Chee Avenue, Kowloon, Hong Kong Email : hcso@ee.cityu.edu.hk

More information

Convergence Evaluation of a Random Step-Size NLMS Adaptive Algorithm in System Identification and Channel Equalization

Convergence Evaluation of a Random Step-Size NLMS Adaptive Algorithm in System Identification and Channel Equalization Convergence Evaluation of a Random Step-Size NLMS Adaptive Algorithm in System Identification and Channel Equalization 1 Shihab Jimaa Khalifa University of Science, Technology and Research (KUSTAR) Faculty

More information

Elementary Row Operations on Matrices

Elementary Row Operations on Matrices King Saud University September 17, 018 Table of contents 1 Definition A real matrix is a rectangular array whose entries are real numbers. These numbers are organized on rows and columns. An m n matrix

More information

ACTIVE noise control (ANC) ([1], [2]) is an established

ACTIVE noise control (ANC) ([1], [2]) is an established 286 IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, VOL. 13, NO. 2, MARCH 2005 Convergence Analysis of a Complex LMS Algorithm With Tonal Reference Signals Mrityunjoy Chakraborty, Senior Member, IEEE,

More information

A new fast algorithm for blind MA-system identication. based on higher order cumulants. K.D. Kammeyer and B. Jelonnek

A new fast algorithm for blind MA-system identication. based on higher order cumulants. K.D. Kammeyer and B. Jelonnek SPIE Advanced Signal Proc: Algorithms, Architectures & Implementations V, San Diego, -9 July 99 A new fast algorithm for blind MA-system identication based on higher order cumulants KD Kammeyer and B Jelonnek

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

Performance analysis and design of FxLMS algorithm in broadband ANC system with online secondary-path modeling

Performance analysis and design of FxLMS algorithm in broadband ANC system with online secondary-path modeling Title Performance analysis design of FxLMS algorithm in broadb ANC system with online secondary-path modeling Author(s) Chan, SC; Chu, Y Citation IEEE Transactions on Audio, Speech Language Processing,

More information

Binary Step Size Variations of LMS and NLMS

Binary Step Size Variations of LMS and NLMS IOSR Journal of VLSI and Signal Processing (IOSR-JVSP) Volume, Issue 4 (May. Jun. 013), PP 07-13 e-issn: 319 400, p-issn No. : 319 4197 Binary Step Size Variations of LMS and NLMS C Mohan Rao 1, Dr. B

More information

On the Average Crossing Rates in Selection Diversity

On the Average Crossing Rates in Selection Diversity PREPARED FOR IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS (ST REVISION) On the Average Crossing Rates in Selection Diversity Hong Zhang, Student Member, IEEE, and Ali Abdi, Member, IEEE Abstract This letter

More information

Recursive Generalized Eigendecomposition for Independent Component Analysis

Recursive Generalized Eigendecomposition for Independent Component Analysis Recursive Generalized Eigendecomposition for Independent Component Analysis Umut Ozertem 1, Deniz Erdogmus 1,, ian Lan 1 CSEE Department, OGI, Oregon Health & Science University, Portland, OR, USA. {ozertemu,deniz}@csee.ogi.edu

More information

A low intricacy variable step-size partial update adaptive algorithm for Acoustic Echo Cancellation USNRao

A low intricacy variable step-size partial update adaptive algorithm for Acoustic Echo Cancellation USNRao ISSN: 77-3754 International Journal of Engineering and Innovative echnology (IJEI Volume 1, Issue, February 1 A low intricacy variable step-size partial update adaptive algorithm for Acoustic Echo Cancellation

More information

Relative Irradiance. Wavelength (nm)

Relative Irradiance. Wavelength (nm) Characterization of Scanner Sensitivity Gaurav Sharma H. J. Trussell Electrical & Computer Engineering Dept. North Carolina State University, Raleigh, NC 7695-79 Abstract Color scanners are becoming quite

More information

ADAPTIVE signal processing algorithms (ASPA s) are

ADAPTIVE signal processing algorithms (ASPA s) are IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 46, NO. 12, DECEMBER 1998 3315 Locally Optimum Adaptive Signal Processing Algorithms George V. Moustakides Abstract We propose a new analytic method for comparing

More information

y(n) Time Series Data

y(n) Time Series Data Recurrent SOM with Local Linear Models in Time Series Prediction Timo Koskela, Markus Varsta, Jukka Heikkonen, and Kimmo Kaski Helsinki University of Technology Laboratory of Computational Engineering

More information

POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS

POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS Russell H. Lambert RF and Advanced Mixed Signal Unit Broadcom Pasadena, CA USA russ@broadcom.com Marcel

More information

798 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 44, NO. 10, OCTOBER 1997

798 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 44, NO. 10, OCTOBER 1997 798 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL 44, NO 10, OCTOBER 1997 Stochastic Analysis of the Modulator Differential Pulse Code Modulator Rajesh Sharma,

More information

Convolutive Blind Source Separation based on Multiple Decorrelation. Lucas Parra, Clay Spence, Bert De Vries Sarno Corporation, CN-5300, Princeton, NJ

Convolutive Blind Source Separation based on Multiple Decorrelation. Lucas Parra, Clay Spence, Bert De Vries Sarno Corporation, CN-5300, Princeton, NJ Convolutive Blind Source Separation based on Multiple Decorrelation. Lucas Parra, Clay Spence, Bert De Vries Sarno Corporation, CN-5300, Princeton, NJ 08543 lparra j cspence j bdevries @ sarno.com Abstract

More information

SNR lidar signal improovement by adaptive tecniques

SNR lidar signal improovement by adaptive tecniques SNR lidar signal improovement by adaptive tecniques Aimè Lay-Euaille 1, Antonio V. Scarano Dipartimento di Ingegneria dell Innovazione, Univ. Degli Studi di Lecce via Arnesano, Lecce 1 aime.lay.euaille@unile.it

More information

1 Introduction Consider the following: given a cost function J (w) for the parameter vector w = [w1 w2 w n ] T, maximize J (w) (1) such that jjwjj = C

1 Introduction Consider the following: given a cost function J (w) for the parameter vector w = [w1 w2 w n ] T, maximize J (w) (1) such that jjwjj = C On Gradient Adaptation With Unit-Norm Constraints Scott C. Douglas 1, Shun-ichi Amari 2, and S.-Y. Kung 3 1 Department of Electrical Engineering, Southern Methodist University Dallas, Texas 75275 USA 2

More information

MMSE Decision Feedback Equalization of Pulse Position Modulated Signals

MMSE Decision Feedback Equalization of Pulse Position Modulated Signals SE Decision Feedback Equalization of Pulse Position odulated Signals AG Klein and CR Johnson, Jr School of Electrical and Computer Engineering Cornell University, Ithaca, NY 4853 email: agk5@cornelledu

More information

Parameter Derivation of Type-2 Discrete-Time Phase-Locked Loops Containing Feedback Delays

Parameter Derivation of Type-2 Discrete-Time Phase-Locked Loops Containing Feedback Delays Parameter Derivation of Type- Discrete-Time Phase-Locked Loops Containing Feedback Delays Joey Wilson, Andrew Nelson, and Behrouz Farhang-Boroujeny joey.wilson@utah.edu, nelson@math.utah.edu, farhang@ece.utah.edu

More information

Experimental evidence showing that stochastic subspace identication methods may fail 1

Experimental evidence showing that stochastic subspace identication methods may fail 1 Systems & Control Letters 34 (1998) 303 312 Experimental evidence showing that stochastic subspace identication methods may fail 1 Anders Dahlen, Anders Lindquist, Jorge Mari Division of Optimization and

More information

Variable Learning Rate LMS Based Linear Adaptive Inverse Control *

Variable Learning Rate LMS Based Linear Adaptive Inverse Control * ISSN 746-7659, England, UK Journal of Information and Computing Science Vol., No. 3, 6, pp. 39-48 Variable Learning Rate LMS Based Linear Adaptive Inverse Control * Shuying ie, Chengjin Zhang School of

More information

Research Article Efficient Multichannel NLMS Implementation for Acoustic Echo Cancellation

Research Article Efficient Multichannel NLMS Implementation for Acoustic Echo Cancellation Hindawi Publishing Corporation EURASIP Journal on Audio, Speech, and Music Processing Volume 27, Article ID 78439, 6 pages doi:1.1155/27/78439 Research Article Efficient Multichannel NLMS Implementation

More information

THE PROBLEMS OF ROBUST LPC PARAMETRIZATION FOR. Petr Pollak & Pavel Sovka. Czech Technical University of Prague

THE PROBLEMS OF ROBUST LPC PARAMETRIZATION FOR. Petr Pollak & Pavel Sovka. Czech Technical University of Prague THE PROBLEMS OF ROBUST LPC PARAMETRIZATION FOR SPEECH CODING Petr Polla & Pavel Sova Czech Technical University of Prague CVUT FEL K, 66 7 Praha 6, Czech Republic E-mail: polla@noel.feld.cvut.cz Abstract

More information

On the Use of A Priori Knowledge in Adaptive Inverse Control

On the Use of A Priori Knowledge in Adaptive Inverse Control 54 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS PART I: FUNDAMENTAL THEORY AND APPLICATIONS, VOL 47, NO 1, JANUARY 2000 On the Use of A Priori Knowledge in Adaptive Inverse Control August Kaelin, Member,

More information

Performance Comparison of a Second-order adaptive IIR Notch Filter based on Plain Gradient Algorithm

Performance Comparison of a Second-order adaptive IIR Notch Filter based on Plain Gradient Algorithm Performance Comparison of a Second-order adaptive IIR Notch Filter based on Plain Gradient Algorithm 41 Performance Comparison of a Second-order adaptive IIR Notch Filter based on Plain Gradient Algorithm

More information

Adaptive Filters. un [ ] yn [ ] w. yn n wun k. - Adaptive filter (FIR): yn n n w nun k. (1) Identification. Unknown System + (2) Inverse modeling

Adaptive Filters. un [ ] yn [ ] w. yn n wun k. - Adaptive filter (FIR): yn n n w nun k. (1) Identification. Unknown System + (2) Inverse modeling Adaptive Filters - Statistical digital signal processing: in many problems of interest, the signals exhibit some inherent variability plus additive noise we use probabilistic laws to model the statistical

More information

Blind Deconvolution via Maximum Kurtosis Adaptive Filtering

Blind Deconvolution via Maximum Kurtosis Adaptive Filtering Blind Deconvolution via Maximum Kurtosis Adaptive Filtering Deborah Pereg Doron Benzvi The Jerusalem College of Engineering Jerusalem, Israel doronb@jce.ac.il, deborahpe@post.jce.ac.il ABSTRACT In this

More information

Adap>ve Filters Part 2 (LMS variants and analysis) ECE 5/639 Sta>s>cal Signal Processing II: Linear Es>ma>on

Adap>ve Filters Part 2 (LMS variants and analysis) ECE 5/639 Sta>s>cal Signal Processing II: Linear Es>ma>on Adap>ve Filters Part 2 (LMS variants and analysis) Sta>s>cal Signal Processing II: Linear Es>ma>on Eric Wan, Ph.D. Fall 2015 1 LMS Variants and Analysis LMS variants Normalized LMS Leaky LMS Filtered-X

More information

Performance of an Adaptive Algorithm for Sinusoidal Disturbance Rejection in High Noise

Performance of an Adaptive Algorithm for Sinusoidal Disturbance Rejection in High Noise Performance of an Adaptive Algorithm for Sinusoidal Disturbance Rejection in High Noise MarcBodson Department of Electrical Engineering University of Utah Salt Lake City, UT 842, U.S.A. (8) 58 859 bodson@ee.utah.edu

More information

Chapter 9 Observers, Model-based Controllers 9. Introduction In here we deal with the general case where only a subset of the states, or linear combin

Chapter 9 Observers, Model-based Controllers 9. Introduction In here we deal with the general case where only a subset of the states, or linear combin Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology c Chapter 9 Observers,

More information

Stochastic error whitening algorithm for linear filter estimation with noisy data

Stochastic error whitening algorithm for linear filter estimation with noisy data Neural Networks 16 (2003) 873 880 2003 Special issue Stochastic error whitening algorithm for linear filter estimation with noisy data Yadunandana N. Rao*, Deniz Erdogmus, Geetha Y. Rao, Jose C. Principe

More information

EE482: Digital Signal Processing Applications

EE482: Digital Signal Processing Applications Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu EE482: Digital Signal Processing Applications Spring 2014 TTh 14:30-15:45 CBC C222 Lecture 11 Adaptive Filtering 14/03/04 http://www.ee.unlv.edu/~b1morris/ee482/

More information

Performance Analysis of Norm Constraint Least Mean Square Algorithm

Performance Analysis of Norm Constraint Least Mean Square Algorithm IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 60, NO. 5, MAY 2012 2223 Performance Analysis of Norm Constraint Least Mean Square Algorithm Guolong Su, Jian Jin, Yuantao Gu, Member, IEEE, and Jian Wang Abstract

More information

A Strict Stability Limit for Adaptive Gradient Type Algorithms

A Strict Stability Limit for Adaptive Gradient Type Algorithms c 009 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional A Strict Stability Limit for Adaptive Gradient Type Algorithms

More information

PDF hosted at the Radboud Repository of the Radboud University Nijmegen

PDF hosted at the Radboud Repository of the Radboud University Nijmegen PDF hosted at the Radboud Repository of the Radboud University Nijmegen The following full text is a preprint version which may differ from the publisher's version. For additional information about this

More information

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.

More information

Introduction Wavelet shrinage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have be

Introduction Wavelet shrinage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have be Wavelet Estimation For Samples With Random Uniform Design T. Tony Cai Department of Statistics, Purdue University Lawrence D. Brown Department of Statistics, University of Pennsylvania Abstract We show

More information

Supervisory Control of Petri Nets with. Uncontrollable/Unobservable Transitions. John O. Moody and Panos J. Antsaklis

Supervisory Control of Petri Nets with. Uncontrollable/Unobservable Transitions. John O. Moody and Panos J. Antsaklis Supervisory Control of Petri Nets with Uncontrollable/Unobservable Transitions John O. Moody and Panos J. Antsaklis Department of Electrical Engineering University of Notre Dame, Notre Dame, IN 46556 USA

More information

Optimal control and estimation

Optimal control and estimation Automatic Control 2 Optimal control and estimation Prof. Alberto Bemporad University of Trento Academic year 2010-2011 Prof. Alberto Bemporad (University of Trento) Automatic Control 2 Academic year 2010-2011

More information

Adaptive Sparse System Identification Using Wavelets

Adaptive Sparse System Identification Using Wavelets 656 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 49, NO. 10, OCTOBER 2002 Adaptive Sparse System Identification Using Wavelets K. C. Ho, Senior Member, IEEE,

More information

Bobby Hunt, Mariappan S. Nadar, Paul Keller, Eric VonColln, and Anupam Goyal III. ASSOCIATIVE RECALL BY A POLYNOMIAL MAPPING

Bobby Hunt, Mariappan S. Nadar, Paul Keller, Eric VonColln, and Anupam Goyal III. ASSOCIATIVE RECALL BY A POLYNOMIAL MAPPING Synthesis of a Nonrecurrent Associative Memory Model Based on a Nonlinear Transformation in the Spectral Domain p. 1 Bobby Hunt, Mariappan S. Nadar, Paul Keller, Eric VonColln, Anupam Goyal Abstract -

More information

1 Introduction Blind source separation (BSS) is a fundamental problem which is encountered in a variety of signal processing problems where multiple s

1 Introduction Blind source separation (BSS) is a fundamental problem which is encountered in a variety of signal processing problems where multiple s Blind Separation of Nonstationary Sources in Noisy Mixtures Seungjin CHOI x1 and Andrzej CICHOCKI y x Department of Electrical Engineering Chungbuk National University 48 Kaeshin-dong, Cheongju Chungbuk

More information

ESE 531: Digital Signal Processing

ESE 531: Digital Signal Processing ESE 531: Digital Signal Processing Lec 22: April 10, 2018 Adaptive Filters Penn ESE 531 Spring 2018 Khanna Lecture Outline! Circular convolution as linear convolution with aliasing! Adaptive Filters Penn

More information

A FAST AND ACCURATE ADAPTIVE NOTCH FILTER USING A MONOTONICALLY INCREASING GRADIENT. Yosuke SUGIURA

A FAST AND ACCURATE ADAPTIVE NOTCH FILTER USING A MONOTONICALLY INCREASING GRADIENT. Yosuke SUGIURA A FAST AND ACCURATE ADAPTIVE NOTCH FILTER USING A MONOTONICALLY INCREASING GRADIENT Yosuke SUGIURA Tokyo University of Science Faculty of Science and Technology ABSTRACT In this paper, we propose a new

More information

October 7, :8 WSPC/WS-IJWMIP paper. Polynomial functions are renable

October 7, :8 WSPC/WS-IJWMIP paper. Polynomial functions are renable International Journal of Wavelets, Multiresolution and Information Processing c World Scientic Publishing Company Polynomial functions are renable Henning Thielemann Institut für Informatik Martin-Luther-Universität

More information

Adaptive linear quadratic control using policy. iteration. Steven J. Bradtke. University of Massachusetts.

Adaptive linear quadratic control using policy. iteration. Steven J. Bradtke. University of Massachusetts. Adaptive linear quadratic control using policy iteration Steven J. Bradtke Computer Science Department University of Massachusetts Amherst, MA 01003 bradtke@cs.umass.edu B. Erik Ydstie Department of Chemical

More information

computation of the algorithms it is useful to introduce some sort of mapping that reduces the dimension of the data set before applying signal process

computation of the algorithms it is useful to introduce some sort of mapping that reduces the dimension of the data set before applying signal process Optimal Dimension Reduction for Array Processing { Generalized Soren Anderson y and Arye Nehorai Department of Electrical Engineering Yale University New Haven, CT 06520 EDICS Category: 3.6, 3.8. Abstract

More information

Adaptive linear prediction filtering for random noise attenuation Mauricio D. Sacchi* and Mostafa Naghizadeh, University of Alberta

Adaptive linear prediction filtering for random noise attenuation Mauricio D. Sacchi* and Mostafa Naghizadeh, University of Alberta Adaptive linear prediction filtering for random noise attenuation Mauricio D. Sacchi* and Mostafa Naghizadeh, University of Alberta SUMMARY We propose an algorithm to compute time and space variant prediction

More information

The DFT as Convolution or Filtering

The DFT as Convolution or Filtering Connexions module: m16328 1 The DFT as Convolution or Filtering C. Sidney Burrus This work is produced by The Connexions Project and licensed under the Creative Commons Attribution License A major application

More information

Pade approximants and noise: rational functions

Pade approximants and noise: rational functions Journal of Computational and Applied Mathematics 105 (1999) 285 297 Pade approximants and noise: rational functions Jacek Gilewicz a; a; b;1, Maciej Pindor a Centre de Physique Theorique, Unite Propre

More information

Lie Groups for 2D and 3D Transformations

Lie Groups for 2D and 3D Transformations Lie Groups for 2D and 3D Transformations Ethan Eade Updated May 20, 2017 * 1 Introduction This document derives useful formulae for working with the Lie groups that represent transformations in 2D and

More information

3.4 Linear Least-Squares Filter

3.4 Linear Least-Squares Filter X(n) = [x(1), x(2),..., x(n)] T 1 3.4 Linear Least-Squares Filter Two characteristics of linear least-squares filter: 1. The filter is built around a single linear neuron. 2. The cost function is the sum

More information

GENERALIZED DEFLATION ALGORITHMS FOR THE BLIND SOURCE-FACTOR SEPARATION OF MIMO-FIR CHANNELS. Mitsuru Kawamoto 1,2 and Yujiro Inouye 1

GENERALIZED DEFLATION ALGORITHMS FOR THE BLIND SOURCE-FACTOR SEPARATION OF MIMO-FIR CHANNELS. Mitsuru Kawamoto 1,2 and Yujiro Inouye 1 GENERALIZED DEFLATION ALGORITHMS FOR THE BLIND SOURCE-FACTOR SEPARATION OF MIMO-FIR CHANNELS Mitsuru Kawamoto,2 and Yuiro Inouye. Dept. of Electronic and Control Systems Engineering, Shimane University,

More information