Sparse Least Mean Square Algorithm for Estimation of Truncated Volterra Kernels
Bijit Kumar Das (1), Mrityunjoy Chakraborty (2)
Department of Electronics and Electrical Communication Engineering
Indian Institute of Technology, Kharagpur, INDIA
E-mail: (1) bijitbijit@gmail.com, (2) mrityun@ece.iitkgp.ernet.in

Abstract - The Volterra series model, though a popular tool for modeling many practical nonlinear systems, suffers from the problem of over-parameterization: too many coefficients need to be identified, requiring very long data records. On the other hand, it is often observed that of all the model coefficients, only a few are prominent while the others are relatively insignificant. The sparsity inherent in such systems is, however, not exploited by standard estimators, which are based on minimizing some L2 norm such as the mean square error or the sum of squared errors. This paper draws inspiration from the domain of compressive sampling and proposes an adaptive algorithm for estimating sparse Volterra kernels by embedding an L1 norm penalty on the coefficients into the quadratic least mean squares (LMS) cost function. It is shown that the proposed algorithm can achieve a lower steady-state mean square error than that of a standard LMS based algorithm for identifying the Volterra model.

Index terms: Volterra series, L1 norm, sparse systems, LMS adaptation

I. INTRODUCTION

Adaptive identification of nonlinear systems has found many applications in areas like control, communications, biological signal processing, and image processing. For systems with sufficiently smooth nonlinearity, the Volterra series [1] offers a well-appreciated model of the system, expressing the output as a polynomial expansion of the input. The number of terms in the Volterra series, however, increases exponentially with the model order, and as a result a truncated model (up to 2nd order) is often considered in practice. The coefficients of such a model are then identified by an appropriate adaptive algorithm, e.g., the LMS algorithm [9], [10]. In various applications, however, one comes across sparse Volterra models that have several coefficients zero or negligible. Such a priori knowledge about the sparsity of the system, if embedded in the identification algorithm, can boost the performance of the algorithm. However, except for [2], sparsity has so far not been exploited in the identification of Volterra systems. In [2], new algorithms, both batch and recursive, have been developed; the recursive algorithm, being a variant of recursive least squares (RLS) [9], [10], carries the demerit of a heavy computational burden. This motivates us to develop an LMS based alternative which exploits the sparse nature of the Volterra system model.

II. PROBLEM FORMULATION AND ALGORITHM

A. LMS Algorithm for Truncated Volterra Series Model

The development of a gradient-type LMS adaptive algorithm for truncated Volterra series nonlinear models follows the same approach as for linear systems. The truncated p-th order Volterra series expansion is given as [1],

y(n) = h_0 + \sum_{m_1=0}^{N-1} h_1(m_1) x(n-m_1) + \sum_{m_1=0}^{N-1} \sum_{m_2=0}^{N-1} h_2(m_1, m_2) x(n-m_1) x(n-m_2) + \cdots + \sum_{m_1=0}^{N-1} \cdots \sum_{m_p=0}^{N-1} h_p(m_1, m_2, \ldots, m_p) x(n-m_1) x(n-m_2) \cdots x(n-m_p).  (1)

Assuming h_0 = 0 and p = 2, the weight vector of the adaptive filter at the n-th index is given by

H(n) = [h_1(0;n), h_1(1;n), \ldots, h_1(N-1;n), h_2(0,0;n), h_2(0,1;n), \ldots, h_2(0,N-1;n), h_2(1,1;n), \ldots, h_2(N-1,N-1;n)]^T.  (2)

Similarly, the input vector at the n-th index is given as

X(n) = [x(n), x(n-1), \ldots, x(n-N+1), x^2(n), x(n)x(n-1), \ldots, x(n)x(n-N+1), x^2(n-1), \ldots, x^2(n-N+1)]^T.  (3)

Linear and quadratic coefficients are updated separately by minimizing the instantaneous square of the error,

J(n) = e^2(n),  (4)

where

e(n) = d(n) - \hat{d}(n)  (5)

and \hat{d}(n) is the estimate of the desired response d(n).
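As a concrete illustration of the ordering in (3), the second-order input vector can be assembled directly from the most recent input samples. The sketch below (Python/NumPy rather than the MATLAB used in the paper; the names are illustrative) builds X(n) and verifies that it holds N + N(N+1)/2 entries, the symmetry-reduced coefficient count.

```python
import numpy as np

def volterra_input(x, n, N):
    """Second-order Volterra input vector X(n) of eq. (3):
    N linear taps x(n-m), then the upper-triangular products
    x(n-m1)*x(n-m2) with m1 <= m2 (symmetry halves the quadratic part)."""
    taps = np.array([x[n - m] for m in range(N)])
    quad = [taps[m1] * taps[m2] for m1 in range(N) for m2 in range(m1, N)]
    return np.concatenate([taps, quad])

x = np.random.randn(100)
X = volterra_input(x, 50, 15)
# 15 linear + 15*16/2 = 120 quadratic = 135 entries (136 counting h_0)
print(len(X))
```

For N = 15 this gives the 135 adaptive coefficients of the paper's experiments (136 kernel coefficients when the constant term h_0 is counted as well).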
This results in the following update equations:

h_1(m_1; n+1) = h_1(m_1; n) - \frac{\mu}{2} \frac{\partial e^2(n)}{\partial h_1(m_1; n)} = h_1(m_1; n) + \mu e(n) x(n-m_1),  (6)

APSIPA. All rights reserved. Proceedings of the Second APSIPA Annual Summit and Conference, Biopolis, Singapore, December 2010.
and

h_2(m_1, m_2; n+1) = h_2(m_1, m_2; n) - \frac{\mu}{2} \frac{\partial e^2(n)}{\partial h_2(m_1, m_2; n)} = h_2(m_1, m_2; n) + \mu e(n) x(n-m_1) x(n-m_2),  (7)

where \mu is the so-called step size, used to control the speed of convergence and to ensure stability of the filter.

Fig. 1. Second order Volterra series model with N = 3.

Using the weight vector notation H(n), we can combine the two update equations into a single coefficient update equation:

e(n) = d(n) - H^T(n) X(n),  (8)

H(n+1) = H(n) + \mu X(n) e(n),  (9)

where the value of \mu is chosen such that

0 < \mu < \frac{2}{\lambda_{max}},  (10)

with \lambda_{max} denoting the maximum eigenvalue of the autocorrelation matrix of the input vector X(n). For nonlinear Volterra filters, the eigenvalue spread of this autocorrelation matrix is quite large, which leads to slow convergence. Note that the symmetry of the coefficients reduces the length of the coefficient vector by about half.

B. Sparse Nature of Volterra Kernels

In many applications, the associated Volterra kernels are sparse, meaning that many of the entries of H(n) are zero. Consider, for example, the Linear-Nonlinear-Linear (LNL) model employed in various applications such as modeling the effects of nonlinear amplifiers in OFDM, the satellite communication channel, or the transfer function of loudspeakers and headphones. The LNL model consists of a linear filter h_a(k), k = 0, 1, \ldots, L_a - 1, in cascade with a memoryless nonlinearity f(x), followed by a second linear filter h_b(k), k = 0, 1, \ldots, L_b - 1. The overall memory is thus L = L_a + L_b - 1. If the nonlinear function is analytic on an open set (a, b), it admits a Taylor series expansion:

f(x) = \sum_{p=0}^{\infty} c_p x^p, \quad x \in (a, b).

It can then be shown that the p-th order Volterra kernel is given by [1]

h_p(k_1, k_2, \ldots, k_p) = c_p \sum_{k=0}^{L_b - 1} h_b(k) h_a(k_1 - k) \cdots h_a(k_p - k).  (11)

In (11), there exist p-tuples (k_1, k_2, \ldots, k_p) for which there is no k \in \{0, \ldots, L_b - 1\} such that (k_i - k) \in \{0, \ldots, L_a - 1\} for all i = 1, \ldots, p. For these p-tuples, the Volterra kernel equals zero.
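To see how (11) produces zero kernel entries, the following sketch (Python rather than MATLAB; the filters h_a, h_b and the Taylor coefficient c_2 are illustrative choices, not the paper's) evaluates the second-order kernel of a small LNL cascade and counts its nonzero entries.

```python
import numpy as np

# Illustrative LNL cascade: sparse first filter, short second filter.
h_a = np.array([1.0, 0.0, 0.5, 0.0])   # first linear filter, L_a = 4
h_b = np.array([1.0, -0.3])            # second linear filter, L_b = 2
c2 = 0.4                               # second-order Taylor coefficient
L = len(h_a) + len(h_b) - 1            # overall memory L = L_a + L_b - 1 = 5

def pad(h, k):
    # h(k) with zero outside the filter's support
    return h[k] if 0 <= k < len(h) else 0.0

# Eq. (11) for p = 2: h_2(k1, k2) = c2 * sum_k h_b(k) h_a(k1-k) h_a(k2-k)
H2 = np.array([[c2 * sum(h_b[k] * pad(h_a, k1 - k) * pad(h_a, k2 - k)
                         for k in range(len(h_b)))
                for k2 in range(L)] for k1 in range(L)])
print(np.count_nonzero(H2), "nonzero of", H2.size)  # → 8 nonzero of 25
```

Because h_a itself is sparse, most (k1, k2) pairs admit no k aligning both factors with the support of h_a, and the corresponding kernel entries vanish exactly.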
Further, if the second filter in the LNL model is dropped, one obtains the so-called Wiener model, for which the p-th order Volterra kernel is expressed as

h_p(k_1, \ldots, k_p) = c_p h_a(k_1) \cdots h_a(k_p).  (12)

Due to the separability of the kernel in (12), if the impulse response h_a(k) is itself sparse, the Volterra kernel becomes even sparser. Apart from these nonlinear systems with special structure, it has been observed that in many applications only a few kernel coefficients contribute to the output [3]. Furthermore, sparsity of the Volterra representation can also arise when the degree of the nonlinearity and the system memory are not known a priori; in this case, kernel estimation must be performed jointly with model order selection. Based on these considerations, exploiting the sparsity present in many Volterra representations is well motivated.

C. A Sparsity-Aware Variant of LMS for Volterra Kernel Estimation

In the proposed method, we employ L1 norm regularization to exploit the a priori information that the Volterra model is over-parameterized and sparse. By combining the L1 norm penalty of the coefficient vector with the instantaneous squared error in (4), a new cost function J_1(n) can be defined as

J_1(n) = e^2(n) + \gamma \| H(n) \|_1,  (13)

where \| \cdot \|_1 denotes the L1 norm of the vector considered. Using gradient descent updating, the new filter update is then obtained as

H(n+1) = H(n) - \mu \frac{\partial J_1(n)}{\partial H(n)} = H(n) + \mu X(n) e(n) - \rho \, \mathrm{sign}(H(n)),  (14)

where \rho = \mu \gamma and \mathrm{sign}(\cdot) is a component-wise sign function defined as

\mathrm{sign}(x) = x / |x| for x \neq 0, and \mathrm{sign}(x) = 0 for x = 0.  (15)
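A single iteration of the zero-attracting update (14) can be sketched as follows (Python; the step size and \rho values here are illustrative, not the paper's):

```python
import numpy as np

def sparse_lms_step(H, X, d, mu=0.01, rho=1e-4):
    """One iteration of the sparsity-aware update (14): standard LMS plus
    the zero-attracting term -rho*sign(H). mu and rho are illustrative."""
    e = d - H @ X                            # a priori error, eq. (8)
    H = H + mu * e * X - rho * np.sign(H)    # eq. (14); np.sign(0) = 0 as in (15)
    return H, e

H = np.zeros(3)
X = np.array([1.0, -1.0, 0.5])
H, e = sparse_lms_step(H, X, d=1.0)
print(H)   # coefficients move along mu*e*X while the sign term pulls them to zero
```

Note that NumPy's `np.sign` already implements the convention of (15), returning 0 at 0, so no special case is needed in code.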
Comparing with (9), (14) has the additional term -\rho \, \mathrm{sign}(H(n)), which always attracts the tap coefficients towards zero; in other words, it exploits the sparse nature of the system model. This update equation is an extension of the ZA-LMS algorithm for linear sparse systems [4] to nonlinear, over-parameterized Volterra kernels. Following steps analogous to [4], the mean coefficient vector E[H(n)] can be shown to converge as

E[H(\infty)] = H_{opt} - \frac{\rho}{\mu} R^{-1} E[\mathrm{sign}(H(\infty))]  (16)

if \mu satisfies (10). Similarly, the steady-state excess mean square error in this case is given by

P_{ex}(\infty) = \frac{\eta}{2 - \eta} P_0 + \frac{\alpha_1}{(2 - \eta)\mu} \rho \left( \rho - \frac{2\alpha_2}{\alpha_1} \right),  (17)

where

\alpha_1 = E[\mathrm{sign}(H(\infty))^T (I - \mu R)^{-1} \mathrm{sign}(H(\infty))]  (18)

with I denoting the identity matrix, R the autocorrelation matrix of the input, P_0 the minimum mean square error, and \eta = \mathrm{Tr}(\mu R (I - \mu R)^{-1}), and

\alpha_2 = E[\| H(\infty) \|_1] - \| H_{opt} \|_1.  (19)

[Derivations of (16)-(19) are skipped in this paper and will be provided in the revised version of the manuscript.]

For highly sparse systems, if \rho is properly selected between 0 and 2\alpha_2 / \alpha_1, a lower MSE than that obtainable under the standard LMS algorithm will be observed.

III. SIMULATION STUDIES

A. Linear-Nonlinear-Linear (LNL) Model

Fig. 2. General nonlinear model (LNL).

The proposed algorithm was simulated using MATLAB. First, an LNL model was constructed as shown in Fig. 2, having a linear FIR filter with impulse response h(n) = [0.9, 0, 0.87, 0, 0.3, 0.2, 0, 0]^T, in cascade with the memoryless nonlinearity f(x) = 0.4x^2 + x, which is followed by the same linear filter h(n). This system is exactly described by a Volterra expansion with N = 15 and p = 2, leading to a total of 136 kernel coefficients stored in the vector H. Out of these, only a few kernel coefficients are nonzero.
The system input was taken as a zero mean, unit variance white Gaussian process (i.e., N(0,1)), while the output was corrupted by additive white Gaussian noise with zero mean and variance 0.001 (i.e., N(0, 0.001)), leading to a signal to noise ratio (SNR) of 30 dB. Fig. 3 shows the learning curves, obtained by plotting the observed mean square error (MSE), averaged over 3000 experiments, against the iteration index n for the following cases: (i) the standard LMS algorithm, as given by (9) [the blue curve], and (ii) the proposed sparse LMS algorithm, given by (14) [the red curve], with the same step size \mu used in both cases. While the convergence rates are almost identical for the two cases, as is to be expected since the same value of \mu is used for both, the proposed sparse LMS algorithm quite clearly attains a lower steady-state mean square error than the standard LMS algorithm.

B. Wiener Model

Fig. 4. The Wiener nonlinear model.

Next we considered a Wiener model, which is a cascade of a linear filter and a memoryless nonlinearity, as shown in Fig. 4. For this simulation, the linear filter chosen had the impulse response h(n) = [0.9, 0, 0.87, 0, 0.3, 0.2, 0, 0, 0, 0, 0, 0, 0.514, 0.95, 0.12]^T and the memoryless nonlinearity was given by f(x) = 0.4x^2 + x. This system too is exactly described by a Volterra expansion with N = 15 and p = 2, leading to a total of 136 kernel coefficients, out of which only a few are nonzero. As before, the system input was taken as a zero mean, unit variance white Gaussian process, and the output noise was taken to be zero mean, white Gaussian with variance 0.001, resulting in an SNR of 30 dB. The corresponding learning curves, averaged over 3000 experiments, are shown in Fig. 5, both for the standard LMS (the blue curve) and the proposed sparse LMS (the red curve), the same step size \mu being used for both. The sparse LMS shows a lower steady-state mean square error.
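The Wiener-model experiment can be reproduced in miniature as follows. This is a scaled-down sketch (Python instead of MATLAB, smaller memory N = 5, an illustrative sparse filter, illustrative \mu and \rho, and no ensemble averaging), not the paper's exact setup.

```python
import numpy as np

# Identify a sparse second-order Volterra system (Wiener model) with
# standard LMS (9) vs. the zero-attracting variant (14).
rng = np.random.default_rng(0)
N = 5                                       # memory (the paper uses N = 15)
h_a = np.array([0.9, 0.0, 0.5, 0.0, 0.0])   # sparse linear part, illustrative

def volterra_X(xbuf):
    # Second-order input vector of eq. (3) for buffer [x(n), ..., x(n-N+1)]
    quad = [xbuf[i] * xbuf[j] for i in range(N) for j in range(i, N)]
    return np.concatenate([xbuf, quad])

M = N + N * (N + 1) // 2                    # 5 + 15 = 20 kernel coefficients
H_lms, H_za = np.zeros(M), np.zeros(M)
mu, rho = 0.005, 5e-5                       # illustrative step size / attractor
for n in range(20000):
    xbuf = rng.standard_normal(N)           # fresh white Gaussian input block
    s = h_a @ xbuf
    d = 0.4 * s**2 + s + 0.03 * rng.standard_normal()  # f(x) = 0.4x^2 + x + noise
    X = volterra_X(xbuf)
    e1 = d - H_lms @ X                      # standard LMS, eq. (9)
    H_lms += mu * e1 * X
    e2 = d - H_za @ X                       # sparse LMS, eq. (14)
    H_za += mu * e2 * X - rho * np.sign(H_za)

# The zero-attractor shrinks the many zero kernel entries toward zero,
# so the sparse estimate ends up with a smaller L1 norm.
print(np.sum(np.abs(H_lms)), np.sum(np.abs(H_za)))
```

Both filters converge to the true sparse kernel; the difference shows up in the residual fluctuation of the zero coefficients, which the sign term suppresses, mirroring the lower steady-state MSE reported in Figs. 3 and 5.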
Quite clearly, under the same convergence rate, the proposed algorithm exhibits a considerably lower steady-state mean square error than the standard LMS algorithm.

IV. CONCLUSIONS

An algorithm has been presented for adaptive identification of nonlinear systems described by sparse, truncated Volterra kernels. The algorithm introduces an L1 norm penalty on the filter coefficients in the instantaneous squared error cost and derives an LMS-like algorithm that forces the insignificant coefficients to converge to zero faster. Simulation results showing the superiority of the proposed method over LMS are provided.
Fig. 3. MSE versus iteration index n for the general nonlinear (LNL) model: sparsity-aware LMS vs. standard LMS.

Fig. 5. MSE versus iteration index n for the nonlinear Wiener model: sparsity-aware LMS vs. standard LMS.
REFERENCES

[1] V. Mathews and G. Sicuranza, Polynomial Signal Processing, John Wiley and Sons, 2000.
[2] V. Kekatos, D. Angelosante, and G. B. Giannakis, "Sparsity-aware estimation of nonlinear Volterra kernels," in Proc. 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Aruba, Dutch Antilles, 2009.
[3] S. Benedetto and E. Biglieri, "Nonlinear equalization of digital satellite channels," IEEE J. Select. Areas Commun., vol. 1, no. 1, pp. 57-62, Jan. 1983.
[4] Y. Chen, Y. Gu, and A. O. Hero, "Sparse LMS for system identification," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Taipei, Taiwan, Apr. 2009.
[5] T. Ogunfunmi, Adaptive Nonlinear System Identification: The Volterra and Wiener Model Approaches, Springer, 2007.
[6] R. Tibshirani, "Regression shrinkage and selection via the lasso," J. Royal Statist. Soc. B, vol. 58, pp. 267-288, 1996.
[7] E. Candès, "Compressive sampling," in Proc. Int. Congress of Mathematicians, vol. 3, pp. 1433-1452, 2006.
[8] R. Baraniuk, "Compressive sensing," IEEE Signal Processing Magazine, vol. 25, pp. 21-30, March 2008.
[9] S. Haykin, Adaptive Filter Theory, 3rd ed., Prentice Hall, 1996.
[10] B. Farhang-Boroujeny, Adaptive Filters: Theory and Applications, John Wiley and Sons, 1998.
[11] D. G. Manolakis, V. K. Ingle, and S. M. Kogon, Statistical and Adaptive Signal Processing, McGraw-Hill, 2000.
[12] D. Angelosante, J. A. Bazerque, and G. B. Giannakis, "Online adaptive estimation of sparse signals: where RLS meets the l1-norm," IEEE Transactions on Signal Processing (to appear).
[13] A. H. Sayed, Fundamentals of Adaptive Filtering, John Wiley and Sons, 2003.
More informationSystem Identification in the Short-Time Fourier Transform Domain
System Identification in the Short-Time Fourier Transform Domain Electrical Engineering Department Technion - Israel Institute of Technology Supervised by: Prof. Israel Cohen Outline Representation Identification
More informationRecursive l 1, Group lasso
Recursive l, Group lasso Yilun Chen, Student Member, IEEE, Alfred O. Hero, III, Fellow, IEEE arxiv:.5734v [stat.me] 29 Jan 2 Abstract We introduce a recursive adaptive group lasso algorithm for real-time
More informationRecursive Least Squares for an Entropy Regularized MSE Cost Function
Recursive Least Squares for an Entropy Regularized MSE Cost Function Deniz Erdogmus, Yadunandana N. Rao, Jose C. Principe Oscar Fontenla-Romero, Amparo Alonso-Betanzos Electrical Eng. Dept., University
More informationMinimax MMSE Estimator for Sparse System
Proceedings of the World Congress on Engineering and Computer Science 22 Vol I WCE 22, October 24-26, 22, San Francisco, USA Minimax MMSE Estimator for Sparse System Hongqing Liu, Mandar Chitre Abstract
More informationLinear Models for Regression
Linear Models for Regression Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr
More informationToday. ESE 531: Digital Signal Processing. IIR Filter Design. Impulse Invariance. Impulse Invariance. Impulse Invariance. ω < π.
Today ESE 53: Digital Signal Processing! IIR Filter Design " Lec 8: March 30, 207 IIR Filters and Adaptive Filters " Bilinear Transformation! Transformation of DT Filters! Adaptive Filters! LMS Algorithm
More informationStatistical and Adaptive Signal Processing
r Statistical and Adaptive Signal Processing Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing Dimitris G. Manolakis Massachusetts Institute of Technology Lincoln Laboratory
More informationCh4: Method of Steepest Descent
Ch4: Method of Steepest Descent The method of steepest descent is recursive in the sense that starting from some initial (arbitrary) value for the tap-weight vector, it improves with the increased number
More informationSystem Identification in the Short-Time Fourier Transform Domain
System Identification in the Short-Time Fourier Transform Domain Yekutiel Avargel System Identification in the Short-Time Fourier Transform Domain Research Thesis As Partial Fulfillment of the Requirements
More informationError Vector Normalized Adaptive Algorithm Applied to Adaptive Noise Canceller and System Identification
American J. of Engineering and Applied Sciences 3 (4): 710-717, 010 ISSN 1941-700 010 Science Publications Error Vector Normalized Adaptive Algorithm Applied to Adaptive Noise Canceller and System Identification
More informationA Flexible ICA-Based Method for AEC Without Requiring Double-Talk Detection
APSIPA ASC 2011 Xi an A Flexible ICA-Based Method for AEC Without Requiring Double-Talk Detection Marko Kanadi, Muhammad Tahir Akhtar, Wataru Mitsuhashi Department of Information and Communication Engineering,
More informationAssesment of the efficiency of the LMS algorithm based on spectral information
Assesment of the efficiency of the algorithm based on spectral information (Invited Paper) Aaron Flores and Bernard Widrow ISL, Department of Electrical Engineering, Stanford University, Stanford CA, USA
More informationNew Recursive-Least-Squares Algorithms for Nonlinear Active Control of Sound and Vibration Using Neural Networks
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 1, JANUARY 2001 135 New Recursive-Least-Squares Algorithms for Nonlinear Active Control of Sound and Vibration Using Neural Networks Martin Bouchard,
More informationVarious Nonlinear Models and their Identification, Equalization and Linearization
Various Nonlinear Models and their Identification, Equalization and Linearization A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Technology in Telematics and
More informationRiccati difference equations to non linear extended Kalman filter constraints
International Journal of Scientific & Engineering Research Volume 3, Issue 12, December-2012 1 Riccati difference equations to non linear extended Kalman filter constraints Abstract Elizabeth.S 1 & Jothilakshmi.R
More informationSTOCHASTIC INFORMATION GRADIENT ALGORITHM BASED ON MAXIMUM ENTROPY DENSITY ESTIMATION. Badong Chen, Yu Zhu, Jinchun Hu and Ming Zhang
ICIC Express Letters ICIC International c 2009 ISSN 1881-803X Volume 3, Number 3, September 2009 pp. 1 6 STOCHASTIC INFORMATION GRADIENT ALGORITHM BASED ON MAXIMUM ENTROPY DENSITY ESTIMATION Badong Chen,
More informationLMS and eigenvalue spread 2. Lecture 3 1. LMS and eigenvalue spread 3. LMS and eigenvalue spread 4. χ(r) = λ max λ min. » 1 a. » b0 +b. b 0 a+b 1.
Lecture Lecture includes the following: Eigenvalue spread of R and its influence on the convergence speed for the LMS. Variants of the LMS: The Normalized LMS The Leaky LMS The Sign LMS The Echo Canceller
More informationOld painting digital color restoration
Old painting digital color restoration Michail Pappas Ioannis Pitas Dept. of Informatics, Aristotle University of Thessaloniki GR-54643 Thessaloniki, Greece Abstract Many old paintings suffer from the
More informationVSS-LMS Algorithms for Multichannel System Identification Using Volterra Filtering Sandipta Dutta Gupta 1, A.K. Kohli 2
VSS-LMS Algorithms for Multichannel System Identification Using Volterra Filtering Sipta Dutta Gupta 1, A.K. Kohli 2 1,2 (Electronics Communication Engineering Department, Thapar University, India) ABSTRACT:
More information3.4 Linear Least-Squares Filter
X(n) = [x(1), x(2),..., x(n)] T 1 3.4 Linear Least-Squares Filter Two characteristics of linear least-squares filter: 1. The filter is built around a single linear neuron. 2. The cost function is the sum
More informationarxiv: v1 [cs.sd] 28 Feb 2017
Nonlinear Volterra Model of a Loudspeaker Behavior Based on Laser Doppler Vibrometry Alessandro Loriga, Parvin Moyassari, and Daniele Bernardini Intranet Standard GmbH, Ottostrasse 3, 80333 Munich, Germany
More informationConvergence Evaluation of a Random Step-Size NLMS Adaptive Algorithm in System Identification and Channel Equalization
Convergence Evaluation of a Random Step-Size NLMS Adaptive Algorithm in System Identification and Channel Equalization 1 Shihab Jimaa Khalifa University of Science, Technology and Research (KUSTAR) Faculty
More informationIndependent Component Analysis. Contents
Contents Preface xvii 1 Introduction 1 1.1 Linear representation of multivariate data 1 1.1.1 The general statistical setting 1 1.1.2 Dimension reduction methods 2 1.1.3 Independence as a guiding principle
More informationACCORDING to Shannon s sampling theorem, an analog
554 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 2, FEBRUARY 2011 Segmented Compressed Sampling for Analog-to-Information Conversion: Method and Performance Analysis Omid Taheri, Student Member,
More informationA new structure for nonlinear narrowband active noise control using Volterra filter
A new structure for nonlinear narrowband active noise control using Volterra filter Jian LIU 1 ; Yegui XIAO 2 ; Hui CHEN 1 ; Wenbo LIU 1 1 Nanjing University of Aeronautics and Astronautics, Nanjing, China
More informationKochi University of Technology Aca Developments of Adaptive Filter A Title parse Channel Estimation Author(s) LI, Yingsong Citation 高知工科大学, 博士論文. Date of 2014-03 issue URL http://hdl.handle.net/10173/1119
More informationResearch Overview. Kristjan Greenewald. February 2, University of Michigan - Ann Arbor
Research Overview Kristjan Greenewald University of Michigan - Ann Arbor February 2, 2016 2/17 Background and Motivation Want efficient statistical modeling of high-dimensional spatio-temporal data with
More informationELEG 833. Nonlinear Signal Processing
Nonlinear Signal Processing ELEG 833 Gonzalo R. Arce Department of Electrical and Computer Engineering University of Delaware arce@ee.udel.edu February 15, 2005 1 INTRODUCTION 1 Introduction Signal processing
More informationTitle without the persistently exciting c. works must be obtained from the IEE
Title Exact convergence analysis of adapt without the persistently exciting c Author(s) Sakai, H; Yang, JM; Oka, T Citation IEEE TRANSACTIONS ON SIGNAL 55(5): 2077-2083 PROCESS Issue Date 2007-05 URL http://hdl.handle.net/2433/50544
More informationOn the Use of A Priori Knowledge in Adaptive Inverse Control
54 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS PART I: FUNDAMENTAL THEORY AND APPLICATIONS, VOL 47, NO 1, JANUARY 2000 On the Use of A Priori Knowledge in Adaptive Inverse Control August Kaelin, Member,
More informationDepartment of Electrical and Electronic Engineering
Imperial College London Department of Electrical and Electronic Engineering Final Year Project Report 27 Project Title: Student: Course: Adaptive Echo Cancellation Pradeep Loganathan ISE4 Project Supervisor:
More information