Reduced-cost combination of adaptive filters for acoustic echo cancellation

Luis A. Azpicueta-Ruiz and Jerónimo Arenas-García
Dept. Signal Theory and Communications, Universidad Carlos III de Madrid, Leganés, Madrid, 28911, Spain
Email: {azpicueta,jarenas}@tsc.uc3m.es

Vítor H. Nascimento and Magno T. M. Silva
Dept. Electronic Systems Engineering, Escola Politécnica, Universidade de São Paulo, São Paulo, 05508-010, Brazil
Email: {vitor,magno}@lps.usp.br

Abstract—Adaptive combination of adaptive filters is an attractive approach that improves the performance attainable with individual filters. However, the computational cost associated with the operation of several filters in parallel can be unaffordable for some applications. In this paper, we propose a reduced-cost scheme for a combination of two adaptive filters with different step sizes, aimed at scenarios with a low signal-to-noise ratio where the energy of the unknown system is concentrated at short delays. For this case, we propose to reduce the length of the fast filter, speeding up the convergence of the combination at the cost of a slight degradation of the steady-state error. We study and evaluate the performance of the combined scheme considering the length of the fast filter as a parameter. In addition, we discuss a possible application of this scheme to real-world acoustic echo cancellation for hands-free systems in cars.

Fig. 1. Block diagram of a basic acoustic echo canceller.

I. INTRODUCTION

Adaptive filters (AFs) have become a crucial component in many signal processing applications, such as acoustic signal processing, communications, biomedical signal processing, and prediction of time series, among others [1], [2]. Focusing on acoustic echo cancellation, depicted in Fig. 1, an AF is usually employed to cancel the acoustic echo received by the microphone, preventing the far-end user from perceiving a delayed version of his/her own voice.
The acoustic echo canceller generates a copy y(n), as faithful as possible, of the signal y_h(n), employing both the input signal vector u(n) and an estimate w(n) of the room impulse response (RIR) h. If the cancellation were perfect, the error e(n) = d(n) - y(n) would equal the background noise at the microphone location, e_0(n), where d(n) = y_h(n) + e_0(n) = h^T u(n) + e_0(n) is the desired signal.

Although AFs are a powerful tool in many signal processing applications, their performance highly depends on the specific tuning of several parameters, giving rise to different trade-offs. For instance, the selection of the step size in least-mean-square (LMS) filters [or of the forgetting factor in recursive least-squares (RLS) algorithms] imposes a well-known compromise among speed of convergence, residual steady-state error, and tracking capabilities. In order to alleviate these trade-offs, a novel approach denominated adaptive combination of adaptive filters was recently presented. This scheme, depicted in Fig. 2 for the case of two AFs, is based on the combination of several independent AFs that differ in the specific tuning of some of their parameters [3]. The combination adaptively mixes the individual outputs of the constituent filters by means of one (or several) mixing parameter(s). If the combination performs correctly, it behaves at least as well as the best component filter.

Fig. 2. Block diagram of the adaptive combination of two adaptive filters.

The combination-of-filters approach has been successfully employed in different applications, such as acoustic echo cancellation [4], beamforming [5], blind equalization [6], and adaptive diffusion networks [7], among others. However, the standard combination of filters (Fig. 2) presents a disadvantage that could limit its utilization in some real-world applications: its computational cost is about N times that of a single AF, N being the number of constituent filters¹.

In this paper, we focus on reducing the computational cost of a standard combination of AFs similar to that proposed in [3], where two AFs with different step sizes (µ_1 and µ_2 = µ_1/10) are combined to alleviate the compromise imposed by the selection of the step-size parameter. The combined scheme presents the fast convergence of the filter with µ_1 and the lower steady-state error of the slower component. Several proposals have addressed the reduction of the computational cost of a combination of two filters working in parallel. For instance, in [8], a combination was proposed where the fast AF only updates the coefficients with the highest energy; however, this scheme requires an additional operation to select the taps to be updated at each iteration. Other algorithms include mechanisms to freeze the adaptation when a component filter has converged, again needing additional operations to determine when an AF has reached the steady state [9]. In [10], a combination of AFs with different resolutions was proposed, permitting savings in computational cost with respect to the standard combination.

In this paper, we study the reduction of the computational cost of a combination of filters seeking not to include additional operations, but to take advantage of the characteristics of the filtering scenario. To this end, we evaluate the influence of the length of the fast filter on the performance of the combination. As will be shown, when the unknown plant h presents an exponentially decreasing energy distribution and the signal-to-noise ratio (SNR) of the scenario is low, a fast component shorter than h barely affects the steady-state error of the overall combination, while significantly reducing the total computational cost. The rest of the paper is organized as follows.
Section II describes a combination of two normalized LMS (NLMS) filters, focusing on the effect of the length of the fast component. Section III provides empirical evidence about the influence of the filter length considering different SNRs. The paper finishes with the conclusions of our work and some lines for future research, including a discussion of a real scenario where this scheme can be successfully employed.

II. ADAPTIVE COMBINATION OF TWO NLMS FILTERS WITH DIFFERENT STEP SIZES AND LENGTHS

In order to design a convex combination of two AFs that behaves robustly with respect to the selection of the step size, and with reduced computational cost, we propose to combine a fast AF w_1(n), with step size µ_1 and length M_1, and a slow AF w_2(n), with step size µ_2 = µ_1/10 and length M_2 = M, where M is the length of the unknown plant h. In the case M_1 = M_2 = M, the proposed scheme becomes the standard combination. However, we aim to reduce the computational cost of the combination by employing a shorter fast component, i.e., M_1 < M_2.

¹If the length of each component filter is large enough, the cost of updating the mixing parameters and combining the component outputs is negligible.

A. Combination of two NLMS filters with different step sizes

Each component filter adapts independently in order to minimize the power of its own error signal e_i(n) = d(n) - y_i(n) = d(n) - w_i^T(n-1) u_i(n), i = 1, 2, where y_i(n) is the output and u_i(n) the input vector of each component². To this end, each constituent filter implements the NLMS algorithm [1], [2]

w_i(n) = w_i(n-1) + [µ_i / (δ + ||u_i(n)||^2)] e_i(n) u_i(n),   (1)

where δ is a small positive constant that prevents division by very small values of the input energy ||u_i(n)||^2.
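For concreteness, one iteration of the NLMS recursion (1) can be sketched in a few lines of Python/NumPy. The helper `nlms_step` below is hypothetical code written for this transcription, not code from the paper:

```python
import numpy as np

def nlms_step(w, u, d, mu, delta=1e-6):
    """One NLMS iteration, Eq. (1): returns the updated weight vector and
    the a-priori error e(n) = d(n) - w^T(n-1) u(n)."""
    e = d - w @ u                           # a-priori error e_i(n)
    w = w + (mu / (delta + u @ u)) * e * u  # normalized gradient step
    return w, e
```

With a white input and no background noise, repeatedly applying `nlms_step` drives w toward the unknown plant, faster for larger µ (up to the stability limit).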
We adaptively mix the outputs y_1(n) and y_2(n) by means of a convex combination

y_c(n) = λ(n) y_1(n) + [1 - λ(n)] y_2(n),   (2)

where λ(n) ∈ [0, 1] is a mixing parameter adapted to pursue the minimization of the power of the combined error e_c(n) = λ(n) e_1(n) + [1 - λ(n)] e_2(n). However, instead of directly updating λ(n), we adapt an auxiliary mixing parameter a(n), univocally related to λ(n) by means of the sigmoid function λ(n) = sgm{a(n)} = [1 + e^{-a(n)}]^{-1}. Further details about the relation between a(n) and λ(n) can be found, e.g., in [3]. Adaptation of a(n) is carried out following a stochastic gradient descent algorithm to minimize e_c^2(n), giving rise to

a(n) = a(n-1) + [µ_a / p(n)] e_c(n) [e_2(n) - e_1(n)] λ(n) [1 - λ(n)],   (3)

where µ_a is a step size that governs the update of a(n), and p(n) = β p(n-1) + (1 - β) [e_2(n) - e_1(n)]^2, with 0 ≪ β < 1, is a low-pass filtered estimate of the power of the signal [e_2(n) - e_1(n)], used to normalize the update of a(n) and ease the selection of µ_a [11].

B. Reducing the length of w_1(n)

A possible strategy to diminish the computational cost of the combination of filters is to reduce the length of one (or both) of the component filters. Regarding the properties of an NLMS filter with respect to its length, it is known that:
- The larger the length of the AF, the slower the speed of convergence [1].
- The steady-state error is constant as long as M_1 ≥ M, a consequence of the normalization in (1). However, in the case of undermodeling, i.e., M_1 < M, the steady-state error increases, since some coefficients of the unknown plant are not identified; this increment is directly related to the total energy of those non-identified coefficients.

In the literature, there exist different theoretical models that allow predicting the excess mean-square error EMSE_i(n) = E{[e_i(n) - e_0(n)]^2}, where E{·} denotes expected value.
²It should be noted that the first M_1 values of u_2(n) match those of u_1(n).

For instance, in [1], following the energy conservation method and assuming Gaussian input and independent background noise, the steady-state EMSE of an NLMS filter is calculated as

EMSE_i,min(∞) = µ_i σ_0^2 / (2 - µ_i),   (4)

where σ_0^2 represents the variance of the background noise e_0(n). Equation (4) gives the minimum achievable EMSE, attained when M_1 ≥ M. However, in the case of undermodeling of the fast component, i.e., M_1 < M, its steady-state EMSE can be calculated as [1], [12]

EMSE_1(∞) = µ_1 σ_0^2 / (2 - µ_1) + σ_u^2 ||ĥ||^2 [1 + µ_1 / (2 - µ_1)],   (5)

where the first and second terms are EMSE_1,min(∞) and EMSE_1,um(∞), respectively, σ_u^2 stands for the variance of the input signal, and ||ĥ||^2 represents the energy of the last M - M_1 coefficients of h, which are not modeled by w_1(n).

Paying attention to (5), it is clear that if EMSE_1,um(∞) ≪ EMSE_1,min(∞), a reduction of the length of the fast component only slightly degrades its steady-state error, so that EMSE_1(∞) ≈ EMSE_1,min(∞). In the light of (5), this condition is easily fulfilled when:
1) the energy of the last M - M_1 coefficients of h is low;
2) the variance of e_0(n), σ_0^2, is high, i.e., the SNR is low;
3) the step size µ_1 of the NLMS filter is large.

The first and second conditions make our proposal most interesting for scenarios with a low SNR and where the energy of h decreases exponentially, as is the case for acoustic impulse responses. Following the third condition, and in order to diminish the computational cost of the combination, we only reduce the length of the fast component. In addition, this allows w_1(n) to converge even faster, enhancing the convergence speed of the combination.

III. EXPERIMENTAL EVALUATION

In this section, we provide experimental evidence of the benefits of our proposal in terms of computational savings, obtained at the cost of a slight degradation of the performance of the combination of filters (more specifically, of its fast component).
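The condition EMSE_1,um(∞) ≪ EMSE_1,min(∞) is easy to probe numerically from (4) and (5). The sketch below uses purely illustrative assumptions (a length-512 plant with a chosen exponential decay, unit input power, and SNR defined against unit echo power), not the paper's configuration:

```python
import numpy as np

def emse_terms(mu, noise_var, input_var, tail_energy):
    """The two terms of Eq. (5) for an undermodeled NLMS filter:
    (EMSE_min, EMSE_um). tail_energy is ||h_hat||^2, the energy of the
    M - M_1 unmodeled coefficients of the plant."""
    emse_min = mu * noise_var / (2.0 - mu)                        # Eq. (4)
    emse_um = input_var * tail_energy * (1.0 + mu / (2.0 - mu))   # undermodeling term
    return emse_min, emse_um

# Hypothetical exponentially decaying plant; fast filter keeps first 256 taps.
h = np.exp(-np.arange(512) / 64.0)
h /= np.linalg.norm(h)                 # normalize so ||h||^2 = 1
tail = np.sum(h[256:] ** 2)            # unmodeled energy, roughly e^{-8}

for snr_db in (0.0, 30.0):
    noise_var = 10.0 ** (-snr_db / 10.0)   # sigma_0^2, since echo power is 1
    e_min, e_um = emse_terms(0.5, noise_var, 1.0, tail)
    print(snr_db, e_um / e_min)            # ratio ~1e-3 at 0 dB, above 1 at 30 dB
```

Under these assumptions, the undermodeling penalty is negligible at 0 dB SNR but comparable to (or larger than) EMSE_1,min(∞) at 30 dB, matching conditions 1) and 2) above.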
With this aim, we have simulated a stationary plant identification scenario, similar to an acoustic echo cancellation application, where the unknown RIR to identify is depicted in Fig. 3. As can be seen, this plant is composed of coefficients whose energy decreases exponentially. In order to identify this RIR, we design an adaptive convex combination of two NLMS filters to alleviate the compromise related to the selection of the step size. The first component is adapted with µ_1 = 0.5, whereas the slower component employs µ_2 = µ_1/10 = 0.05. The length of the second component matches that of the unknown plant, i.e., M_2 = M, whereas that of the fast component is considered as a parameter, taking several different values of M_1 (in taps). Adaptation of the mixing parameter λ(n) is carried out employing a step size µ_a = 1 and β = 0.9. The input signal u(n) is white Gaussian noise with σ_u^2 = 1. The background noise at the microphone position, e_0(n), is also white noise, uncorrelated with u(n), with its variance chosen so as to obtain SNRs in the range 0-30 dB. The figure of merit employed is the EMSE(n), whose expected value is estimated by averaging over 5 independent realizations of the experiment. In order to test the reconvergence ability of the schemes, an abrupt change of the RIR has been simulated in the middle of each experiment, consisting of a shift by five taps and an inversion of the sign of the RIR.

Fig. 3. Room impulse response employed in the simulations.

Fig. 4. Steady-state error of the fast component filter as a function of M_1 and for different SNRs.

Fig. 4 shows the steady-state EMSE of the fast component, calculated by averaging a thousand iterations after convergence, as a function of the SNR. As can be seen, for high SNRs, the larger the length of w_1(n), the lower the steady-state error.
In this case, the employment of shorter AFs gives rise to an undermodeling that considerably increases the steady-state error. However, for lower SNRs, EMSE_1,um(∞) ≪ EMSE_1,min(∞) as a consequence of a higher σ_0^2 [see Eq. (5)], so that a reduced filter length causes only a very slight and acceptable degradation of the steady-state performance. See, for instance, the case of SNR = 10 dB, where the performance of a shortened filter is very similar to that of the complete AF and where, even for a filter with half the taps, the steady-state error only increases by two decibels.
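This low-SNR behavior can be reproduced in miniature with a self-contained sketch of the whole scheme, Eqs. (1)-(3). All settings here are toy assumptions chosen for a quick run, not the paper's experiment: a length-64 exponentially decaying plant, unit-variance white input, SNR = 0 dB, a fast filter of half the plant length, and the auxiliary parameter a(n) clipped to [-4, 4] (a practical safeguard common in the combination literature):

```python
import numpy as np

rng = np.random.default_rng(7)
M = 64                                    # toy plant length (not from the paper)
h = np.exp(-np.arange(M) / 8.0)
h /= np.linalg.norm(h)                    # decaying "RIR" with ||h||^2 = 1
sigma0 = 1.0                              # background-noise std -> SNR = 0 dB
mu1, mu2, delta = 0.5, 0.05, 1e-6         # mu2 = mu1/10, as in the paper
M1 = M // 2                               # shortened (undermodeled) fast filter

w1, w2 = np.zeros(M1), np.zeros(M)
a, p, mu_a, beta = 0.0, 1.0, 1.0, 0.9     # combination state for Eq. (3)
buf = np.zeros(M)
N, n_avg, acc = 30000, 5000, 0.0

for n in range(N):
    buf = np.concatenate(([rng.standard_normal()], buf[:-1]))
    d = h @ buf + sigma0 * rng.standard_normal()       # desired signal d(n)
    u1 = buf[:M1]                                      # first M1 samples of u(n)
    e1, e2 = d - w1 @ u1, d - w2 @ buf
    w1 += (mu1 / (delta + u1 @ u1)) * e1 * u1          # Eq. (1), fast filter
    w2 += (mu2 / (delta + buf @ buf)) * e2 * buf       # Eq. (1), slow filter
    lam = 1.0 / (1.0 + np.exp(-a))                     # lambda(n) = sgm{a(n)}
    ec = lam * e1 + (1.0 - lam) * e2                   # combined error e_c(n)
    p = beta * p + (1.0 - beta) * (e2 - e1) ** 2       # power normalization
    a += (mu_a / (p + 1e-12)) * ec * (e2 - e1) * lam * (1.0 - lam)  # Eq. (3)
    a = np.clip(a, -4.0, 4.0)                          # keep a(n) bounded
    if n >= N - n_avg:                                 # steady-state EMSE estimate
        yc = lam * (w1 @ u1) + (1.0 - lam) * (w2 @ buf)
        acc += (h @ buf - yc) ** 2

emse_comb = acc / n_avg
```

In this setting the mixing parameter settles near 0 (favoring the slow filter), and the measured steady-state EMSE of the combination stays close to the slow filter's value from Eq. (4), roughly µ_2 σ_0^2/(2 - µ_2), even though the fast filter models only half of h.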

Fig. 5. Reconvergence of the fast filter as a function of SNR and M_1 around the abrupt change in the RIR. Panels (a)-(f) correspond to SNR = 0, 5, 10, 15, 20, and 25 dB.

The same conclusion about the steady-state performance of w_1(n) can be drawn from Fig. 5, where the evolution of EMSE_1(n) around the abrupt change is represented as a function of the SNR. These panels also show another positive characteristic associated with the reduction of the length of w_1(n): the speed of convergence of this component increases as the length of the AF decreases. It should be noted that the objective of this component is to provide fast initial convergence to the combination scheme.

Fig. 6 represents the performance of the combination scheme (upper panel) and the evolution of the mixing parameter (lower panel) when M_1 = M_2 = M and SNR = 10 dB. As can be seen, the combination converges as fast as the fast component and reaches a steady-state error similar to that of the slow component. In this case, the computational cost of the combination is twice that of each AF. Fig. 7 shows the performance of the proposed scheme in this scenario for different values of M_1. Since M_2 does not change, the combination converges to the same steady-state error. However, focusing on the case of the shortest fast filter, the error in the transition stage (where the combination changes from the behavior of the first component to that of the second) is slightly increased, but the initial convergence of the combination is even faster (Fig. 8). For this case, the cost of the combination is increased by just 25% with respect to a single filter, without requiring any additional mechanism to select the taps to be updated.
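Counting only the O(M) filtering-and-update work per component (the mixing-parameter update is negligible, cf. footnote 1), the overhead of the combination relative to a single full-length filter is simply (M_1 + M_2)/M_2. A hypothetical one-line helper makes the arithmetic explicit:

```python
def combination_cost_ratio(m1, m2):
    """Cost of running both component filters relative to a single
    length-m2 filter, counting only the O(M) per-tap operations."""
    return (m1 + m2) / m2
```

For example, `combination_cost_ratio(m, m)` gives 2.0, the "twice the cost" of the standard combination, while a fast filter of a quarter of the plant length gives the 25% overhead mentioned in the text (an inference from the stated figure; the exact M_1 value is not legible in this copy).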
Fig. 6. Performance of the combination of two NLMS adaptive filters (µ_1 = 0.5, µ_2 = 0.05) and evolution of the mixing parameter.

Fig. 7. Performance of the combination of two NLMS adaptive filters as a function of M_1.

Fig. 8. Similar to Fig. 7; zoom around the abrupt change.

IV. CONCLUSION AND DISCUSSION

Adaptive combination of filters is a simple but effective method to circumvent the compromises inherent to AFs. However, if a combination of two filters is used to alleviate the trade-off regarding step-size selection, its computational cost is approximately twice that of each component. In this paper, we have shown that, under certain conditions, it is possible to reduce the computational cost of the combination simply by reducing the length of the fast component, thus combining two NLMS filters with different lengths and step sizes. If the filtering scenario presents a low SNR and the energy of the unknown plant decreases exponentially, reducing the length of the fast component only slightly degrades the performance of the combination in the transition phase, due to undermodeling. Meanwhile, the computational cost is reduced without any additional operation to choose the taps to update, and the initial speed of convergence of the combination is increased, enhancing the performance of the scheme in nonstationary environments.

Although the effective performance of the proposed scheme depends on two important characteristics, namely the shape of h and the value of the SNR, there exist different real-world applications where this proposal could be successfully applied; for instance, acoustic echo cancellation in hands-free systems installed in cars. In this environment, low SNRs are usual, with even SNR < 0 dB being common. In addition, the energy of the impulse response from the loudspeaker to the microphone inside a car (an acoustic impulse response) decreases exponentially. The reverberation time in a car is small, so that most of the energy is concentrated at the beginning of h (because of the first reflections of the wave against highly reflective surfaces such as glass), with a very short initial gap with no energy (as a consequence of the small distance between microphone and loudspeaker).
The specific characteristics of this environment indicate that our proposal would obtain a suitable performance, since the undermodeling carried out by the fast component would have no negative consequences on the performance of the combination, but would speed up the convergence and reduce the computational cost. Following this idea, future work will focus on the evaluation of our scheme for this scenario with real signals. To this end, we will start by measuring the impulse responses of several cars, in order to propose specific reductions of the length of the fast component depending on the inner space of each vehicle. In addition, we will implement the combination scheme in finite precision, to facilitate the inclusion of these schemes in real devices.

ACKNOWLEDGMENT

The work of Azpicueta-Ruiz and Arenas-García was partly supported by MINECO projects TEC2011-2248 and PRI-PIBIN-2011-1266. The work of Azpicueta-Ruiz was also partly supported by FAPESP under Grant 2013/1841-5. The work of Nascimento was partly supported by CNPq and FAPESP, Grants 32423/2011-7 and 2011/6994-2. The work of Silva was partly supported by CNPq under Grant 32423/2011-7 and FAPESP under Grant 2012/24835-1.

REFERENCES

[1] A. H. Sayed, Adaptive Filters. Hoboken, NJ: John Wiley & Sons, 2008.
[2] V. H. Nascimento and M. T. M. Silva, "Adaptive filters," in Academic Press Library in Signal Processing, Rama Chellappa and Sergios Theodoridis, Eds., vol. 1, Signal Processing Theory and Machine Learning. Chennai: Academic Press, 2014, pp. 619-761.
[3] J. Arenas-García, A. R. Figueiras-Vidal, and A. H. Sayed, "Mean-square performance of a convex combination of two adaptive filters," IEEE Trans. Signal Process., vol. 54, pp. 1078-1090, Mar. 2006.
[4] L. A. Azpicueta-Ruiz, M. Zeller, A. R. Figueiras-Vidal, J. Arenas-García, and W. Kellermann, "Adaptive combination of Volterra kernels and its application to nonlinear acoustic echo cancellation," IEEE Trans. Audio, Speech, Language Process., vol. 19, pp. 97-110, Jan. 2011.
[5] D. Comminiello, M. Scarpiniti, R. Parisi, and A. Uncini, "Combined adaptive beamforming schemes for nonstationary interfering noise reduction," Signal Process., vol. 93, pp. 3306-3318, 2013.
[6] M. T. M. Silva and J. Arenas-García, "A soft-switching blind equalization scheme via convex combination of adaptive filters," IEEE Trans. Signal Process., vol. 61, pp. 1171-1182, 2013.
[7] C. G. Lopes and A. H. Sayed, "Diffusion least-mean squares over adaptive networks: formulation and performance analysis," IEEE Trans. Signal Process., vol. 56, pp. 3122-3136, Jul. 2008.
[8] A. Gonzalo-Ayuso, M. T. M. Silva, V. H. Nascimento, and J. Arenas-García, "Improving sparse echo cancellation via convex combination of two NLMS filters with different lengths," in Proc. IEEE Int. Workshop on Machine Learning for Signal Processing (MLSP), Sept. 2012, pp. 1-6.
[9] L. A. Azpicueta-Ruiz, A. R. Figueiras-Vidal, and J. Arenas-García, "On the application of adaptive combination of adaptive filters to acoustic echo cancellation," in Proc. Internoise, New York, NY, USA, 2012, pp. 1-5.
[10] V. H. Nascimento, M. T. M. Silva, and J. Arenas-García, "A low-cost implementation strategy for combinations of adaptive filters," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Vancouver, Canada, 2013, pp. 5671-5675.
[11] L. A. Azpicueta-Ruiz, A. R. Figueiras-Vidal, and J. Arenas-García, "A normalized adaptation scheme for the convex combination of two adaptive filters," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Las Vegas, NV, 2008, pp. 3301-3304.
[12] K. Mayyas, "Performance analysis of the selective coefficient update NLMS algorithm in an undermodeling situation," Digital Signal Process., vol. 23, pp. 1967-1973, 2013.