HARMONIC EXTENSION OF AN ADAPTIVE NOTCH FILTER FOR FREQUENCY TRACKING


16th European Signal Processing Conference (EUSIPCO 2008), Lausanne, Switzerland, August 25-29, 2008, copyright by EURASIP

Yann Prudat, Jérôme Van Zaen, and Jean-Marc Vesin
Signal Processing Institute, Swiss Federal Institute of Technology (EPFL), 1015 Lausanne, Switzerland
email: yann.prudat@epfl.ch

ABSTRACT

Adaptive frequency tracking is useful in a broad range of applications, and many schemes have been introduced in recent years for that purpose. Starting from the observation that, in some situations, a harmonic component is also present in the signal of interest, we propose in this paper to extend frequency tracking to harmonic frequencies in order to improve convergence properties. We apply this idea to a specific adaptive frequency tracking algorithm and obtain, through theoretical analysis and computer simulations, the performance gain of this new scheme.

1. INTRODUCTION

Adaptive frequency tracking of noisy sinusoidal components with time-varying amplitudes and frequencies [1, 2, 3] is a useful tool for many applications such as communications, speech processing or biomedical engineering. In many cases, the signal contains a sinusoidal component at a fundamental frequency and also some harmonic content. Usually, this harmonic content is not exploited by frequency tracking algorithms; it is treated as a disturbance that can degrade their performance. However, the harmonic component obviously carries information related to the fundamental frequency of the signal. If this information is exploited, it can improve the tracking performance in terms of convergence speed and frequency estimation variance.

In this paper, we present an algorithm using this additional information for the frequency tracking of the sinusoidal component of a signal. This algorithm is an extension of the algorithm proposed by Liao [1]. It is based on the combination of the discrete oscillator model and a line-enhancer filter using a mean-square error (MSE) cost function. We propose to add to the basic algorithm a constraint linking the fundamental frequency to its harmonics. Nehorai and Porat [5] proposed a comb filter algorithm for harmonic enhancement and estimation. However, the rigid constraint used in that work requires the initial conditions to be close to the steady-state values, because a frequency mismatch at convergence is otherwise possible. The flexibility of the constraint in our algorithm makes it able to converge from almost all initial values. Moreover, the use of a soft constraint is advantageous during transitions or non-stationary periods, when the harmonic relation between the fundamental and the second harmonic may not be perfect.

2. OSCILLATOR BASED ADAPTIVE NOTCH FILTER

The algorithm extended in this paper is called the OSC-MSE ANF (oscillator based mean square error adaptive notch filter) algorithm [1]. Its structure is shown in Figure 1.

Figure 1: Structure of the OSC-MSE ANF algorithm.

The input signal, u(n), can be represented as

u(n) = d(n) + w(n),   (1)

where d(n) is a sinusoid at pulsation ω_0 and w(n) is an additive, zero-mean, i.i.d. noise. The sinusoidal component of u(n) satisfies the oscillator equation

d(n) = 2cos(ω_0) d(n−1) − d(n−2) = 2α_0 d(n−1) − d(n−2).   (2)

The adaptive coefficient α(n), which tracks α_0 = cos(ω_0), determines the central frequency of the bandpass filter (BPF). Its transfer function is

H_BP(z;n) = (1−β) z⁻¹ / (1 − α(n)(1+β) z⁻¹ + β z⁻²),   (3)

where β (0 < β < 1) controls the bandwidth.
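As a quick numerical illustration of the signal model (1) and the oscillator identity (2), the following sketch (ours, not from the paper; the pulsation, noise level and all variable names are arbitrary choices) generates a noisy sinusoid and verifies that its noise-free part satisfies the second-order recursion.

```python
import numpy as np

# Sketch (not from the paper): signal model (1) and oscillator identity (2).
rng = np.random.default_rng(0)
n = np.arange(2000)
omega0 = 0.4 * np.pi                        # true pulsation (arbitrary choice)
d = np.cos(omega0 * n)                      # noise-free sinusoid d(n)
u = d + 0.1 * rng.standard_normal(n.size)   # u(n) = d(n) + w(n)

# d(n) = 2*cos(omega0)*d(n-1) - d(n-2) holds up to rounding error
residual = d[2:] - (2 * np.cos(omega0) * d[1:-1] - d[:-2])
print(np.max(np.abs(residual)))             # on the order of 1e-15
```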
The reference signal x(n), on which the adaptive mechanism is based, is defined as the recursive part of the BPF:

x(n) = α(n)(1+β) x(n−1) − β x(n−2) + u(n).   (4)

In short, the adaptive algorithm is driven by a line-enhanced version of the input signal. In the OSC-MSE ANF, the goal is to determine the value of α(n+1) which best satisfies the discrete oscillator model. This is done by minimizing the following cost function:

J = E{ [x(n) − 2α(n+1) x(n−1) + x(n−2)]² }.   (5)

By setting ∂J/∂α(n+1) = 0, the optimal solution for this MSE criterion is

α(n+1) = E{ x(n−1)[x(n) + x(n−2)] } / ( 2 E{ x²(n−1) } ).   (6)
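To make (4)-(6) concrete, here is a small sketch (our own; it uses a fixed filter coefficient and replaces the expectations by sample means, and every name in it is ours) that builds the reference signal and evaluates the batch version of (6).

```python
import numpy as np

def reference_signal(u, alpha, beta=0.9):
    """All-pole recursion (4) with a fixed coefficient alpha."""
    x = np.zeros_like(u)
    for k in range(2, len(u)):
        x[k] = alpha * (1 + beta) * x[k - 1] - beta * x[k - 2] + u[k]
    return x

def batch_alpha(x):
    """Eq. (6) with expectations replaced by sample means."""
    num = np.mean(x[1:-1] * (x[2:] + x[:-2]))
    den = 2.0 * np.mean(x[1:-1] ** 2)
    return num / den

# Example (with u and omega0 from the previous sketch):
# x = reference_signal(u, alpha=np.cos(omega0))
# omega_hat = np.arccos(np.clip(batch_alpha(x), -1.0, 1.0))
```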

However, this expression for α(n+1) is not computable in real time. The numerator and the denominator are therefore replaced by exponentially weighted time averages, and the coefficient-updating algorithm becomes

α(n+1) = Q_x(n) / (2 P_x(n)),   (7)
Q_x(n) = δ Q_x(n−1) + (1−δ) x(n−1)[x(n) + x(n−2)],
P_x(n) = γ P_x(n−1) + (1−γ) x²(n−1).

The parameters δ and γ are forgetting factors (0 < δ, γ < 1). They control the convergence rate of this adaptive algorithm.

3. EXTENSION TO HARMONIC FREQUENCIES

Suppose now that the input signal is defined as

u(n) = d_1(n) + d_2(n) + w(n),   (8)

where d_1(n) and d_2(n) are two sinusoids at pulsations ω_1 and ω_2 with arbitrary initial phases, and w(n) is an additive, zero-mean, i.i.d. noise. The frequencies of the sinusoidal components are harmonically related, so they satisfy ω_2 = 2ω_1. Moreover, the discrete oscillator model (2) is satisfied by both d_1(n) and d_2(n):

d_i(n) = 2cos(ω_i) d_i(n−1) − d_i(n−2),   i = 1, 2.   (9)

The input signal is filtered by two different BPFs (one for each sinusoid) to obtain two reference signals x_1(n) and x_2(n). Each BPF has a notch at the central frequency of the other BPF in order to avoid the disturbance caused by the other sinusoid, an idea introduced in [4] in a slightly different context. The central frequencies and the notch positions of the BPFs are determined by the two adaptive coefficients α_1(n) and α_2(n). The transfer functions of the filters are

H_i(z;n) = (1 − 2α_ī(n) z⁻¹ + z⁻²) / (1 − α_i(n)(1+β) z⁻¹ + β z⁻²),   i = 1, 2,

where the index ī is used to indicate 2 for i = 1 and 1 for i = 2 (same notation throughout the text). The reference signals are thus

x_i(n) = α_i(n)(1+β) x_i(n−1) − β x_i(n−2) + u(n) − 2α_ī(n) u(n−1) + u(n−2),   i = 1, 2.

The relation between the harmonic frequencies becomes α_2 = 2α_1² − 1, where α_1 = cos(ω_1) and α_2 = cos(ω_2). This relationship is used to impose the following constraint on the adaptive coefficients:

[2α_1²(n) − 1 − α_2(n)]² = 0.   (10)

The extended algorithm minimizes the following cost function:

J = E{ [x_1(n) − 2α_1(n) x_1(n−1) + x_1(n−2)]² }
  + E{ [x_2(n) − 2α_2(n) x_2(n−1) + x_2(n−2)]² }
  + λ [2α_1²(n) − 1 − α_2(n)]²,   (11)

where λ (λ ≥ 0) determines the influence of the constraint. This constraint, which does not strictly impose the equality in (10), gives more flexibility, for instance in the initialization phase (it is possible to initialize the algorithm with ω_1(0) = 0 and ω_2(0) = π). The adaptive algorithm is extended as

α_i(n+1) = Q_xi(n) / (2 P_xi(n)) − λ C_i(n),   i = 1, 2,   (12)
Q_xi(n) = δ Q_xi(n−1) + (1−δ) x_i(n−1)[x_i(n) + x_i(n−2)],   i = 1, 2,
P_xi(n) = γ P_xi(n−1) + (1−γ) x_i²(n−1),   i = 1, 2,
C_1(n) = 4α_1(n) [2α_1²(n) − 1 − α_2(n)],
C_2(n) = −[2α_1²(n) − 1 − α_2(n)].

The parameters δ and γ are the forgetting factors; C_1(n) and C_2(n) are obtained by differentiation of the constraint.

4. PERFORMANCE ANALYSIS

The following vector notation is introduced:

α(n) = (α_1(n), α_2(n))ᵀ,   α* = (α_1, α_2)ᵀ.

The coefficient-updating algorithm (12) can be rewritten, by letting γ = δ, in the recursive form α(n+1) = F(α(n)), with components

F_i(α(n)) = [P_xi(n−1)/P_xi(n)] δ α_i(n) + [(1−δ)/(2 P_xi(n))] χ_i(n) − λ C_i(n),   i = 1, 2,
χ_i(n) = x_i(n−1)[x_i(n) + x_i(n−2)],   i = 1, 2.

The cost function (11) has a global minimum. Indeed, if α(n) = α*, it is equal to zero, since both the oscillator model and the constraint are then satisfied. Assuming that the algorithm is near convergence, a Taylor expansion of F(α(n)) around α* can be used, and the coefficient update is approximated as

α(n+1) ≈ F(α*) + J(α*)[α(n) − α*],   (13)

where J(α(n)) is the Jacobian matrix of F(α(n)).
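The following compact sketch implements the extension as reconstructed in (10)-(12). It is our own reading, not the authors' code: the initial values of P, the default λ, the clipping of α to [−1, 1] and all names are our assumptions. The initialization α_1(0) = cos(0), α_2(0) = cos(π) follows the remark above about (10).

```python
import numpy as np

def harmonic_osc_mse_anf(u, beta=0.9, delta=0.95, lam=0.2,
                         alpha1_init=1.0, alpha2_init=-1.0):
    """Sketch of the harmonic extension (10)-(12); implementation details are ours."""
    N = len(u)
    a1, a2 = alpha1_init, alpha2_init        # alpha_1(0) = cos(0), alpha_2(0) = cos(pi)
    x1 = np.zeros(N); x2 = np.zeros(N)
    Q1 = Q2 = 0.0
    P1 = P2 = 1.0                            # positive init to avoid division by zero (our choice)
    om1 = np.zeros(N); om2 = np.zeros(N)     # frequency estimates
    for n in range(2, N):
        # Reference signals: BPF at its own frequency with a notch at the other one
        x1[n] = a1*(1+beta)*x1[n-1] - beta*x1[n-2] + u[n] - 2*a2*u[n-1] + u[n-2]
        x2[n] = a2*(1+beta)*x2[n-1] - beta*x2[n-2] + u[n] - 2*a1*u[n-1] + u[n-2]
        # Exponentially weighted statistics, eq. (12) with gamma = delta
        Q1 = delta*Q1 + (1-delta)*x1[n-1]*(x1[n] + x1[n-2])
        P1 = delta*P1 + (1-delta)*x1[n-1]**2
        Q2 = delta*Q2 + (1-delta)*x2[n-1]*(x2[n] + x2[n-2])
        P2 = delta*P2 + (1-delta)*x2[n-1]**2
        # Constraint derivatives C_1, C_2
        g = 2*a1**2 - 1 - a2
        C1 = 4*a1*g
        C2 = -g
        # Coefficient updates (12), clipped to the valid cosine range
        a1 = np.clip(Q1/(2*P1) - lam*C1, -1.0, 1.0)
        a2 = np.clip(Q2/(2*P2) - lam*C2, -1.0, 1.0)
        om1[n] = np.arccos(a1)
        om2[n] = np.arccos(a2)
    return om1, om2
```

For instance, feeding it a signal containing a sinusoid at 0.4π plus its second harmonic in white noise should return estimates converging towards 0.4π and 0.8π.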
The values of J(α*) and F(α*) are

J(α*) = [ (P_x1(n−1)/P_x1(n)) δ − 16λα_1²        4λα_1
          4λα_1        (P_x2(n−1)/P_x2(n)) δ − λ ],

F(α*) = [ α_1 + ((1−δ)/(2P_x1(n))) x_1(n−1)[x_1(n) − 2α_1 x_1(n−1) + x_1(n−2)]
          α_2 + ((1−δ)/(2P_x2(n))) x_2(n−1)[x_2(n) − 2α_2 x_2(n−1) + x_2(n−2)] ].

4.1 Bias Analysis

Using the facts that the correlation between x_i(n) − 2α_i x_i(n−1) + x_i(n−2) and x_i(n−1), i = 1, 2, is negligible and that E{x_i(n) − 2α_i x_i(n−1) + x_i(n−2)} = 0, i = 1, 2, together with the assumption that P_xi(n) ≈ R_xi(0), i = 1, 2, for n large, as was done in [1], (13) becomes

E{α(n+1) − α*} ≈ Ĵ E{α(n) − α*},   (14)

with

Ĵ = [ δ − 16λα_1²    4λα_1
      4λα_1          δ − λ ].

The eigenvalues of Ĵ are μ_1 = δ and μ_2 = δ − λ[1 + 16α_1²]. The algorithm converges if |μ_1| and |μ_2| are smaller than one. This convergence condition yields the following constraint on the parameters:

λ[1 + 16α_1²] < 1 + δ.   (15)

Thus, if this constraint is satisfied, the coefficient-updating algorithm converges to α*. Hence it is unbiased, like the original algorithm.

4.2 Variance Analysis

In order to perform the variance analysis of the extended algorithm, (13) is rewritten as

α_d(n+1) ≈ J(α*) α_d(n) + Γ(n),   (16)

where α_d(n) = α(n) − α* and Γ(n) = F(α*) − α*. Assuming that P_xi(n) ≈ R_xi(0), i = 1, 2, for n large and applying the change of basis α_d(n) = V χ(n), (16) becomes

χ(n+1) ≈ Vᵀ Ĵ V χ(n) + Vᵀ Γ(n) = M χ(n) + Vᵀ Γ(n),   (17)

where M is the diagonal matrix composed of the eigenvalues μ_1 and μ_2, and V is the matrix of the corresponding eigenvectors. By using the same assumptions as for the bias analysis, the assumption that χ_i(n) and x_j(m) are uncorrelated for all n, m and i, j = 1, 2, and the same approximations as in [1], closed-form expressions (18) and (19) are obtained for the variances R_χ1(0) and R_χ2(0) of χ_1(n) and χ_2(n). Both are proportional to (1−δ)² Δω_neq, with denominators 1 − μ_1² = 1 − δ² and 1 − μ_2² = 1 − [δ − λ(1 + 16α_1²)]² respectively, and both depend on the sinusoid powers A_i²/2 and on the in-band noise power Δω_neq σ², where A_i is the amplitude of the sinusoid d_i(n), σ² is the noise variance, and Δω_neq = π(1−β)/(1+β) is the noise equivalent bandwidth of the BPFs. This bandwidth is computed with respect to (3), under the hypothesis that the notches added in the extension have almost no influence on Δω_neq.

Figure 2: Estimation variance of the extended algorithm around the bound (15), for an SNR of 5 dB (β = 0.95, δ = 0.99, ω_1 = 0.4π, ω_2 = 0.8π, A_2 = A_1/2).

The variances of α_d1(n) and α_d2(n) are obtained as linear combinations of R_χ1(0) and R_χ2(0):

R_αdi(0) = [R_χi(0) + 16α_1² R_χī(0)] / (1 + 16α_1²),   i = 1, 2.   (20)

Using the same approximation as in [1], the variances of the frequency estimates are

R_ωdi(0) = R_αdi(0) / sin²(ω_i),   i = 1, 2,   (21)

where ω_di(n) = ω_i(n) − ω_i, i = 1, 2.

5. SIMULATION RESULTS

In the simulations, the steady-state performance of the OSC-MSE ANF algorithm (7) and of its extension to harmonic frequencies is evaluated. The input signal is a single sinusoid embedded in white noise for the OSC-MSE ANF, and a sum of two sinusoids at the fundamental and second harmonic frequencies with additive white noise for the extension. For both algorithms, the noise variance and the amplitude of the fundamental-frequency sinusoid are the same. For the extended algorithm, the amplitude of the harmonic component is set to A_2 = A_1/2, where A_1 is the amplitude of the fundamental component. The frequencies of the fundamental and harmonic components are 0.4π and 0.8π, respectively. Estimation variance and bias were computed on the frequency estimates after convergence and averaged over independent runs. For each simulation, the forgetting factors δ and γ of algorithms (7) and (12) were equal.

The first point to verify is the condition for convergence (15). The value of λ[1 + 16α_1²] is increased from 1.78 to beyond the bound 1 + δ by varying λ (λ varies from 0.7 to 0.85).
The algorithm behavior around the bound 1 + δ, for δ = 0.99, can thus be investigated. The SNR is equal to 5 dB. The estimation variance is shown in Figure 2. The simulations confirm that the stability condition (15) is correct.
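A quick numerical check of the bound (15) and of the eigenvalues of Ĵ is straightforward; the sketch below is our own verification (parameter values chosen to match the Figure 2 setup), not part of the paper.

```python
import numpy as np

# Check (ours): lambda_max from (15) and the eigenvalues of J-hat from the bias analysis.
omega1 = 0.4 * np.pi
alpha1 = np.cos(omega1)
delta = 0.99
lam_max = (1 + delta) / (1 + 16 * alpha1**2)
print("lambda_max:", lam_max)               # about 0.79 for these parameters

lam = 0.5                                   # any value below lam_max
J_hat = np.array([[delta - 16*lam*alpha1**2, 4*lam*alpha1],
                  [4*lam*alpha1,             delta - lam]])
print(sorted(np.linalg.eigvals(J_hat)))     # delta - lam*(1 + 16*alpha1**2) and delta
```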

Figure 3: Evolution of the estimation variance of the harmonic extension for several λ values, for an SNR of 5 dB (β = 0.9, δ = 0.95, ω_1 = 0.4π, A_2 = A_1/2).

Figure 4: Frequency estimation bias comparison between the OSC-MSE ANF and its harmonic extension for different SNR values (β = 0.9, δ = 0.95, ω_1 = 0.4π, ω_2 = 0.8π, A_2 = A_1/2).

We then have to choose λ between zero (no constraint) and the bound given by (15), i.e. λ_max = 0.78. To do so, we studied the influence of λ on the estimation variance of the fundamental frequency for the extended algorithm. The results obtained for λ values ensuring stability are shown in Figure 3 (β = 0.9, δ = 0.95, SNR = 5 dB). The variance increases near both ends of the admissible range. For a small λ, the tracking of the fundamental and of the harmonic frequency become almost independent, and the variance increases. For λ close to the bound obtained in (15), the algorithm is nearly unstable, so the estimation variance also increases, as shown in Figure 2. For intermediate values, the estimation variance is almost constant. For all the following simulations, λ is set to an intermediate value in this flat region.

The estimation bias of the extended algorithm for the fundamental and harmonic frequencies is displayed in Figure 4, together with the bias of the OSC-MSE ANF for the frequencies 0.4π and 0.8π. The bias for the fundamental frequency is almost the same for both algorithms, but for the harmonic frequency it is much smaller for the extended algorithm. This can be explained by the constraint linking the estimated frequencies of the fundamental and harmonic components: the constraint tends to force a harmonic relationship between the estimates and, since the biases of the OSC-MSE ANF at 0.4π and 0.8π have opposite signs, they tend to compensate.

Figure 5: Frequency estimation variance comparison between the OSC-MSE ANF and its harmonic extension for different SNR values (β = 0.9, δ = 0.95, ω_1 = 0.4π, ω_2 = 0.8π, A_2 = A_1/2).

Figure 5 shows that the estimation variance of the extension is almost two times smaller than that of the OSC-MSE ANF for the fundamental frequency, but greater for the harmonic frequency. Nevertheless, it is the fundamental frequency that really matters. Thus, the information present in the harmonic component and exploited by the extended algorithm yields a more robust frequency estimate.

Another important aspect to evaluate is the rate of convergence. A frequency shift from 0.3π and 0.6π to 0.4π and 0.8π is applied to the fundamental and harmonic components, respectively, after both algorithms have converged. As before, the SNR is computed with respect to the fundamental sinusoid and is equal to 5 dB. The parameter δ is different for each algorithm: its value is chosen so that both algorithms have approximately the same estimation variance for the fundamental frequency estimate. Thus δ is set to 0.95 for the OSC-MSE ANF and to 0.9 for the extension. The frequency estimates, averaged over independent runs, with the frequency shift applied after 500 samples, are shown in Figure 6. One can observe that the extended algorithm converges faster than the original one.

Figure 6: Convergence comparison between the OSC-MSE ANF and its harmonic extension for an SNR of 5 dB (β = 0.95, δ = 0.95 for the OSC-MSE ANF, δ = 0.9 for the extension, A_2 = A_1/2).
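The test signal for this convergence experiment is easy to reproduce; the helper below is our own sketch of the setup described above (record length, amplitudes, seed and names are our choices), and it keeps the harmonic relation exact by driving both components from the same instantaneous phase.

```python
import numpy as np

# Our own helper: fundamental + second harmonic with a frequency step at sample n_step.
def stepped_harmonic_signal(n_samples=1000, n_step=500, snr_db=5.0, seed=0):
    rng = np.random.default_rng(seed)
    n = np.arange(n_samples)
    w1 = np.where(n < n_step, 0.3 * np.pi, 0.4 * np.pi)    # fundamental pulsation
    phase1 = np.cumsum(w1)                                  # continuous phase
    A1, A2 = 1.0, 0.5                                       # A2 = A1 / 2
    d = A1 * np.cos(phase1) + A2 * np.cos(2 * phase1)       # harmonic at twice the phase
    sigma2 = (A1**2 / 2) / 10**(snr_db / 10)                # SNR w.r.t. the fundamental
    return d + np.sqrt(sigma2) * rng.standard_normal(n_samples)

# u = stepped_harmonic_signal()
# om1, om2 = harmonic_osc_mse_anf(u)   # tracker sketched in Section 3
```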
6. EXTENSION TO THE THIRD HARMONIC FREQUENCY

Sometimes it is not the second harmonic frequency that is present, but the third one. The relation between the frequencies is then ω_2 = 3ω_1, and the constraint on the adaptive coefficients becomes

[4α_1³(n) − 3α_1(n) − α_2(n)]² = 0.   (22)

An algorithm similar to (12) can be derived using the same arguments, and the same performance analysis as before can be carried out, again using a Taylor expansion. Only the stability condition is presented here. This condition on the parameters is

λ[1 + (12α_1² − 3)²] < 1 + δ.   (23)

Thus the extended algorithm can be generalized to any harmonic frequency (with increasing complexity for higher harmonics).
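For orientation, the admissible range of λ shrinks when the constraint gradient is steeper. The short sketch below (our own comparison, based on (15) and on (23) as reconstructed above) evaluates both bounds for the same fundamental frequency.

```python
import numpy as np

# Our own comparison of the stability bounds for the second-harmonic constraint (15)
# and the third-harmonic constraint (23).
def lambda_max_second(omega1, delta):
    a1 = np.cos(omega1)
    return (1 + delta) / (1 + 16 * a1**2)

def lambda_max_third(omega1, delta):
    a1 = np.cos(omega1)
    return (1 + delta) / (1 + (12 * a1**2 - 3)**2)

print(lambda_max_second(0.4 * np.pi, 0.95))   # about 0.77
print(lambda_max_third(0.4 * np.pi, 0.95))    # about 0.44
```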

7. CONCLUSION

In this paper, we have extended an existing frequency tracking algorithm to take advantage of the information contained in a harmonic component. The resulting estimate of the fundamental frequency is less sensitive to noise, with an almost twofold reduction of the estimation variance. The estimation bias of the fundamental frequency is almost the same as for the basic algorithm, and the constraint linking the fundamental and harmonic frequency estimates can even reduce the estimation bias. In both cases, the small bias confirms the theoretical analysis predicting unbiased estimates. Thus, in the presence of a harmonic component in the signal of interest, the extended algorithm performs better than the OSC-MSE ANF algorithm. The flexibility given by the soft constraint is an advantage when the initialization is arbitrary. The algorithm was presented for the second or third harmonic and can easily be extended to other harmonic components. The proposed approach can also be generalized to other frequency tracking algorithms.

REFERENCES

[1] H.-E. Liao, "Two discrete oscillator based adaptive notch filters (OSC ANFs) for noisy sinusoids," IEEE Trans. Signal Process., vol. 53, pp. 528-538, Feb. 2005.
[2] P. Tichavský and A. Nehorai, "Comparative study of four adaptive frequency trackers," IEEE Trans. Signal Process., vol. 45, pp. 1473-1484, June 1997.
[3] M. Niedźwiecki and P. Kaczmarek, "Generalized adaptive notch and comb filters for identification of quasi-periodically varying systems," IEEE Trans. Signal Process., vol. 53, pp. 4599-4609, Dec. 2005.
[4] A. Rao and R. Kumaresan, "Dynamic tracking filters for decomposing nonstationary sinusoidal signals," in Proc. 1995 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, May 1995.
[5] A. Nehorai and B. Porat, "Adaptive comb filtering for harmonic signal enhancement," IEEE Trans. Acoust., Speech, Signal Process., vol. 34, pp. 1124-1138, Oct. 1986.