
TECHNICAL REPORT IR-S3-SB-9

Properties of the IEEE-STD-1057 Four-Parameter Sine Wave Fit Algorithm

Peter Händel, Senior Member, IEEE

Abstract—The IEEE Standard 1057 (IEEE-STD-1057) provides algorithms for fitting the parameters of a sine wave to noisy discrete-time observations. The fit is obtained as an approximate minimizer of the sum of squared errors, that is, of the differences between the observations and the model output. The contributions of this paper include a comparison of the performance of the four-parameter algorithm in the standard with the Cramér-Rao lower bound on accuracy, and with the performance of a nonlinear least-squares approach. It is shown that the algorithm of IEEE-STD-1057 provides accurate estimates for Gaussian and quantization noise. In the Gaussian scenario it provides estimates with performance close to the derived lower bound. In severe conditions, however, with noisy data covering only a fraction of a period, it has inferior performance compared with a one-dimensional search of a concentrated cost function.

Keywords—IEEE standards, measurement standards, signal analysis, frequency estimation, electronic equipment testing.

I. Introduction

In testing digital waveform recorders, an important task is to fit a sinusoidal model to recorded data and to calculate the parameters that give the best fit (in least squares). Several algorithms were evaluated in [1], [2], and algorithms have been standardized in IEEE Standard 1057 [3]. A survey of the standard can be found in [4]. The standard [3] was prepared by a working group that is part of the Waveform Measurement and Analysis Technical Committee (TC-10) of the IEEE Instrumentation and Measurement Society. TC-10 is currently working on a standard for analog-to-digital converters (IEEE-STD-1241) [5]. In IEEE-STD-1241, test methods for the signal-to-noise and distortion ratio (SINAD) and the effective number of bits (ENOB) rely on the sine wave fit [3].
In this paper, the four-parameter sine wave fit algorithm of [3, 4.1.3.3] is studied in some detail. The estimation problem considered is nonlinear in the parameters, and the four-parameter sine fit algorithm is therefore an iterative method in which each iteration produces an updated frequency estimate based on the previously estimated parameters. The algorithm does not exploit the fact that, although the optimization problem is nonlinear, it is linear in three of the four parameters. Thus, the algorithm of [3] can be expected to have inferior performance compared with algorithms that utilize this fact. In particular, it may suffer from ill-convergence when the digital waveform recorder uses coarse quantization. In the worst-case scenario, with 1-bit quantization, the magnitude of the sine wave amplitude is not observable from the data, and thus ill-convergence may occur. The recommended procedure for obtaining initial estimates virtually eliminates convergence problems in most practical cases [4].

The main interest in this paper is the properties of the obtained frequency estimate. Further, we are interested in the small-error properties, i.e. the performance of the algorithm when it is properly initialized with initial values close to the correct ones. The contributions of this paper include an alternative derivation of the four-parameter matrix algorithm [3, 4.1.3.3] in Section III. In Section IV, an alternative nonlinear least-squares algorithm with improved convergence properties is derived. This algorithm utilizes the fact that the minimized error criterion can be concentrated with respect to three of the parameters.

(The author is with the Department of Signals, Sensors and Systems, Royal Institute of Technology, SE-100 44 Stockholm, Sweden; email: ph@s3.kth.se. This work was supported in part by the Junior Individual Grant Program of the Swedish Foundation for Strategic Research.)
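The worst case mentioned above can be illustrated directly: after 1-bit quantization only the sign of each sample survives, so in the noise-free case the amplitude drops out of the data entirely. A minimal sketch (the signal values below are assumed for illustration, not taken from the report):

```python
import numpy as np

# Noise-free worst case: 1-bit quantization keeps only the sign of each
# sample, so two sine waves that differ only in amplitude produce
# identical data records; the amplitude is not observable.
t = np.arange(64)
w = 2 * np.pi * 0.09
s = np.sin(w * t + 0.3)          # unit-amplitude sine with some phase

one_bit_small = np.sign(0.1 * s)   # amplitude 0.1
one_bit_large = np.sign(10.0 * s)  # amplitude 10.0
assert np.array_equal(one_bit_small, one_bit_large)  # identical records
```

With additive noise the picture changes, since the amplitude then influences the probability of sign flips, which is why the fit remains informative for noisy quantized data.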
A theoretical performance assessment is given in Section V, where the Cramér-Rao bound for this estimation problem is derived under additive Gaussian noise. In Section VI, some illustrative numerical examples are presented. Conclusions are drawn in Section VII.

II. Measurements and Data Model

Assume that the data record contains the sequence of samples

    y_1, ..., y_N    (1)

taken at time instants t_1, ..., t_n, ..., t_N. It is further assumed that the data can be modeled by

    y_n[A, B, C, ω] = A cos ωt_n + B sin ωt_n + C    (2)

where A, B and C are unknown constants. The angular frequency ω may be known or unknown, leading to models with three or four parameters, respectively. Equivalently, we may write y_n[θ] in (2) as

    y_n[θ] = α sin(ωt_n + φ) + C

where A = α sin φ, B = α cos φ, and θ is a vector of unknown parameters. Stressing the dependence of y_n[θ] on the generic parameter vector θ turns out to be convenient in the following discussion.

III. Algorithms of IEEE-STD-1057

Standard 1057 provides algorithms both for the three-parameter (known frequency) and the four-parameter (unknown frequency) model. For easy reference, the three-parameter algorithm is reviewed below [3, 4.1.3.1]; it is exploited by the nonlinear least-squares fit in Section IV. In Section III-B, a derivation of the four-parameter algorithm in [3, 4.1.3.3] is provided that differs from the one in [3], and is therefore believed to be informative.
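The equivalence of the two parametrizations of (2) can be checked numerically. A short sketch (the parameter values are assumed for illustration):

```python
import numpy as np

def sine_model(t, A, B, C, w):
    """Evaluate the data model (2): y_n = A*cos(w*t_n) + B*sin(w*t_n) + C."""
    return A * np.cos(w * t) + B * np.sin(w * t) + C

# Amplitude/phase parametrization: A = alpha*sin(phi), B = alpha*cos(phi)
alpha, phi, C, w = 1.3, 0.4, 0.1, 2 * np.pi * 0.12
A, B = alpha * np.sin(phi), alpha * np.cos(phi)

t = np.arange(16)                       # uniform sampling, t_n = n
y1 = sine_model(t, A, B, C, w)          # Cartesian form (2)
y2 = alpha * np.sin(w * t + phi) + C    # amplitude/phase form
assert np.allclose(y1, y2)              # the two forms coincide
```

The Cartesian form is the useful one for estimation, since the model is linear in A, B and C for fixed ω.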

A. Known frequency

When the frequency ω is known, estimates of the unknown parameters in θ = x = (A, B, C)^T (where ^T denotes transpose) are obtained by a least-squares fit. Gather the data record in y,

    y = (y_1, ..., y_N)^T.    (3)

Then y obeys the linear set of equations

    y = D x    (4)

where D is the N × 3 matrix

    D = [ cos ωt_1   sin ωt_1   1
          cos ωt_2   sin ωt_2   1
          ...
          cos ωt_N   sin ωt_N   1 ].    (5)

Equation (4) is an overdetermined (N > 3) set of linear equations, with the least-squares solution x̂ (in general, ^ denotes an estimate) given by [3]

    x̂ = (D^T D)^{-1} D^T y.    (6)

B. Unknown frequency

Assume that an estimate ω̂_i of ω is available at iteration i. A Taylor series expansion around ω̂_i gives

    cos ωt_n ≈ cos ω̂_i t_n − t_n (sin ω̂_i t_n) Δω_i    (7)
    sin ωt_n ≈ sin ω̂_i t_n + t_n (cos ω̂_i t_n) Δω_i    (8)

where Δω_i = ω − ω̂_i. Inserting (7)-(8) into (2) gives

    y_n[θ] ≈ A cos ω̂_i t_n + B sin ω̂_i t_n + C − A t_n Δω_i sin ω̂_i t_n + B t_n Δω_i cos ω̂_i t_n    (9)

where θ = x_i is the parameter vector

    x_i = (A, B, C, Δω_i)^T.    (10)

Equation (9) is still nonlinear in the parameters, but may be linearized using the observation that Δω_i is small. Substituting the estimates of A and B from the previous iteration, Â_{i−1} and B̂_{i−1}, for the unknown parameters in the last two terms of (9) results in an equation that is linear in the components of x_i. Gathering the data record in y gives, similarly to (4),

    y = D̂_i x_i    (11)

where D̂_i is the matrix D augmented with an extra column, given in (12).

TABLE I: IEEE-STD-1057 four-parameter least-squares fit to sine wave data using matrix operations.
a) initial estimates x̂_0 = (Â_0, B̂_0, Ĉ_0, 0)^T and ω̂_0
b) next iteration, i = i + 1
c) ω̂_i = ω̂_{i−1} + Δω̂_{i−1}
d) create D̂_i, see (12)
e) x̂_i = (D̂_i^T D̂_i)^{-1} D̂_i^T y
f) optional steps, see [3]
g) repeat b)-f) until convergence

The basic idea behind the algorithm in [3, 4.1.3.3] is to repeatedly solve the linear system (11), i.e. at iteration i use D̂_i to obtain a new set of estimates x̂_i. The algorithm is summarized in Table I.
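The steps of Table I can be sketched compactly in code. The sketch below assumes a noise-free test signal and an initial frequency guess 1% off the true value; it is an illustration of the matrix iteration, not the standard's reference implementation (which includes the optional steps omitted here):

```python
import numpy as np

def three_param_fit(y, t, w):
    """Three-parameter least-squares fit (6) for a known frequency w."""
    D = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    return np.linalg.lstsq(D, y, rcond=None)[0]

def four_param_fit(y, t, w_init, n_iter=25):
    """Steps b)-e) of Table I, started from the frequency guess w_init."""
    w = w_init
    A, B, C = three_param_fit(y, t, w)   # prefit for the initial x_0
    for _ in range(n_iter):
        # augmented matrix D_i of (12); the fourth column is dy/d(delta_w)
        D = np.column_stack([
            np.cos(w * t), np.sin(w * t), np.ones_like(t),
            -A * t * np.sin(w * t) + B * t * np.cos(w * t)])
        A, B, C, dw = np.linalg.lstsq(D, y, rcond=None)[0]
        w = w + dw                       # step c): updated frequency
    return A, B, C, w

# Noise-free example with an assumed parameter set
t = np.arange(32.0)
A0, B0, C0, w_true = 0.7, 0.9, 0.2, 2 * np.pi * 0.11
y = A0 * np.cos(w_true * t) + B0 * np.sin(w_true * t) + C0
A, B, C, w = four_param_fit(y, t, 1.01 * w_true)   # start 1% off in frequency
assert abs(w - w_true) < 1e-6
```

Note that each iteration re-solves the full linear system, so A, B and C are refreshed together with the frequency increment.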
In (11), D̂_i is the N × 4 matrix obtained by augmenting D with a fourth column; its n-th row reads

    ( cos ω̂_i t_n   sin ω̂_i t_n   1   −Â_{i−1} t_n sin ω̂_i t_n + B̂_{i−1} t_n cos ω̂_i t_n ),   n = 1, ..., N.    (12)

The initial estimates x̂_0 and ω̂_0 are, for example, obtained by peak-picking the discrete Fourier transform (DFT) of the data, followed by a prefit using the algorithm in Section III-A. Alternative methods for finding initial estimates are discussed in [1]. The algorithm in Table I is an iterative procedure for finding the parameters that minimize the sum of squared differences

    Σ_{n=1}^{N} (y_n − y_n[θ])².    (13)

In (13), y_n[θ] is given by (2). In each iteration, an updated frequency estimate ω̂_i is obtained from ω̂_{i−1} and Δω̂_{i−1}, estimates that in turn depend on the estimated values of A and B. The recommended procedure for obtaining initial estimates virtually eliminates convergence problems in most practical cases [4]. However, convergence problems may still occur, especially for short noisy data records or signals at low frequency (see Section VI). An alternative to the four-parameter algorithm in [3] is derived below.

IV. Nonlinear Least Squares by Grid Search

Using (4)-(5), the criterion (13) can be rewritten as

    (y − D x)^T (y − D x)    (14)

where x = (A, B, C)^T and D is defined in (5). The criterion (14) can be concentrated with respect to x, and ω can then be found by a one-dimensional search for the maximum of g(ω), i.e. [6], [7]

    ω̂ = arg max_ω g(ω)    (15)

where

    g(ω) = y^T D (D^T D)^{-1} D^T y.    (16)

The three parameters in x are then obtained from the least-squares fit in Section III-A, with ω there replaced by the ω̂ given by (15). The maximization of (16) may be implemented by iterative methods, or simply by a one-dimensional grid search, as presented in Table II. The frequency grid may

DEPARTMENT OF SIGNALS, SENSORS AND SYSTEMS, ROYAL INSTITUTE OF TECHNOLOGY, JUNE

TABLE II: Nonlinear least-squares fit by grid search.
a) frequency grid ω_i, i = 1, ..., M
b) for i = 1 to M
c)   create D from ω_i, see (5)
d)   g_i = y^T D (D^T D)^{-1} D^T y
e) end
f) ω̂ = ω_k where g_k = max{ g_i : i = 1, ..., M }
g) x̂ is obtained from (6)

be obtained by peak-picking the DFT of the data, with ω_1 corresponding to the frequency bin to the left of the maximum and ω_M to the bin on the opposite side. The number of grid points M is chosen according to the desired resolution.

V. Cramér-Rao Bound

The algorithms in Tables I-II are expected to produce consistent estimates of the unknown parameters; that is, the estimation error is small for large N. In the next section, the performance of the considered algorithms is assessed by comparing their frequency error variance with the Cramér-Rao bound (CRB) [7], which is a lower bound on the variance of any unbiased estimator. Assume that the data are given by

    z_n = y_n[θ] + e_n    (17)

where e_n is zero-mean white Gaussian noise with variance σ². The CRB is given by the inverse of the information matrix I(θ), whose elements are [7]

    [I(θ)]_{k,p} = (1/σ²) Σ_{n=1}^{N} (∂y_n[θ]/∂θ_k)(∂y_n[θ]/∂θ_p).    (18)

With θ = (A, B, C, ω)^T (that is, for k, p = 1, ..., 4), the derivatives in (18) follow from (2):

    ∂y_n[θ]/∂A = cos ωt_n    (19)
    ∂y_n[θ]/∂B = sin ωt_n    (20)
    ∂y_n[θ]/∂C = 1    (21)
    ∂y_n[θ]/∂ω = −A t_n sin ωt_n + B t_n cos ωt_n.    (22)

Thus, I(θ) can be written as

    I(θ) = (1/σ²) [ I_AA  I_AB  I_AC  I_Aω
                    I_AB  I_BB  I_BC  I_Bω
                    I_AC  I_BC  I_CC  I_Cω
                    I_Aω  I_Bω  I_Cω  I_ωω ]    (23)

where, with all sums running over n = 1, ..., N,

    I_AA = Σ cos² ωt_n    (24)
    I_BB = Σ sin² ωt_n    (25)
    I_CC = Σ 1 = N    (26)
    I_ωω = Σ (−A t_n sin ωt_n + B t_n cos ωt_n)²    (27)
    I_AB = Σ cos ωt_n sin ωt_n    (28)
    I_AC = Σ cos ωt_n    (29)
    I_BC = Σ sin ωt_n    (30)
    I_Aω = Σ cos ωt_n (−A t_n sin ωt_n + B t_n cos ωt_n)    (31)
    I_Bω = Σ sin ωt_n (−A t_n sin ωt_n + B t_n cos ωt_n)    (32)
    I_Cω = Σ (−A t_n sin ωt_n + B t_n cos ωt_n).
    (33)

One may note that the information matrix is independent of the offset C. We are mainly interested in the frequency estimation error, i.e. CRB(ω̂) = [I(θ)^{-1}]_{4,4}. Decompose I(θ) as

    I(θ) = [ I_1      I_12
             I_12^T   I_22 ]    (34)

where I_1 is the upper-left 3 × 3 block and I_22 = I_ωω/σ². Then [8]

    CRB(ω̂) = [I(θ)^{-1}]_{4,4} = 1 / (I_22 − I_12^T I_1^{-1} I_12).    (35)

For uniform sampling t_n = n/f_s, with f_s the sampling frequency, an approximation of the CRB of the absolute frequency f̂ = f_s ω̂/(2π), valid for frequencies f well inside (0, f_s/2) and large N, is

    CRB(f̂) = (f_s/2π)² · 2σ² · 12 / ((A² + B²) N(N² − 1))
            = (f_s/2π)² · 12 / (SNR · N(N² − 1))    (36)

where SNR denotes the signal-to-noise ratio, SNR = (A² + B²)/(2σ²). The asymptotic result (36) depends only on the SNR; in particular, it is independent of the absolute frequency and the initial phase of the sine wave.

VI. Numerical Examples

A. Monte Carlo simulations

In order to verify the performance of the considered algorithms, the theoretical CRB is compared with the normalized sum of squared errors obtained from Monte Carlo simulations based on independent realizations. The empirical mean square error (mse) is computed as

    mse(f̂) = (1/L) Σ_{l=1}^{L} (f̂_l − f)²    (37)

where L is the number of realizations and f̂_l denotes the frequency estimate (f̂ = f_s ω̂/(2π)) in the l-th realization.

B. Data records with random initial phase

In Figure 1, the four-parameter matrix algorithm in [3] (Table I) and the nonlinear least-squares (NLS) method in

Table II are evaluated for short records of N = 16 noisy samples, with sampling frequency f_s = 1. A sinusoidal signal with random initial phase (uniformly distributed over [0, 2π)) in additive white Gaussian noise was generated, with a SNR of dB. As the initial frequency estimator, a DFT-based estimator was used with four-times zero-padding and peak-finding by triple parabolic interpolation. The grid search for the algorithm in Table II was performed (rather arbitrarily) over M = 16 points in the symmetric interval, of length corresponding to twice the frequency resolution of the DFT, around the maximum of the DFT. The initial estimates of the nuisance parameters required for the algorithm in Table I were obtained using the three-parameter fit with ω̂_0 in place of ω. The four-parameter matrix algorithm was aborted after 5 iterations.

Figure 1 shows the mse (37) as a function of the frequency f; the asymptotic CRB (36) is shown as a reference. From the figure one may note the excellent performance of both algorithms for frequencies well inside (0, 0.5). For frequencies near 0 or 0.5, the proposed algorithm in Table II outperforms the one in [3].

[Fig. 1. Mean square frequency estimation error versus frequency for a noisy sinusoidal signal with random initial phase (N = 16; curves: IEEE-1057, NLS, asymptotic CRB).]

C. Data records with fixed initial phase

In order to compare the performance of the algorithms with the exact CRB (35), the above experiment was repeated for low-frequency signals (f < 0.1). The conditions were as in the experiment above, but now the initial phase φ was fixed and set (rather arbitrarily) to φ = π/7.

[Fig. 2. Mean square frequency estimation error versus frequency for a noisy sinusoidal signal with fixed initial phase (N = 16; curves: IEEE-1057, NLS, exact CRB).]
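The Monte Carlo procedure of (37) can be sketched as follows, here using the grid-search NLS of Table II as the estimator. The signal parameters, SNR, grid and number of realizations below are assumptions chosen for illustration, not the settings of the report's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def nls_freq(y, t, grid):
    """Grid-search NLS of Table II: maximize g(w) = y^T D (D^T D)^{-1} D^T y."""
    def g(w):
        D = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
        return y @ (D @ np.linalg.solve(D.T @ D, D.T @ y))
    return grid[np.argmax([g(w) for w in grid])]

N, f_true, sigma = 16, 0.2, 0.1
t = np.arange(float(N))                            # f_s = 1
grid = 2 * np.pi * np.linspace(0.15, 0.25, 201)    # assumed frequency grid

L = 200                                            # number of realizations
sq_err = []
for _ in range(L):
    phi = rng.uniform(0.0, 2 * np.pi)              # random initial phase
    y = np.sin(2 * np.pi * f_true * t + phi) + sigma * rng.standard_normal(N)
    f_hat = nls_freq(y, t, grid) / (2 * np.pi)
    sq_err.append((f_hat - f_true) ** 2)
mse = np.mean(sq_err)                              # empirical mse, eq. (37)

snr = 1.0 / (2 * sigma ** 2)                       # unit amplitude: SNR = 1/(2*sigma^2)
crb = 12 / (snr * N * (N ** 2 - 1)) / (2 * np.pi) ** 2   # asymptotic CRB (36), f_s = 1
assert 0.2 * crb < mse < 5 * crb                   # mse on the order of the CRB
```

For a frequency well inside (0, 0.5) and a moderate SNR, as here, the empirical mse lands close to the asymptotic bound, consistent with the behaviour reported in Figure 1.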
The comparison with the exact CRB in Figure 2 reveals that the empirical mse can be interpreted as an empirical variance for frequencies above f = 0.03 (for the NLS) and f = 0.045 (for IEEE-1057), respectively. For frequencies below f = 0.03, the empirical mse increases for both methods, indicating that for (very) low frequencies both methods suffer from bias errors. For N = 16, the frequency f = 0.03 corresponds to less than half a period of a uniformly sampled sine wave.

D. Quantized data records

The sensitivity to coarse quantization is studied in this experiment. The noisy sine wave of Section VI-B, i.e. with random initial phase, is quantized with one and two bits, respectively. The results for this scenario are displayed in Figure 3; they are similar to those in Figure 2. Repeating the experiment with noise-free quantized data gives performance similar to that in Figure 3, except for the IEEE-1057 algorithm applied to 1-bit data, where the performance coincides with that of the NLS for f > 0.055 (an improvement compared with f > 0.065 in Figure 3).

[Fig. 3. Mean square frequency estimation error versus frequency for quantized noisy data, 1 and 2 bits, respectively (N = 16; curves: IEEE-1057 and NLS for each wordlength).]

VII. Conclusions

The four-parameter sine wave fit algorithm in IEEE Standard 1057 [3, 4.1.3.3] has been studied in some detail. In most practical applications it appears to be an excellent method for parameter estimation, and for reconstruction of a sine wave from noisy or quantized data. Its performance (in terms of frequency estimation error) has been compared

with theoretical bounds on accuracy, and with the performance of a nonlinear least-squares (NLS) approach. The comparison reveals that its performance is close to optimal in a Gaussian scenario, but also that it is inferior to the NLS under severe experimental conditions, characterized by a small number of samples of a low-frequency sine wave.

It is worth noting that the performance of the four-parameter algorithm has been studied in a small-error context, i.e. it relies on precise initial estimates. In this paper, the initial estimates were obtained following the recommended procedure in [3]. In general, the convergence properties of the four-parameter algorithm depend on the initial estimates, whereas the NLS relies on a one-dimensional grid search, so its convergence is ensured.

References

[1] J. Kuffel, T. McComb, and Malewski, "Comparative evaluation of computer methods for calculating the best fit sinusoid to the high purity sine wave," IEEE Transactions on Instrumentation and Measurement, Vol. IM-36, No. 2, June 1987, pp. 418-422.
[2] T.R. McComb, J. Kuffel, and B.C. Le Roux, "A comparative evaluation of some practical algorithms used in the effective bits test of waveform recorders," IEEE Transactions on Instrumentation and Measurement, Vol. 38, No. 1, February 1989, pp. 37-42.
[3] IEEE Standard for Digitizing Waveform Recorders, IEEE Standard 1057, 1994.
[4] T.E. Linnenbrink, "Waveform recorder testing: IEEE Standard 1057 and you," Instrumentation and Measurement Technology Conference, IMTC/95, 1995, pp. 241-246.
[5] S.J. Tilden, T.E. Linnenbrink, and P.J. Green, "Overview of IEEE-STD-1241 - Standard for terminology and test methods for analog-to-digital converters," Proc. 16th IEEE Instrumentation and Measurement Technology Conference, IMTC/99, Vol. 3, 1999, pp. 1498-1503.
[6] P. Stoica and R. Moses, Introduction to Spectral Analysis, Prentice-Hall, Upper Saddle River, NJ, 1997.
[7] S.M.
Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice-Hall, Upper Saddle River, NJ, 1993.
[8] T. Söderström and P. Stoica, System Identification, Prentice-Hall, Hemel Hempstead, UK, 1989.

Peter Händel (S'88-M'94-SM'98) received the M.Sc. degree in engineering physics and the Lic.Eng. and Ph.D. degrees in automatic control, all from the Department of Technology, Uppsala University, Uppsala, Sweden, in 1987, 1991, and 1993, respectively. During 1987-88, he was a Research Assistant at The Svedberg Laboratory, Uppsala University. Between 1988 and 1993, he was a Teaching and Research Assistant with the Systems and Control Group, Uppsala University. In 1996, he was appointed Docent at Uppsala University. During 1993-97, he was with the Research and Development Division, Ericsson Radio Systems AB, Kista, Sweden. During the academic year 1996/97, Dr. Händel was a Visiting Scholar at the Signal Processing Laboratory, Tampere University of Technology, Tampere, Finland. In 1998, he was appointed Docent at that university. Since August 1997, he has been an Associate Professor with the Department of Signals, Sensors and Systems, Royal Institute of Technology, Stockholm, Sweden. Dr. Händel is a former President of the IEEE Finland joint Signal Processing and Circuits and Systems Chapter. He is a registered engineer (EUR ING).