An Efficient and Fast Algorithm for Estimating the Parameters of Two-Dimensional Sinusoidal Signals


isid/ms/8/ November 6, 8
statmath/eprints

Swagata Nandi
Anurag Prasad
Debasis Kundu

Indian Statistical Institute, Delhi Centre
7, SJSS Marg, New Delhi 6, India


Abstract

In this paper we propose a computationally efficient algorithm to estimate the parameters of a 2-D sinusoidal model in the presence of stationary noise. The estimators obtained by the proposed algorithm are consistent and asymptotically equivalent to the least squares estimators. Monte Carlo simulations are performed for different sample sizes, and it is observed that the performance of the proposed method is quite satisfactory and equivalent to that of the least squares estimators. The main advantage of the proposed method is that the estimators can be obtained in only a finite number of iterations. In fact, it is shown that, starting from the average of periodogram estimators, the proposed algorithm converges in three steps only. One synthesized texture data set and one original texture data set have been analyzed using the proposed algorithm for illustrative purposes.

Keywords: Sinusoidal signals; Least squares estimators; Asymptotic distribution; Two-dimensional frequency estimation; Bayesian Information Criterion; Efficient algorithm.

Theoretical Statistics and Mathematics Unit, Indian Statistical Institute, Delhi Centre, Pin 6, INDIA.
Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, Kanpur, Pin 86, INDIA.
Corresponding Author: Swagata Nandi, nandi@isid.ac.in.

1 Introduction

In this paper we consider the problem of estimating the parameters of the following two-dimensional (2-D) sinusoidal signal:

    y(m,n) = A cos(λm + µn) + B sin(λm + µn) + X(m,n).    (1)

Here A and B are unknown real numbers, known as amplitudes, and λ and µ are unknown frequencies. It is assumed that A² + B² > 0 and λ, µ ∈ (0, π). The additive error {X(m,n)} comes from a stationary random field. The explicit assumptions on {X(m,n)}, and also on the model parameters, are provided in section 2. The main problem is to estimate the unknown parameters, namely A, B, λ and µ, given a sample {y(m,n); m = 1,...,M, n = 1,...,N}. The first term on the right hand side of (1) is known as the signal component and the second term as the noise or error component.

The detection and estimation of the signal component in the presence of additive noise is an important and classical problem in statistical signal processing. In particular, the 2-D sinusoidal model has received considerable attention in the signal processing literature because of its widespread applicability in texture synthesis. Francos et al. [2] first observed that the 2-D sinusoidal model can be used quite effectively to analyze symmetric texture images. For some of the theoretical developments of the 2-D sinusoidal or related models, the readers are referred to Rao et al. [8], Zhang and Mandrekar [10] and Kundu and Nandi [4].

The 2-D frequency estimation problem is well known to be numerically difficult, and it becomes more severe when the number of components p is large. The most efficient estimators, as expected, are the least squares estimators. The orders of convergence of the least squares estimators of the λs and the µs are O_p(M^{-3/2}N^{-1/2}) and O_p(M^{-1/2}N^{-3/2}) respectively. Here U = O_p(M^{-δ₁}N^{-δ₂}) means that M^{δ₁}N^{δ₂}U is bounded in probability. Finding the least squares estimators tends to be computationally intensive, as the functions to be optimized are highly non-linear in the parameters.
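For concreteness, data from model (1) can be generated as in the following sketch. The amplitudes, frequencies and noise level used here are illustrative assumptions, not values taken from the paper, and the i.i.d. Gaussian error field is the simplest special case of the stationary random field allowed for {X(m,n)}.

```python
import numpy as np

# Generate {y(m, n)} from the single-component model above.  The amplitudes,
# frequencies and noise level are illustrative assumptions, and the i.i.d.
# Gaussian errors are a special case of a stationary random field.

A, B = 2.0, 1.0
lam, mu = 1.5, 0.5                      # frequencies in (0, pi)
M, N = 64, 64

rng = np.random.default_rng(0)
m = np.arange(1, M + 1)[:, None]        # m = 1, ..., M
n = np.arange(1, N + 1)[None, :]        # n = 1, ..., N
signal = A * np.cos(lam * m + mu * n) + B * np.sin(lam * m + mu * n)
X = 0.5 * rng.standard_normal((M, N))   # additive error field
y = signal + X
```

Note that the noiseless signal is bounded in absolute value by √(A² + B²), which is why A² + B² > 0 is needed for the signal component to be present at all.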
Even in one dimension, it is known that the least squares surface has several local minima; see Rice and Rosenblatt [9]. Recently, Prasad et al. [6] proposed a sequential procedure to estimate the unknown parameters of the one-dimensional sinusoidal model, which can be easily extended to model (1); it reduces the computational time considerably. At each stage the standard Newton-Raphson or Gauss-Newton method may be used for optimization purposes, but a proof of convergence of the Newton-Raphson or Gauss-Newton method is not known, and it is not easy to establish in this case either.

In this paper we propose a new algorithm to estimate the unknown parameters of the 2-D sinusoidal model when the number of components is known. It is motivated by the one-dimensional algorithms proposed by Bai et al. [1] and Nandi and Kundu [5], and by the one-dimensional sequential procedure proposed by Prasad et al. [6]. The method uses correction terms based on the data vector and the available frequency estimators, similarly to the Newton-Raphson or Gauss-Newton method, but naturally the correction term is different. It is possible to choose the initial guesses in such a manner that, within a fixed number of steps, the iterative procedure produces efficient estimators which have the same rate of convergence as the least squares estimators. In the proposed algorithm we do not use the full sample at each step: at the first step we use a fraction of it, and the effective sample size is increased gradually until the whole data set is used at the last step. The method can be easily extended to the model (8), when more than one component is present, using the sequential estimation procedure, similarly as in Prasad et al. [6]. It is shown that if we start the algorithm with the average of periodogram estimators (the details will be explained later) as initial guesses, then after three steps it produces estimators which have the same order of convergence as the least squares estimators.

We perform some simulation studies to examine the behavior of the proposed algorithm for different sample sizes, and also to compare its performance with that of the least squares estimators. It is observed that the performances of the proposed estimators and the least squares estimators are very similar in terms of biases and mean squared errors. But the main advantage of the proposed

estimators is that they can be obtained in three steps only, and the computational time is much less than that of the least squares computation. Moreover, a proof of convergence of the least squares method is not available in the literature, whereas our method produces efficient estimators almost surely, from the above mentioned starting values, in three steps only. For illustrative purposes, we have also analyzed one real texture data set and one synthesized data set. It is observed that the performance of the estimators obtained by the proposed method is quite satisfactory.

The rest of the paper is organized as follows. In section 2 we provide the model assumptions and the algorithm. Numerical results are provided in section 3. The data analysis results are provided in section 4, and the conclusions appear in section 5. All the necessary theoretical results are provided in the appendix.

2 Model Assumptions and Proposed Algorithm

2.1 Assumptions

In this subsection we provide the necessary assumptions on the model parameters and particularly on the errors. It is assumed that the observed data {y(m,n); m = 1,...,M, n = 1,...,N} are of the form (1). The additive error {X(m,n)} comes from a stationary random field and satisfies the following Assumption 1.

Assumption 1: Let Z denote the set of integers. It is assumed that {X(m,n); m, n ∈ Z} can be represented as follows:

    X(m,n) = Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} a(j,k) e(m-j, n-k),

where the a(j,k)s are real constants such that

    Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} |a(j,k)| < ∞,

and {e(m,n); m, n ∈ Z} is a double array sequence of i.i.d. random variables with mean zero and finite variance σ².

2.2 Proposed Algorithm

The proposed algorithm requires initial estimators of the frequencies which are consistent, although their order of convergence may be low. The method to obtain the initial estimators will be discussed later in this section. The algorithm gradually improves upon the initial estimators in a finite number of steps. The final estimators, obtained at the last step, have the same asymptotic distribution as the least squares estimators. We provide the necessary theoretical results in the following theorem.

Theorem 1. Suppose (λ̃, µ̃) are consistent estimators of (λ, µ), and (λ̂, µ̂) are obtained from (λ̃, µ̃) using the following equations:

    λ̂ = λ̃ + (12/M²) Im[P^(λ)_MN / Q_MN],    µ̂ = µ̃ + (12/N²) Im[P^(µ)_MN / Q_MN],    (2)

where

    P^(λ)_MN = Σ_{t=1}^{M} Σ_{s=1}^{N} (t - M/2) y(t,s) e^{-i(λ̃t + µ̃s)},    (3)

    P^(µ)_MN = Σ_{t=1}^{M} Σ_{s=1}^{N} (s - N/2) y(t,s) e^{-i(λ̃t + µ̃s)},    (4)

    Q_MN = Σ_{t=1}^{M} Σ_{s=1}^{N} y(t,s) e^{-i(λ̃t + µ̃s)},    (5)

and Im[·] denotes the imaginary part of a complex number. If λ̃ - λ = O_p(M^{-1-δ₁}N^{-δ₂}) and µ̃ - µ = O_p(M^{-δ₁}N^{-1-δ₂}), where δᵢ ∈ (0, 1/2], i = 1, 2, then

(i) λ̂ - λ = O_p(M^{-1-2δ₁}N^{-δ₂}), if δ₁ ≤ 1/4 and δ₂ > 1/4, and µ̂ - µ = O_p(M^{-δ₁}N^{-1-2δ₂}), if δ₂ ≤ 1/4 and δ₁ > 1/4;

(ii) D(λ̂ - λ, µ̂ - µ)^T →_d N₂(0, 24σ²Σ), if δ₁ > 1/4 and δ₂ > 1/4,

where

    Σ = (c/ρ) I₂,    D = diag(M^{3/2}N^{1/2}, M^{1/2}N^{3/2}),

    ρ = A² + B²    and    c = |Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} a(j,k) e^{-i(λj + µk)}|².

Proof. See the Appendix.

Based on the above result, we provide the algorithm to find efficient estimators of λ and µ. The main idea of the algorithm is to use Theorem 1 step by step to improve the estimates. Moreover, we do not use the whole sample at each step, but rather a fraction of it, chosen judiciously, similarly as in Bai et al. [1] or Nandi and Kundu [5]. Therefore, if at the r-th step we use the sample size (M_r, N_r), then the r-th step estimators λ̂^(r) and µ̂^(r) are computed from the (r-1)-th step estimators λ̂^(r-1) and µ̂^(r-1) by

    λ̂^(r) = λ̂^(r-1) + (12/M_r²) Im[P^(λ)_{M_r N_r} / Q_{M_r N_r}],    (6)

    µ̂^(r) = µ̂^(r-1) + (12/N_r²) Im[P^(µ)_{M_r N_r} / Q_{M_r N_r}],    (7)

where P^(λ)_{M_r N_r}, P^(µ)_{M_r N_r} and Q_{M_r N_r} are obtained from (3), (4) and (5) by replacing M, N, λ̃ and µ̃ with M_r, N_r, λ̂^(r-1) and µ̂^(r-1), respectively.

For better understanding, let us look at the algorithm when the initial estimators of λ and µ are of the orders O_p(M^{-1}N^{-1/2}) and O_p(M^{-1/2}N^{-1}) respectively, i.e. λ̂^(0) - λ = O_p(M^{-1}N^{-1/2}) and µ̂^(0) - µ = O_p(M^{-1/2}N^{-1}). A similar algorithm can easily be developed when the initial estimators are of the orders O_p(M^{-1-δ₁}N^{-δ₂}) and O_p(M^{-δ₁}N^{-1-δ₂}) respectively, for any δᵢ ∈ (0, 1/2], i = 1, 2.

Observe that it is possible to obtain initial estimators λ̂^(0) and µ̂^(0) of λ and µ respectively from the data {y(m,n); m = 1,...,M, n = 1,...,N}. Let us consider the data vector {y(1,n),...,y(M,n)}

for any fixed n ∈ {1,...,N}, and model it using the 1-D sinusoidal model. Suppose the periodogram estimate of λ over the Fourier frequencies, obtained from this data stream, is denoted by λ̂_n, which is O_p(M^{-1}); see Rice and Rosenblatt [9]. We find λ̂_n for n = 1,...,N separately, and take their average to arrive at the initial estimate λ̂^(0) of λ, i.e.

    λ̂^(0) = (1/N) Σ_{n=1}^{N} λ̂_n,

which is O_p(M^{-1}N^{-1/2}). Similarly, considering the data {y(m,1),...,y(m,N)} for m ∈ {1,...,M}, it is possible to obtain the initial estimate µ̂^(0) of µ, which is of the order O_p(M^{-1/2}N^{-1}). Thus we have initial estimators of λ and µ for which λ̂^(0) - λ = O_p(M^{-1}N^{-1/2}) and µ̂^(0) - µ = O_p(M^{-1/2}N^{-1}). We start our algorithm with these initial guesses. It may be mentioned that the choices of M₁ and M₂ are not fixed. Now we provide the exact algorithm for λ; for µ it can be obtained similarly.

Algorithm for estimating λ:

Step 1: With r = 1, choose M₁ = M^{0.8}, N₁ = N, and take λ̂^(0) as above, so that λ̂^(0) - λ = O_p(M^{-1}N^{-1/2}) = O_p(M₁^{-5/4}N₁^{-1/2}). Applying part (i) of Theorem 1, we obtain

    λ̂^(1) - λ = O_p(M₁^{-3/2}N₁^{-1/2}) = O_p(M^{-6/5}N^{-1/2}).

Step 2: With r = 2, let M₂ = M^{0.9}, N₂ = N. Then λ̂^(1) - λ = O_p(M^{-6/5}N^{-1/2}) = O_p(M₂^{-4/3}N₂^{-1/2}). Now, by part (ii) of Theorem 1,

    λ̂^(2) - λ = O_p(M₂^{-3/2}N₂^{-1/2}) = O_p(M^{-27/20}N^{-1/2}).

Step 3: With r = 3, let M₃ = M, N₃ = N. Then λ̂^(2) - λ = O_p(M^{-27/20}N^{-1/2}) = O_p(M₃^{-27/20}N₃^{-1/2}).
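The correction step and the three-step subsample schedule above can be sketched as follows. The 12/M_r² factor follows the reconstructed form of the update equations, and the test signal and its parameters are illustrative assumptions; the µ-update is symmetric, with weights s - N_r/2 and factor 12/N_r².

```python
import numpy as np

# Correction step for lambda on an Mr x Nr subsample, followed by the
# three-step schedule Mr = M^0.8, M^0.9, M (Nr = N throughout).  The 12/Mr^2
# factor is taken from the reconstructed update equations above.

def lambda_step(y, lam, mu, Mr, Nr):
    t = np.arange(1, Mr + 1)[:, None]              # t = 1, ..., Mr
    s = np.arange(1, Nr + 1)[None, :]              # s = 1, ..., Nr
    E = np.exp(-1j * (lam * t + mu * s))           # e^{-i(lam t + mu s)}
    sub = y[:Mr, :Nr]
    Q = np.sum(sub * E)                            # Q_{Mr Nr}
    P = np.sum((t - Mr / 2) * sub * E)             # P^(lambda)_{Mr Nr}
    return lam + (12.0 / Mr**2) * np.imag(P / Q)

def three_step_lambda(y, lam0, mu0):
    M, N = y.shape
    for Mr in (int(M**0.8), int(M**0.9), M):       # steps r = 1, 2, 3
        lam0 = lambda_step(y, lam0, mu0, Mr, N)
    return lam0

# Noiseless demo: true (lambda, mu) = (1.2, 0.6); start 0.02 away from lambda.
M = N = 100
t = np.arange(1, M + 1)[:, None]
s = np.arange(1, N + 1)[None, :]
y = 2.0 * np.cos(1.2 * t + 0.6 * s) + 1.0 * np.sin(1.2 * t + 0.6 * s)
lam_hat = three_step_lambda(y, 1.2 + 0.02, 0.6)
```

On this noiseless example the three steps shrink the initial error in λ by several orders of magnitude, illustrating how each step sharpens the estimate at a growing effective sample size.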

Again, by part (ii) of Theorem 1,

    M^{3/2}N^{1/2}(λ̂^(3) - λ) →_d N(0, 24σ²c/ρ).

Therefore, starting from the initial estimate λ̂^(0), whose order of convergence is O_p(M^{-1}N^{-1/2}), we obtain after Step 1 an improved estimator with order of convergence O_p(M^{-6/5}N^{-1/2}); at Step 1 we have not used the full sample. At Step 2 the improved estimator has order of convergence O_p(M^{-27/20}N^{-1/2}). Finally, at Step 3, we use the complete sample and obtain an efficient estimator of λ which has the same order of convergence as the least squares estimator, i.e. O_p(M^{-3/2}N^{-1/2}). As mentioned before, the choices of M₁ and M₂ are not fixed; for example, another choice can be M₁ = M^{0.83} and M₂ = M^{0.9}. Several other choices are also available which produce an efficient estimator of λ with the same order of convergence as above. It is observed in our simulation experiments that the performance does not depend much on the particular choices of M₁ and M₂. Similarly, we can obtain an efficient estimator of µ which has the same order of convergence as the least squares estimator. Now we describe how to extend our method to multiple-component signals.

2.3 More than One Component

When more than one component is present in the signal, the model can be written as

    y(m,n) = Σ_{k=1}^{p} [A_k cos(λ_k m + µ_k n) + B_k sin(λ_k m + µ_k n)] + X(m,n).    (8)

Here the number of components p is assumed to be known, and X(m,n) is the same as before. We have the following additional assumptions for this model.

Assumption 2: The frequency pairs are distinct, i.e. (λ_i, µ_i) ≠ (λ_j, µ_j) for i ≠ j, and (λ_i, µ_i) ∈ (0,π) × (0,π) for i = 1,...,p.
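Returning to the start of the algorithm, the initial estimator described above (a 1-D periodogram maximizer for each data column, then an average over columns) can be sketched as follows; the demo parameters are illustrative assumptions.

```python
import numpy as np

# Initial estimator of lambda: for each column n, maximize the 1-D periodogram
# of {y(1,n), ..., y(M,n)} over the Fourier frequencies 2*pi*j/M, then average
# the N maximizers.

def initial_lambda(y):
    M, N = y.shape
    freqs = 2 * np.pi * np.arange(1, M // 2) / M       # interior Fourier frequencies
    t = np.arange(1, M + 1)
    E = np.exp(-1j * np.outer(freqs, t))               # len(freqs) x M
    est = np.empty(N)
    for n in range(N):
        I = np.abs(E @ y[:, n]) ** 2 / M               # periodogram of column n
        est[n] = freqs[np.argmax(I)]
    return est.mean()                                  # average over the N columns

# Noiseless demo with true lambda = 1.0, mu = 0.5:
M = N = 64
t = np.arange(1, M + 1)[:, None]
n = np.arange(1, N + 1)[None, :]
y_demo = 2.0 * np.cos(1.0 * t + 0.5 * n) + 1.5 * np.sin(1.0 * t + 0.5 * n)
lam0 = initial_lambda(y_demo)
```

Each per-column maximizer is only accurate to within the Fourier-frequency spacing, of order M^{-1}; it is the averaging over the N columns that reduces the order of the initial estimator to O_p(M^{-1}N^{-1/2}).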

11 Assumption. The amplitudes satisfy the following restriction, < A p + B p <... < A + B < K <, for some K >. Using the sequential procedure similarly as suggested in Prasad et al. 6], we can obtain estimators of the parameters of the model in (8). Initial estimators are obtained at each step and improved upon using the proposed algorithm. Proceeding in this way, we can obtain estimators of all the parameters. It may be mentioned that since ( λ i, µ i ) and ( λ j, µ j ) are asymptotically independent for i j, the sequential procedure works as the one dimensional method. 3 Numerical Results In this section, we present some numerical results to see the performance of the proposed algorithm for different sample sizes. All the computations were performed at the Indian Institute of Technology Kanpur, using the random number generator RAN of Press et al. 7]. All the programs are written in FORTRAN-77. We consider the following model; Model: y(m,n) = A k cos(mλ k + nµ k ) + B k sin(mλ k + nµ k )] + X(m,n). k= Here A =.5, B =.5, λ =., µ =., A =., B =., λ =., µ =. and X(m,n) = e(m,n) + e(m,n) + e(m,n ), (9) where e(m,n) s are i.i.d. normal random variables with mean and variance σ. We have considered different sample sizes, M = N = 5, 75,, and the error variance, σ =.5. In each case we have obtained initial guess of frequencies by computing the average of the periodogram estimates. We then apply the proposed algorithm to improve upon the estimates. In each case we have repeated the procedure for times and reported the average estimates and the corresponding mean squared errors of the proposed estimates. We have reported the average 9

and the corresponding mean squared errors of the usual least squares estimates, obtained using the sequential procedure proposed in Prasad et al. [6]. To compute the least squares estimates we have used the optimization routine available in Press et al. [7]. For comparison purposes, we have also reported the asymptotic variances of the least squares estimators. The results are reported in Tables 1 - 4.

Some points are easily noticed from the tables. As the sample size increases, biases and MSEs decrease, as expected, for both the least squares estimators and the estimators obtained using the proposed algorithm; this verifies the consistency of the estimators. Biases of the linear parameters are larger than those of the non-linear parameters. In both cases the mean squared errors are quite close to the corresponding asymptotic variances. Although the proposed estimators are obtained in only three steps, their performance is almost the same as that of the least squares estimators, and the required computational time is much less. Moreover, the proposed algorithm does not require any stopping criterion, unlike standard optimization methods, and it converges almost surely.

4 Data Analysis

In this section we present two data analyses for illustrative purposes. One is an original texture data analysis, and the other is a synthesized texture analysis in which two adjacent frequency pairs are close to each other.

Real Texture Data: We obtained the texture in Figure 3 from pbourke. The usual procedure to get an idea of the initial guesses of the frequencies is to plot the periodogram function of the data (see Figure 1) and look at the peaks on the surface of the periodogram. From an observation of the periodogram we see that there are many adjacent peaks on the surface; it is not easy to guess the correct number of components from the periodogram. Here the number

of components is chosen by minimizing the Bayesian Information Criterion (BIC), given as

    BIC(k) = MN ln σ̂²_k + (4k + 1) ln(MN),    (10)

where k is the number of components and σ̂²_k is the corresponding innovation variance. We plot BIC(k) as a function of k. It is observed that BIC takes its minimum value at k = 6 (see Figure 2); therefore we take the estimate of the number of components p as p̂ = 6. We have fitted the model (8) with p = 6 to the texture data in Figure 3. The original and estimated textures are plotted in Figure 3; the estimated texture matches the original quite well.

Figure 1: Periodogram of the Original Texture

Synthesized Texture Data: Now we analyze a texture signal generated from the following model for m = 1,..., and n = 1,...,:

    y(m,n) = 5. cos(.5m + .n) + 5. sin(.5m + .n) + . cos(.4m + .9n) + . sin(.4m + .9n) + X(m,n).    (11)

The noise structure X(m,n) follows (9), where the e(m,n)s are i.i.d. with mean zero. The noisy texture is plotted in Figure 4, and the original texture (without the noise component X(m,n))

Figure 2: BIC as a function of the number of components

Figure 3: The Original and Estimated Texture
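The BIC order-selection rule above can be sketched as follows. The (4k + 1) penalty (four parameters per sinusoidal component plus the innovation variance) is a reconstruction, and the residual variances below are mock values used only to exercise the formula; in practice σ̂²_k comes from the residuals of a fitted k-component model.

```python
import numpy as np

# Order selection by BIC.  The (4k + 1) penalty is a reconstruction, and the
# residual variances are mock values for illustration only.

def bic(M, N, k, sigma2_k):
    return M * N * np.log(sigma2_k) + (4 * k + 1) * np.log(M * N)

M = N = 100
sigma2 = {1: 2.40, 2: 1.10, 3: 0.52, 4: 0.50, 5: 0.499}   # mock residual variances
scores = {k: bic(M, N, k, s2) for k, s2 in sigma2.items()}
k_hat = min(scores, key=scores.get)                        # minimizer of BIC(k)
```

The estimate k̂ is the minimizer of BIC(k): the criterion stops decreasing once adding a component no longer reduces the residual variance enough to pay the ln(MN) penalty.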

is plotted in Figure 5. The problem is to extract the original texture given only the noisy texture. Note that here the two frequency pairs, viz. (.5, .) and (.4, .), are very close to each other. When we plot the periodogram of the data (see Figure 6), we observe a single peak. This obscures the fact that originally there were two frequency components, and thus makes it difficult to provide correct initial guesses of the frequencies. But by using our algorithm we obtain the following estimates of the unknown parameters:

    Â₁ = 4.737, B̂₁ = 4.999, λ̂₁ = .55, µ̂₁ = .3,
    Â₂ = .79, B̂₂ = .986, λ̂₂ = .3995, µ̂₂ = .9.

We have plotted the actual and estimated textures in Figure 7; they match quite well. We would like to mention here that the usual least squares method may not work well if the two frequency pairs are close and the error variance is large. But the sequential least squares method, similar to Prasad et al. [6], is able to distinguish the two frequencies (plots are not shown) and provides reasonably good estimates.

Figure 4: Synthesized Noisy Texture

Figure 5: Synthesized Actual Texture
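A synthesized texture of the same structure as the model above — two sinusoidal components with nearby frequency pairs plus the moving-average noise of (9) — can be generated as in the sketch below. All numeric values are illustrative stand-ins, not the paper's exact parameters.

```python
import numpy as np

# Synthesize a two-component texture with nearby frequency pairs plus the
# moving-average noise X(m,n) = e(m,n) + e(m-1,n) + e(m,n-1).  All numeric
# values are illustrative stand-ins.

def synth_texture(M, N, comps, sigma, seed=0):
    rng = np.random.default_rng(seed)
    m = np.arange(1, M + 1)[:, None]
    n = np.arange(1, N + 1)[None, :]
    y = np.zeros((M, N))
    for A, B, lam, mu in comps:
        y += A * np.cos(lam * m + mu * n) + B * np.sin(lam * m + mu * n)
    e = sigma * rng.standard_normal((M + 1, N + 1))     # i.i.d. innovations
    X = e[1:, 1:] + e[:-1, 1:] + e[1:, :-1]             # e(m,n)+e(m-1,n)+e(m,n-1)
    return y + X

# Two components whose frequency pairs are close to each other:
comps = [(5.0, 5.0, 0.5, 0.10), (2.0, 2.0, 0.4, 0.19)]  # illustrative values
texture = synth_texture(100, 100, comps, sigma=0.5)
```

Because the two frequency pairs are close, the two periodogram peaks of such a texture merge into what looks like a single peak, which is exactly the difficulty described above.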

Figure 6: Periodogram of the Synthesized Texture

Figure 7: Synthesized Actual and Estimated Textures

5 Conclusions

In this paper we have proposed an efficient and fast algorithm for estimating the unknown parameters of a 2-D sinusoidal model. Iterative methods for multi-dimensional optimization take long to converge to an optimal solution, and the periodogram estimates have larger bias and mean squared error. The proposed algorithm takes the periodogram estimates as the initial guesses of the frequencies and improves upon them in a finite number of iterative steps. We have done extensive simulations for different sample sizes and increasing error variances, though not reported here, and found that as the sample size becomes large the method performs increasingly well, and the performance is quite satisfactory even for fairly large error variances. We have derived the asymptotic distribution of the proposed estimators; it coincides with that of the least squares estimators. Since only a finite number of steps is required to reach the final estimators, the algorithm produces results very fast, and it can be used for online implementation purposes. The algorithm can be extended to colored textures as well; that work is in progress and will be reported later.

Appendix

In this Appendix we provide the proof of Theorem 1. The following two lemmas are required to prove Theorem 1.

Lemma 1. If

    Q_MN = (MN/2)(A - iB)[1 + O_p(M^{-δ₁}N^{-δ₂}) + O_p(M^{-1/2}N^{-1/2})],    (12)

    P^(λ)_MN = -(i M³N/24)(A - iB)[1 + O_p(M^{-δ₁}N^{-δ₂}) + O_p(M^{-1/2}N^{-1/2})](λ̃ - λ)
             + Σ_{m=1}^{M} Σ_{n=1}^{N} (m - M/2) X(m,n) e^{-i(λm + µn)},    (13)

and λ̂ is obtained from λ̃, where λ̃ - λ = O_p(M^{-1-δ₁}N^{-δ₂}), using the equation

    λ̂ = λ̃ + (12/M²) Im[P^(λ)_MN / Q_MN],

then

(i) λ̂ - λ = O_p(M^{-1-2δ₁}N^{-δ₂}), if δ₁ ≤ 1/4 and δ₂ > 1/4;

(ii) M^{3/2}N^{1/2}(λ̂ - λ) →_d N(0, 24σ²c/ρ), if δ₁ > 1/4 and δ₂ > 1/4,

where c and ρ are the same as defined in Theorem 1.

Proof. Using (12) and (13),

    λ̂ = λ̃ + (12/M²) Im[P^(λ)_MN / Q_MN]
      = λ̃ - [1 + O_p(M^{-δ₁}N^{-δ₂}) + O_p(M^{-1/2}N^{-1/2})](λ̃ - λ)
        + (24/(M³N)) Im[(1/(A - iB)) Σ_{m,n} (m - M/2) X(m,n) e^{-i(λm + µn)}]
      = λ - [O_p(M^{-δ₁}N^{-δ₂}) + O_p(M^{-1/2}N^{-1/2})](λ̃ - λ)
        + (24/(M³N)) Im[(1/(A - iB)) Σ_{m,n} (m - M/2) X(m,n) e^{-i(λm + µn)}].    (14)

If δ₁ ≤ 1/4 and δ₂ > 1/4, then

    [O_p(M^{-δ₁}N^{-δ₂}) + O_p(M^{-1/2}N^{-1/2})](λ̃ - λ) = O_p(M^{-1-2δ₁}N^{-2δ₂}) + O_p(M^{-3/2-δ₁}N^{-1/2-δ₂}) = O_p(M^{-1-2δ₁}N^{-δ₂}).

Note that the last equality follows because δ₁ ≤ 1/2; and since the last term in (14) is O_p(M^{-3/2}N^{-1/2}), which is also O_p(M^{-1-2δ₁}N^{-δ₂}) when δ₁ ≤ 1/4, we have

    λ̂ - λ = O_p(M^{-1-2δ₁}N^{-δ₂}).

If δ₁ > 1/4 and δ₂ > 1/4, then the second term in (14) can be ignored, and we get

    λ̂ - λ = (24/(M³N)) V,

where

    V = Im[(1/(A - iB)) Σ_{m,n} (m - M/2) X(m,n) e^{-i(λm + µn)}]
      = (B/(A² + B²)) Σ_{m,n} (m - M/2) X(m,n) cos(λm + µn) - (A/(A² + B²)) Σ_{m,n} (m - M/2) X(m,n) sin(λm + µn).    (15)

It can be proved that

    lim_{M,N→∞} Var[(24/(M^{3/2}N^{1/2})) V] = (24σ²/(A² + B²)) |Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} a(j,k) e^{-i(λj + µk)}|².    (16)

Now, using the central limit theorem for stochastic processes (see Fuller [3]), we have

    M^{3/2}N^{1/2}(λ̂ - λ) →_d N(0, 24σ²c/ρ).

Lemma 2. If Q_MN is the same as in (12),

    P^(µ)_MN = -(i MN³/24)(A - iB)[1 + O_p(M^{-δ₁}N^{-δ₂}) + O_p(M^{-1/2}N^{-1/2})](µ̃ - µ)
             + Σ_{m=1}^{M} Σ_{n=1}^{N} (n - N/2) X(m,n) e^{-i(λm + µn)},    (17)

and µ̂ is obtained from µ̃, where µ̃ - µ = O_p(M^{-δ₁}N^{-1-δ₂}), using the equation

    µ̂ = µ̃ + (12/N²) Im[P^(µ)_MN / Q_MN],

then

(i) µ̂ - µ = O_p(M^{-δ₁}N^{-1-2δ₂}), if δ₂ ≤ 1/4 and δ₁ > 1/4;

(ii) M^{1/2}N^{3/2}(µ̂ - µ) →_d N(0, 24σ²c/ρ), if δ₁ > 1/4 and δ₂ > 1/4,

where c and ρ are the same as before.

Proof. The proof is similar to that of Lemma 1. Along the same lines as before, if δ₁ > 1/4 and δ₂ > 1/4, it can be shown in this case also that

    µ̂ - µ = (24/(MN³)) W,

where

    W = Im[(1/(A - iB)) Σ_{m,n} (n - N/2) X(m,n) e^{-i(λm + µn)}]
      = (B/(A² + B²)) Σ_{m,n} (n - N/2) X(m,n) cos(λm + µn) - (A/(A² + B²)) Σ_{m,n} (n - N/2) X(m,n) sin(λm + µn).    (18)

Moreover, it can also be shown that

    lim_{M,N→∞} Var[(24/(M^{1/2}N^{3/2})) W] = (24σ²/(A² + B²)) |Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} a(j,k) e^{-i(λj + µk)}|²,    (19)

and

    lim_{M,N→∞} Cov[(24/(M^{3/2}N^{1/2})) V, (24/(M^{1/2}N^{3/2})) W] = 0.    (20)

Now, using Lemma 1, Lemma 2 and (20), Theorem 1 follows immediately, provided we show that Q_MN, P^(λ)_MN and P^(µ)_MN, as defined in Theorem 1, can be written as (12), (13) and (17) respectively. We will use the following results in the subsequent proofs.

Taylor's Theorem: Suppose f(t) is a real valued function on [a,b], n is a positive integer, f^(n-1)(t) is continuous on [a,b], and f^(n)(t) exists for every t ∈ (a,b). Let α, β be distinct points of [a,b]. Then we can write

    f(β) = Σ_{k=0}^{n-1} (f^(k)(α)/k!)(β - α)^k + (f^(n)(x)/n!)(β - α)^n,    (21)

where x is a point on the line joining α and β. We also note that

    Σ_{m=1}^{M} Σ_{n=1}^{N} e(m,n) = O_p(M^{1/2}N^{1/2}).

Similarly,

    Σ_{m,n} (m - M/2) e(m,n) = O_p(M^{3/2}N^{1/2}),    Σ_{m,n} (n - N/2) e(m,n) = O_p(M^{1/2}N^{3/2}),    Σ_{m,n} e²(m,n) = O_p(MN),

and in general,

    Σ_{m,n} m^k e(m,n) = O_p(M^{k+1/2}N^{1/2})    and    Σ_{m,n} n^k e(m,n) = O_p(M^{1/2}N^{k+1/2}).

Now we will prove (12), (13) and (17).

Proof of (12): From the definition of Q_MN in (5),

    Q_MN = Σ_{m,n} [A cos(λm + µn) + B sin(λm + µn) + X(m,n)] e^{-i(λ̃m + µ̃n)}
         = (A/2 + B/(2i)) R₁ + (A/2 - B/(2i)) R₂ + R₃ (say).    (22)

Here

    R₁ = Σ_{m,n} e^{i{(λ-λ̃)m + (µ-µ̃)n}}
       = [M + i(λ - λ̃) Σ_{m=1}^{M} m e^{i(λ̄-λ̃)m}][N + i(µ - µ̃) Σ_{n=1}^{N} n e^{i(µ̄-µ̃)n}]
       = [M + O_p(M^{-1-δ₁}N^{-δ₂}) O_p(M²)][N + O_p(M^{-δ₁}N^{-1-δ₂}) O_p(N²)]
       = [M + O_p(M^{1-δ₁}N^{-δ₂})][N + O_p(M^{-δ₁}N^{1-δ₂})]
       = MN[1 + O_p(M^{-δ₁}N^{-δ₂})],    (23)

where λ̄ is a point on the line joining λ̃ and λ, and µ̄ is a point on the line joining µ̃ and µ. Further,

    R₂ = Σ_{m,n} e^{-i{(λ+λ̃)m + (µ+µ̃)n}} = O_p(1),    (24)

and

    R₃ = Σ_{m,n} X(m,n) e^{-i(λ̃m + µ̃n)}.    (25)

Now we will evaluate the order of R₃. Choose L large enough that L·min{δ₁, δ₂} > 1/2. We expand R₃ using the Taylor series approximation (21) up to L-th order terms, similarly as in Bai et al. [1] or Nandi and Kundu [5]. Here λ̄ is a point on the line joining λ̃ and λ:

    R₃ = Σ_{n} e^{-iµ̃n} Σ_{m} X(m,n) e^{-iλ̃m}
       = Σ_{n} e^{-iµ̃n} Σ_{m} X(m,n) [Σ_{k=0}^{L-1} ((-i(λ̃-λ))^k/k!) m^k e^{-iλm} + ((-i(λ̃-λ))^L/L!) m^L e^{-iλ̄m}].

Note that

    |Σ_{m} X(m,n) m^L e^{-iλ̄m}| ≤ M^L Σ_{m} |X(m,n)|,

since |e^{-iλ̄m}| = 1. Hence we can write Σ_{m} X(m,n) m^L e^{-iλ̄m} = θ₁ M^L Σ_{m} |X(m,n)| with |θ₁| ≤ 1. Now, rearranging the terms, R₃ becomes

    R₃ = Σ_{m,n} X(m,n) e^{-i(λm + µ̃n)}
       + Σ_{k=1}^{L-1} ((-i(λ̃-λ))^k/k!) Σ_{m,n} m^k X(m,n) e^{-i(λm + µ̃n)}
       + θ₁ ((M(λ̃-λ))^L/L!) Σ_{n} e^{-iµ̃n} Σ_{m} |X(m,n)|

       = T₁ + T₂ + T₃ (say).    (26)

We will consider T₁, T₂ and T₃ one by one, and find their orders. First,

    T₁ = Σ_{m,n} X(m,n) e^{-i(λm + µ̃n)}.

Expanding e^{-iµ̃n} around µ using Taylor's theorem in the same way, we get

    T₁ = Σ_{m,n} X(m,n) e^{-i(λm + µn)}
       + Σ_{k=1}^{L-1} ((-i(µ̃-µ))^k/k!) Σ_{m,n} n^k X(m,n) e^{-i(λm + µn)}
       + θ₂ ((N(µ̃-µ))^L/L!) Σ_{m,n} |X(m,n)|
       = O_p(M^{1/2}N^{1/2}) + Σ_{k=1}^{L-1} O_p(M^{-δ₁k}N^{-(1+δ₂)k}) O_p(M^{1/2}N^{k+1/2}) + O_p(M^{1-Lδ₁}N^{1-Lδ₂})
       = O_p(M^{1/2}N^{1/2}).    (27)

Expanding e^{-iµ̃n} in T₂ using Taylor's theorem in the same way, we get

    T₂ = Σ_{k=1}^{L-1} ((-i(λ̃-λ))^k/k!) [Σ_{m,n} m^k X(m,n) e^{-i(λm + µn)}
       + Σ_{s=1}^{L-1} ((-i(µ̃-µ))^s/s!) Σ_{m,n} m^k n^s X(m,n) e^{-i(λm + µn)}
       + θ₃ ((N(µ̃-µ))^L/L!) Σ_{m,n} m^k |X(m,n)|]
       = Σ_{k=1}^{L-1} O_p(M^{-(1+δ₁)k}N^{-δ₂k}) O_p(M^{k+1/2}N^{1/2})
       + Σ_{k=1}^{L-1} Σ_{s=1}^{L-1} O_p(M^{-(1+δ₁)k-δ₁s}N^{-δ₂k-(1+δ₂)s}) O_p(M^{k+1/2}N^{s+1/2})

       + Σ_{k=1}^{L-1} O_p(M^{-(1+δ₁)k}N^{-δ₂k}) O_p(M^{k+1-Lδ₁}N^{1-Lδ₂})
       = O_p(M^{1/2-δ₁}N^{1/2-δ₂}),    (28)

since every term above is of that order or smaller. For T₃, expanding e^{-iµ̃n} in the same way, and noting that (M|λ̃-λ|)^L = O_p(M^{-Lδ₁}N^{-Lδ₂}) while Σ_{m,n} |X(m,n)| = O_p(MN), we get

    T₃ = O_p(M^{1-Lδ₁}N^{1-Lδ₂}).    (29)

Hence, from (26), using (27), (28) and (29), we have

    R₃ = T₁ + T₂ + T₃ = O_p(M^{1/2}N^{1/2}) + O_p(M^{1/2-δ₁}N^{1/2-δ₂}) + O_p(M^{1-Lδ₁}N^{1-Lδ₂}) = O_p(M^{1/2}N^{1/2}),    (30)

since Lδᵢ > 1/2 for i = 1, 2. Therefore, from (22), using (23), (24) and (30), we have

    Q_MN = (A/2 + B/(2i)) MN[1 + O_p(M^{-δ₁}N^{-δ₂})]

         + (A/2 - B/(2i)) O_p(1) + O_p(M^{1/2}N^{1/2})
         = (MN/2)(A - iB)[1 + O_p(M^{-δ₁}N^{-δ₂}) + O_p(M^{-1/2}N^{-1/2})].    (31)

The expressions for P^(λ)_MN and P^(µ)_MN, as given in (13) and (17) respectively, can be obtained similarly.

References

[1] Bai, Z. D., Rao, C. R., Chow, M. and Kundu, D. (2003), An efficient algorithm for estimating the parameters of superimposed exponential signals, Journal of Statistical Planning and Inference, vol. 110.

[2] Francos, J. M., Meiri, A. Z. and Porat, B. (1993), A unified texture model based on a 2-D Wold-like decomposition, IEEE Transactions on Signal Processing, vol. 41.

[3] Fuller, W. A. (1976), Introduction to Statistical Time Series, John Wiley and Sons, New York.

[4] Kundu, D. and Nandi, S. (2003), Determination of discrete spectrum in a random field, Statistica Neerlandica, vol. 57.

[5] Nandi, S. and Kundu, D. (2006), Fast and efficient algorithm for estimating the frequencies of sinusoidal signals, Sankhya, vol. 68.

[6] Prasad, A., Kundu, D. and Mitra, A. (2008), Sequential estimation of the parameters of sum of sinusoidal model, Journal of Statistical Planning and Inference, vol. 138.

[7] Press, W. H., Teukolsky, S. A., Vetterling, W. T. and Flannery, B. P. (1992), Numerical Recipes in FORTRAN: The Art of Scientific Computing, 2nd edition, Cambridge University Press, Cambridge.

[8] Rao, C. R., Zhao, L. C. and Zhou, B. (1994), Maximum likelihood estimation of the 2-D superimposed exponential, IEEE Transactions on Signal Processing, vol. 42.

[9] Rice, J. A. and Rosenblatt, M. (1988), On frequency estimation, Biometrika, vol. 75.

[10] Zhang, H. and Mandrekar, V. (2001), Estimation of hidden frequencies for 2D stationary processes, Journal of Time Series Analysis, vol. 22.

Table 1: First Component: Algorithm Estimates.

                      A₁ = .5     B₁ = .5     λ₁ = .      µ₁ = .      Time
M = N = 50    AE
              MSE     (.346E-)   (.35E-)    (.95E-6)   (.896E-6)   :53.8
              ASYV    (.334E-)   (.334E-)   (.889E-6)  (.889E-6)
M = N = 75    AE
              MSE     (.5)       (.5)       (.83E-6)   (.73E-6)    7:3.96
              ASYV    (.5)       (.5)       (.76E-6)   (.76E-6)
M = N = 100   AE
              MSE     (.86E-3)   (.8E-3)    (.56E-7)   (.559E-7)   9:37.8
              ASYV    (.834E-3)  (.834E-3)  (.556E-7)  (.556E-7)

Table 2: Second Component: Algorithm Estimates.

                      A₂ = .     B₂ = .     λ₂ = .      µ₂ = .      Time
M = N = 50    AE
              MSE     (.7E-)    (.75E-)    (.43E-5)   (.48E-5)    :53.8
              ASYV    (.76E-)   (.76E-)    (.43E-6)   (.43E-6)
M = N = 75    AE
              MSE     (.33)     (.33)      (.86E-6)   (.9E-6)     7:3.96
              ASYV    (.3)      (.3)       (.849E-6)  (.849E-6)
M = N = 100   AE
              MSE     (.89E-)   (.86E-)    (.7E-6)    (.59E-6)    9:37.8
              ASYV    (.79E-)   (.79E-)    (.69E-6)   (.69E-6)

Table 3: First Component: Least Squares Estimates.

                      A₁ = .5    B₁ = .5    λ₁ = .      µ₁ = .      Time
M = N = 50    AE
              MSE     (.349E-)  (.356E-)   (.9E-6)    (.97E-6)    5:53.8
              ASYV    (.334E-)  (.334E-)   (.889E-6)  (.889E-6)
M = N = 75    AE
              MSE     (.5)      (.5)       (.88E-6)   (.78E-6)    7:3.96
              ASYV    (.5)      (.5)       (.76E-6)   (.76E-6)
M = N = 100   AE
              MSE     (.9)      (.9)       (.68E-7)   (.63E-7)    45:3.
              ASYV    (.8)      (.8)       (.556E-7)  (.556E-7)

Table 4: Second Component: Least Squares Estimates.

                      A₂ = .     B₂ = .     λ₂ = .      µ₂ = .      Time
M = N = 50    AE
              MSE     (.695E-)  (.744E-)   (.433-5)   (.47E-5)    5:53.8
              ASYV    (.76E-)   (.76E-)    (.43E-6)   (.43E-6)
M = N = 75    AE
              MSE     (.33)     (.36)      (.84E-6)   (.98E-6)    7:3.96
              ASYV    (.3)      (.3)       (.849E-6)  (.849E-6)
M = N = 100   AE
              MSE     (.9)      (.9)       (.73E-6)   (.63E-6)    45:3.
              ASYV    (.7)      (.7)       (.69E-7)   (.69E-7)

Note: The header row of each table gives the true parameter values. In each block corresponding to a sample size, the first row gives the average estimates (AE); the corresponding MSEs and the asymptotic variances (ASYV) are reported below within brackets.


More information

Estimating Periodic Signals

Estimating Periodic Signals Department of Mathematics & Statistics Indian Institute of Technology Kanpur Most of this talk has been taken from the book Statistical Signal Processing, by D. Kundu and S. Nandi. August 26, 2012 Outline

More information

ASYMPTOTIC PROPERTIES OF THE LEAST SQUARES ESTIMATORS OF MULTIDIMENSIONAL EXPONENTIAL SIGNALS

ASYMPTOTIC PROPERTIES OF THE LEAST SQUARES ESTIMATORS OF MULTIDIMENSIONAL EXPONENTIAL SIGNALS ASYMPTOTIC PROPERTIES OF THE LEAST SQUARES ESTIMATORS OF MULTIDIMENSIONAL EXPONENTIAL SIGNALS Debasis Kundu Department of Mathematics Indian Institute of Technology Kanpur Kanpur, Pin 20806 India Abstract:

More information

On Least Absolute Deviation Estimators For One Dimensional Chirp Model

On Least Absolute Deviation Estimators For One Dimensional Chirp Model On Least Absolute Deviation Estimators For One Dimensional Chirp Model Ananya Lahiri & Debasis Kundu, & Amit Mitra Abstract It is well known that the least absolute deviation (LAD) estimators are more

More information

LIST OF PUBLICATIONS

LIST OF PUBLICATIONS LIST OF PUBLICATIONS Papers in referred journals [1] Estimating the ratio of smaller and larger of two uniform scale parameters, Amit Mitra, Debasis Kundu, I.D. Dhariyal and N.Misra, Journal of Statistical

More information

Analysis of Middle Censored Data with Exponential Lifetime Distributions

Analysis of Middle Censored Data with Exponential Lifetime Distributions Analysis of Middle Censored Data with Exponential Lifetime Distributions Srikanth K. Iyer S. Rao Jammalamadaka Debasis Kundu Abstract Recently Jammalamadaka and Mangalam (2003) introduced a general censoring

More information

Asymptotic properties of the least squares estimators of a two dimensional model

Asymptotic properties of the least squares estimators of a two dimensional model Metrika (998) 48: 83±97 > Springer-Verlag 998 Asymptotic properties of the least squares estimators of a two dimensional model Debasis Kundu,*, Rameshwar D. Gupta,** Department of Mathematics, Indian Institute

More information

Determination of Discrete Spectrum in a Random Field

Determination of Discrete Spectrum in a Random Field 58 Statistica Neerlandica (003) Vol. 57, nr., pp. 58 83 Determination of Discrete Spectrum in a Random Field Debasis Kundu Department of Mathematics, Indian Institute of Technology Kanpur, Kanpur - 0806,

More information

Bayesian Analysis of Simple Step-stress Model under Weibull Lifetimes

Bayesian Analysis of Simple Step-stress Model under Weibull Lifetimes Bayesian Analysis of Simple Step-stress Model under Weibull Lifetimes A. Ganguly 1, D. Kundu 2,3, S. Mitra 2 Abstract Step-stress model is becoming quite popular in recent times for analyzing lifetime

More information

Weighted Marshall-Olkin Bivariate Exponential Distribution

Weighted Marshall-Olkin Bivariate Exponential Distribution Weighted Marshall-Olkin Bivariate Exponential Distribution Ahad Jamalizadeh & Debasis Kundu Abstract Recently Gupta and Kundu [9] introduced a new class of weighted exponential distributions, and it can

More information

Bayesian Inference and Life Testing Plan for the Weibull Distribution in Presence of Progressive Censoring

Bayesian Inference and Life Testing Plan for the Weibull Distribution in Presence of Progressive Censoring Bayesian Inference and Life Testing Plan for the Weibull Distribution in Presence of Progressive Censoring Debasis KUNDU Department of Mathematics and Statistics Indian Institute of Technology Kanpur Pin

More information

Exact Inference for the Two-Parameter Exponential Distribution Under Type-II Hybrid Censoring

Exact Inference for the Two-Parameter Exponential Distribution Under Type-II Hybrid Censoring Exact Inference for the Two-Parameter Exponential Distribution Under Type-II Hybrid Censoring A. Ganguly, S. Mitra, D. Samanta, D. Kundu,2 Abstract Epstein [9] introduced the Type-I hybrid censoring scheme

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

THE PROBLEM of estimating the parameters of a twodimensional

THE PROBLEM of estimating the parameters of a twodimensional IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 46, NO 8, AUGUST 1998 2157 Parameter Estimation of Two-Dimensional Moving Average Rom Fields Joseph M Francos, Senior Member, IEEE, Benjamin Friedler, Fellow,

More information

Estimation for Mean and Standard Deviation of Normal Distribution under Type II Censoring

Estimation for Mean and Standard Deviation of Normal Distribution under Type II Censoring Communications for Statistical Applications and Methods 2014, Vol. 21, No. 6, 529 538 DOI: http://dx.doi.org/10.5351/csam.2014.21.6.529 Print ISSN 2287-7843 / Online ISSN 2383-4757 Estimation for Mean

More information

Research Article Improved Estimators of the Mean of a Normal Distribution with a Known Coefficient of Variation

Research Article Improved Estimators of the Mean of a Normal Distribution with a Known Coefficient of Variation Probability and Statistics Volume 2012, Article ID 807045, 5 pages doi:10.1155/2012/807045 Research Article Improved Estimators of the Mean of a Normal Distribution with a Known Coefficient of Variation

More information

Discriminating Between the Bivariate Generalized Exponential and Bivariate Weibull Distributions

Discriminating Between the Bivariate Generalized Exponential and Bivariate Weibull Distributions Discriminating Between the Bivariate Generalized Exponential and Bivariate Weibull Distributions Arabin Kumar Dey & Debasis Kundu Abstract Recently Kundu and Gupta ( Bivariate generalized exponential distribution,

More information

Bayes Estimation and Prediction of the Two-Parameter Gamma Distribution

Bayes Estimation and Prediction of the Two-Parameter Gamma Distribution Bayes Estimation and Prediction of the Two-Parameter Gamma Distribution Biswabrata Pradhan & Debasis Kundu Abstract In this article the Bayes estimates of two-parameter gamma distribution is considered.

More information

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.

More information

Time series models in the Frequency domain. The power spectrum, Spectral analysis

Time series models in the Frequency domain. The power spectrum, Spectral analysis ime series models in the Frequency domain he power spectrum, Spectral analysis Relationship between the periodogram and the autocorrelations = + = ( ) ( ˆ α ˆ ) β I Yt cos t + Yt sin t t= t= ( ( ) ) cosλ

More information

Frequency estimation by DFT interpolation: A comparison of methods

Frequency estimation by DFT interpolation: A comparison of methods Frequency estimation by DFT interpolation: A comparison of methods Bernd Bischl, Uwe Ligges, Claus Weihs March 5, 009 Abstract This article comments on a frequency estimator which was proposed by [6] and

More information

Detection & Estimation Lecture 1

Detection & Estimation Lecture 1 Detection & Estimation Lecture 1 Intro, MVUE, CRLB Xiliang Luo General Course Information Textbooks & References Fundamentals of Statistical Signal Processing: Estimation Theory/Detection Theory, Steven

More information

A New Two Sample Type-II Progressive Censoring Scheme

A New Two Sample Type-II Progressive Censoring Scheme A New Two Sample Type-II Progressive Censoring Scheme arxiv:609.05805v [stat.me] 9 Sep 206 Shuvashree Mondal, Debasis Kundu Abstract Progressive censoring scheme has received considerable attention in

More information

Parametric Techniques Lecture 3

Parametric Techniques Lecture 3 Parametric Techniques Lecture 3 Jason Corso SUNY at Buffalo 22 January 2009 J. Corso (SUNY at Buffalo) Parametric Techniques Lecture 3 22 January 2009 1 / 39 Introduction In Lecture 2, we learned how to

More information

F & B Approaches to a simple model

F & B Approaches to a simple model A6523 Signal Modeling, Statistical Inference and Data Mining in Astrophysics Spring 215 http://www.astro.cornell.edu/~cordes/a6523 Lecture 11 Applications: Model comparison Challenges in large-scale surveys

More information

On Sarhan-Balakrishnan Bivariate Distribution

On Sarhan-Balakrishnan Bivariate Distribution J. Stat. Appl. Pro. 1, No. 3, 163-17 (212) 163 Journal of Statistics Applications & Probability An International Journal c 212 NSP On Sarhan-Balakrishnan Bivariate Distribution D. Kundu 1, A. Sarhan 2

More information

On Moving Average Parameter Estimation

On Moving Average Parameter Estimation On Moving Average Parameter Estimation Niclas Sandgren and Petre Stoica Contact information: niclas.sandgren@it.uu.se, tel: +46 8 473392 Abstract Estimation of the autoregressive moving average (ARMA)

More information

Hybrid Censoring; An Introduction 2

Hybrid Censoring; An Introduction 2 Hybrid Censoring; An Introduction 2 Debasis Kundu Department of Mathematics & Statistics Indian Institute of Technology Kanpur 23-rd November, 2010 2 This is a joint work with N. Balakrishnan Debasis Kundu

More information

Condensed Table of Contents for Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control by J. C.

Condensed Table of Contents for Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control by J. C. Condensed Table of Contents for Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control by J. C. Spall John Wiley and Sons, Inc., 2003 Preface... xiii 1. Stochastic Search

More information

Parametric Techniques

Parametric Techniques Parametric Techniques Jason J. Corso SUNY at Buffalo J. Corso (SUNY at Buffalo) Parametric Techniques 1 / 39 Introduction When covering Bayesian Decision Theory, we assumed the full probabilistic structure

More information

Progress In Electromagnetics Research, PIER 102, 31 48, 2010

Progress In Electromagnetics Research, PIER 102, 31 48, 2010 Progress In Electromagnetics Research, PIER 102, 31 48, 2010 ACCURATE PARAMETER ESTIMATION FOR WAVE EQUATION F. K. W. Chan, H. C. So, S.-C. Chan, W. H. Lau and C. F. Chan Department of Electronic Engineering

More information

Covariance function estimation in Gaussian process regression

Covariance function estimation in Gaussian process regression Covariance function estimation in Gaussian process regression François Bachoc Department of Statistics and Operations Research, University of Vienna WU Research Seminar - May 2015 François Bachoc Gaussian

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2009 Prof. Gesine Reinert Our standard situation is that we have data x = x 1, x 2,..., x n, which we view as realisations of random

More information

1 Introduction. 2 Binary shift map

1 Introduction. 2 Binary shift map Introduction This notes are meant to provide a conceptual background for the numerical construction of random variables. They can be regarded as a mathematical complement to the article [] (please follow

More information

On the Hazard Function of Birnbaum Saunders Distribution and Associated Inference

On the Hazard Function of Birnbaum Saunders Distribution and Associated Inference On the Hazard Function of Birnbaum Saunders Distribution and Associated Inference Debasis Kundu, Nandini Kannan & N. Balakrishnan Abstract In this paper, we discuss the shape of the hazard function of

More information

A Convenient Way of Generating Gamma Random Variables Using Generalized Exponential Distribution

A Convenient Way of Generating Gamma Random Variables Using Generalized Exponential Distribution A Convenient Way of Generating Gamma Random Variables Using Generalized Exponential Distribution Debasis Kundu & Rameshwar D. Gupta 2 Abstract In this paper we propose a very convenient way to generate

More information

A note on multiple imputation for general purpose estimation

A note on multiple imputation for general purpose estimation A note on multiple imputation for general purpose estimation Shu Yang Jae Kwang Kim SSC meeting June 16, 2015 Shu Yang, Jae Kwang Kim Multiple Imputation June 16, 2015 1 / 32 Introduction Basic Setup Assume

More information

IN this paper, we consider the estimation of the frequency

IN this paper, we consider the estimation of the frequency Iterative Frequency Estimation y Interpolation on Fourier Coefficients Elias Aoutanios, MIEEE, Bernard Mulgrew, MIEEE Astract The estimation of the frequency of a complex exponential is a prolem that is

More information

Econ 583 Final Exam Fall 2008

Econ 583 Final Exam Fall 2008 Econ 583 Final Exam Fall 2008 Eric Zivot December 11, 2008 Exam is due at 9:00 am in my office on Friday, December 12. 1 Maximum Likelihood Estimation and Asymptotic Theory Let X 1,...,X n be iid random

More information

Approximation of Average Run Length of Moving Sum Algorithms Using Multivariate Probabilities

Approximation of Average Run Length of Moving Sum Algorithms Using Multivariate Probabilities Syracuse University SURFACE Electrical Engineering and Computer Science College of Engineering and Computer Science 3-1-2010 Approximation of Average Run Length of Moving Sum Algorithms Using Multivariate

More information

A LIMIT THEOREM FOR THE IMBALANCE OF A ROTATING WHEEL

A LIMIT THEOREM FOR THE IMBALANCE OF A ROTATING WHEEL Sankhyā : The Indian Journal of Statistics 1994, Volume 56, Series B, Pt. 3, pp. 46-467 A LIMIT THEOREM FOR THE IMBALANCE OF A ROTATING WHEEL By K.G. RAMAMURTHY and V. RAJENDRA PRASAD Indian Statistical

More information

arxiv: v2 [math.st] 20 Jun 2014

arxiv: v2 [math.st] 20 Jun 2014 A solution in small area estimation problems Andrius Čiginas and Tomas Rudys Vilnius University Institute of Mathematics and Informatics, LT-08663 Vilnius, Lithuania arxiv:1306.2814v2 [math.st] 20 Jun

More information

Non-linear least-squares and chemical kinetics. An improved method to analyse monomer-excimer decay data

Non-linear least-squares and chemical kinetics. An improved method to analyse monomer-excimer decay data Journal of Mathematical Chemistry 21 (1997) 131 139 131 Non-linear least-squares and chemical kinetics. An improved method to analyse monomer-excimer decay data J.P.S. Farinha a, J.M.G. Martinho a and

More information

Likelihood-Based Methods

Likelihood-Based Methods Likelihood-Based Methods Handbook of Spatial Statistics, Chapter 4 Susheela Singh September 22, 2016 OVERVIEW INTRODUCTION MAXIMUM LIKELIHOOD ESTIMATION (ML) RESTRICTED MAXIMUM LIKELIHOOD ESTIMATION (REML)

More information

arxiv: v4 [stat.co] 18 Mar 2018

arxiv: v4 [stat.co] 18 Mar 2018 An EM algorithm for absolute continuous bivariate Pareto distribution Arabin Kumar Dey, Biplab Paul and Debasis Kundu arxiv:1608.02199v4 [stat.co] 18 Mar 2018 Abstract: Recently [3] used EM algorithm to

More information

Gradient-based Adaptive Stochastic Search

Gradient-based Adaptive Stochastic Search 1 / 41 Gradient-based Adaptive Stochastic Search Enlu Zhou H. Milton Stewart School of Industrial and Systems Engineering Georgia Institute of Technology November 5, 2014 Outline 2 / 41 1 Introduction

More information

An Extension of the Generalized Exponential Distribution

An Extension of the Generalized Exponential Distribution An Extension of the Generalized Exponential Distribution Debasis Kundu and Rameshwar D. Gupta Abstract The two-parameter generalized exponential distribution has been used recently quite extensively to

More information

An Empirical Algorithm for Relative Value Iteration for Average-cost MDPs

An Empirical Algorithm for Relative Value Iteration for Average-cost MDPs 2015 IEEE 54th Annual Conference on Decision and Control CDC December 15-18, 2015. Osaka, Japan An Empirical Algorithm for Relative Value Iteration for Average-cost MDPs Abhishek Gupta Rahul Jain Peter

More information

EIE6207: Estimation Theory

EIE6207: Estimation Theory EIE6207: Estimation Theory Man-Wai MAK Dept. of Electronic and Information Engineering, The Hong Kong Polytechnic University enmwmak@polyu.edu.hk http://www.eie.polyu.edu.hk/ mwmak References: Steven M.

More information

Detection & Estimation Lecture 1

Detection & Estimation Lecture 1 Detection & Estimation Lecture 1 Intro, MVUE, CRLB Xiliang Luo General Course Information Textbooks & References Fundamentals of Statistical Signal Processing: Estimation Theory/Detection Theory, Steven

More information

Tutorial lecture 2: System identification

Tutorial lecture 2: System identification Tutorial lecture 2: System identification Data driven modeling: Find a good model from noisy data. Model class: Set of all a priori feasible candidate systems Identification procedure: Attach a system

More information

Terminology Suppose we have N observations {x(n)} N 1. Estimators as Random Variables. {x(n)} N 1

Terminology Suppose we have N observations {x(n)} N 1. Estimators as Random Variables. {x(n)} N 1 Estimation Theory Overview Properties Bias, Variance, and Mean Square Error Cramér-Rao lower bound Maximum likelihood Consistency Confidence intervals Properties of the mean estimator Properties of the

More information

Analysis of incomplete data in presence of competing risks

Analysis of incomplete data in presence of competing risks Journal of Statistical Planning and Inference 87 (2000) 221 239 www.elsevier.com/locate/jspi Analysis of incomplete data in presence of competing risks Debasis Kundu a;, Sankarshan Basu b a Department

More information

Analysis of Gamma and Weibull Lifetime Data under a General Censoring Scheme and in the presence of Covariates

Analysis of Gamma and Weibull Lifetime Data under a General Censoring Scheme and in the presence of Covariates Communications in Statistics - Theory and Methods ISSN: 0361-0926 (Print) 1532-415X (Online) Journal homepage: http://www.tandfonline.com/loi/lsta20 Analysis of Gamma and Weibull Lifetime Data under a

More information

ECE531 Lecture 10b: Maximum Likelihood Estimation

ECE531 Lecture 10b: Maximum Likelihood Estimation ECE531 Lecture 10b: Maximum Likelihood Estimation D. Richard Brown III Worcester Polytechnic Institute 05-Apr-2011 Worcester Polytechnic Institute D. Richard Brown III 05-Apr-2011 1 / 23 Introduction So

More information

Theory of Statistics.

Theory of Statistics. Theory of Statistics. Homework V February 5, 00. MT 8.7.c When σ is known, ˆµ = X is an unbiased estimator for µ. If you can show that its variance attains the Cramer-Rao lower bound, then no other unbiased

More information

COMPUTER ALGEBRA DERIVATION OF THE BIAS OF LINEAR ESTIMATORS OF AUTOREGRESSIVE MODELS

COMPUTER ALGEBRA DERIVATION OF THE BIAS OF LINEAR ESTIMATORS OF AUTOREGRESSIVE MODELS COMPUTER ALGEBRA DERIVATION OF THE BIAS OF LINEAR ESTIMATORS OF AUTOREGRESSIVE MODELS Y. ZHANG and A.I. MCLEOD Acadia University and The University of Western Ontario May 26, 2005 1 Abstract. A symbolic

More information

On Bivariate Birnbaum-Saunders Distribution

On Bivariate Birnbaum-Saunders Distribution On Bivariate Birnbaum-Saunders Distribution Debasis Kundu 1 & Ramesh C. Gupta Abstract Univariate Birnbaum-Saunders distribution has been used quite effectively to analyze positively skewed lifetime data.

More information

Modeling Multiscale Differential Pixel Statistics

Modeling Multiscale Differential Pixel Statistics Modeling Multiscale Differential Pixel Statistics David Odom a and Peyman Milanfar a a Electrical Engineering Department, University of California, Santa Cruz CA. 95064 USA ABSTRACT The statistics of natural

More information

A Class of Weighted Weibull Distributions and Its Properties. Mervat Mahdy Ramadan [a],*

A Class of Weighted Weibull Distributions and Its Properties. Mervat Mahdy Ramadan [a],* Studies in Mathematical Sciences Vol. 6, No. 1, 2013, pp. [35 45] DOI: 10.3968/j.sms.1923845220130601.1065 ISSN 1923-8444 [Print] ISSN 1923-8452 [Online] www.cscanada.net www.cscanada.org A Class of Weighted

More information

A comparison of inverse transform and composition methods of data simulation from the Lindley distribution

A comparison of inverse transform and composition methods of data simulation from the Lindley distribution Communications for Statistical Applications and Methods 2016, Vol. 23, No. 6, 517 529 http://dx.doi.org/10.5351/csam.2016.23.6.517 Print ISSN 2287-7843 / Online ISSN 2383-4757 A comparison of inverse transform

More information

Some Theoretical Properties and Parameter Estimation for the Two-Sided Length Biased Inverse Gaussian Distribution

Some Theoretical Properties and Parameter Estimation for the Two-Sided Length Biased Inverse Gaussian Distribution Journal of Probability and Statistical Science 14(), 11-4, Aug 016 Some Theoretical Properties and Parameter Estimation for the Two-Sided Length Biased Inverse Gaussian Distribution Teerawat Simmachan

More information

Estimation in an Exponentiated Half Logistic Distribution under Progressively Type-II Censoring

Estimation in an Exponentiated Half Logistic Distribution under Progressively Type-II Censoring Communications of the Korean Statistical Society 2011, Vol. 18, No. 5, 657 666 DOI: http://dx.doi.org/10.5351/ckss.2011.18.5.657 Estimation in an Exponentiated Half Logistic Distribution under Progressively

More information

Hacettepe Journal of Mathematics and Statistics Volume 45 (5) (2016), Abstract

Hacettepe Journal of Mathematics and Statistics Volume 45 (5) (2016), Abstract Hacettepe Journal of Mathematics and Statistics Volume 45 (5) (2016), 1605 1620 Comparing of some estimation methods for parameters of the Marshall-Olkin generalized exponential distribution under progressive

More information

Statistical methods and data analysis

Statistical methods and data analysis Statistical methods and data analysis Teacher Stefano Siboni Aim The aim of the course is to illustrate the basic mathematical tools for the analysis and modelling of experimental data, particularly concerning

More information

SGN Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection

SGN Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection SG 21006 Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection Ioan Tabus Department of Signal Processing Tampere University of Technology Finland 1 / 28

More information

ELEG 5633 Detection and Estimation Minimum Variance Unbiased Estimators (MVUE)

ELEG 5633 Detection and Estimation Minimum Variance Unbiased Estimators (MVUE) 1 ELEG 5633 Detection and Estimation Minimum Variance Unbiased Estimators (MVUE) Jingxian Wu Department of Electrical Engineering University of Arkansas Outline Minimum Variance Unbiased Estimators (MVUE)

More information

Economics 520. Lecture Note 19: Hypothesis Testing via the Neyman-Pearson Lemma CB 8.1,

Economics 520. Lecture Note 19: Hypothesis Testing via the Neyman-Pearson Lemma CB 8.1, Economics 520 Lecture Note 9: Hypothesis Testing via the Neyman-Pearson Lemma CB 8., 8.3.-8.3.3 Uniformly Most Powerful Tests and the Neyman-Pearson Lemma Let s return to the hypothesis testing problem

More information

Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf

Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf Reading: Ch. 5 in Kay-II. (Part of) Ch. III.B in Poor. EE 527, Detection and Estimation Theory, # 5c Detecting Parametric Signals in Noise

More information

Use of the Autocorrelation Function for Frequency Stability Analysis

Use of the Autocorrelation Function for Frequency Stability Analysis Use of the Autocorrelation Function for Frequency Stability Analysis W.J. Riley, Hamilton Technical Services Introduction This paper describes the use of the autocorrelation function (ACF) as a complement

More information

Journal of Statistical Research 2007, Vol. 41, No. 1, pp Bangladesh

Journal of Statistical Research 2007, Vol. 41, No. 1, pp Bangladesh Journal of Statistical Research 007, Vol. 4, No., pp. 5 Bangladesh ISSN 056-4 X ESTIMATION OF AUTOREGRESSIVE COEFFICIENT IN AN ARMA(, ) MODEL WITH VAGUE INFORMATION ON THE MA COMPONENT M. Ould Haye School

More information

Improving Convergence of Iterative Feedback Tuning using Optimal External Perturbations

Improving Convergence of Iterative Feedback Tuning using Optimal External Perturbations Proceedings of the 47th IEEE Conference on Decision and Control Cancun, Mexico, Dec. 9-, 2008 Improving Convergence of Iterative Feedback Tuning using Optimal External Perturbations Jakob Kjøbsted Huusom,

More information

A-efficient balanced treatment incomplete block designs

A-efficient balanced treatment incomplete block designs isid/ms/2002/25 October 4, 2002 http://www.isid.ac.in/ statmath/eprints A-efficient balanced treatment incomplete block designs Ashish Das Aloke Dey Sanpei Kageyama and Kishore Sinha Indian Statistical

More information

Department of Mathematics, Indian Institute of Technology, Kanpur

Department of Mathematics, Indian Institute of Technology, Kanpur This article was downloaded by: [SENACYT Consortium - trial account] On: 24 November 2009 Access details: Access Details: [subscription number 910290633] Publisher Taylor & Francis Informa Ltd Registered

More information

Topic 4 Unit Roots. Gerald P. Dwyer. February Clemson University

Topic 4 Unit Roots. Gerald P. Dwyer. February Clemson University Topic 4 Unit Roots Gerald P. Dwyer Clemson University February 2016 Outline 1 Unit Roots Introduction Trend and Difference Stationary Autocorrelations of Series That Have Deterministic or Stochastic Trends

More information

Degrees-of-freedom estimation in the Student-t noise model.

Degrees-of-freedom estimation in the Student-t noise model. LIGO-T0497 Degrees-of-freedom estimation in the Student-t noise model. Christian Röver September, 0 Introduction The Student-t noise model was introduced in [] as a robust alternative to the commonly used

More information

Lecture Notes. Introduction

Lecture Notes. Introduction 5/3/016 Lecture Notes R. Rekaya June 1-10, 016 Introduction Variance components play major role in animal breeding and genetic (estimation of BVs) It has been an active area of research since early 1950

More information

A Gaussian state-space model for wind fields in the North-East Atlantic

A Gaussian state-space model for wind fields in the North-East Atlantic A Gaussian state-space model for wind fields in the North-East Atlantic Julie BESSAC - Université de Rennes 1 with Pierre AILLIOT and Valï 1 rie MONBET 2 Juillet 2013 Plan Motivations 1 Motivations 2 Context

More information

Measurement Error and Linear Regression of Astronomical Data. Brandon Kelly Penn State Summer School in Astrostatistics, June 2007

Measurement Error and Linear Regression of Astronomical Data. Brandon Kelly Penn State Summer School in Astrostatistics, June 2007 Measurement Error and Linear Regression of Astronomical Data Brandon Kelly Penn State Summer School in Astrostatistics, June 2007 Classical Regression Model Collect n data points, denote i th pair as (η

More information

Wright-Fisher Models, Approximations, and Minimum Increments of Evolution

Wright-Fisher Models, Approximations, and Minimum Increments of Evolution Wright-Fisher Models, Approximations, and Minimum Increments of Evolution William H. Press The University of Texas at Austin January 10, 2011 1 Introduction Wright-Fisher models [1] are idealized models

More information

Analysis Methods for Supersaturated Design: Some Comparisons

Analysis Methods for Supersaturated Design: Some Comparisons Journal of Data Science 1(2003), 249-260 Analysis Methods for Supersaturated Design: Some Comparisons Runze Li 1 and Dennis K. J. Lin 2 The Pennsylvania State University Abstract: Supersaturated designs

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2008 Prof. Gesine Reinert 1 Data x = x 1, x 2,..., x n, realisations of random variables X 1, X 2,..., X n with distribution (model)

More information

Geometric Skew-Normal Distribution

Geometric Skew-Normal Distribution Debasis Kundu Arun Kumar Chair Professor Department of Mathematics & Statistics Indian Institute of Technology Kanpur Part of this work is going to appear in Sankhya, Ser. B. April 11, 2014 Outline 1 Motivation

More information

Bootstrap inference for the finite population total under complex sampling designs

Bootstrap inference for the finite population total under complex sampling designs Bootstrap inference for the finite population total under complex sampling designs Zhonglei Wang (Joint work with Dr. Jae Kwang Kim) Center for Survey Statistics and Methodology Iowa State University Jan.

More information

Brief Review on Estimation Theory

Brief Review on Estimation Theory Brief Review on Estimation Theory K. Abed-Meraim ENST PARIS, Signal and Image Processing Dept. abed@tsi.enst.fr This presentation is essentially based on the course BASTA by E. Moulines Brief review on

More information

Time-varying failure rate for system reliability analysis in large-scale railway risk assessment simulation

Time-varying failure rate for system reliability analysis in large-scale railway risk assessment simulation Time-varying failure rate for system reliability analysis in large-scale railway risk assessment simulation H. Zhang, E. Cutright & T. Giras Center of Rail Safety-Critical Excellence, University of Virginia,

More information

Methods of evaluating estimators and best unbiased estimators Hamid R. Rabiee

Methods of evaluating estimators and best unbiased estimators Hamid R. Rabiee Stochastic Processes Methods of evaluating estimators and best unbiased estimators Hamid R. Rabiee 1 Outline Methods of Mean Squared Error Bias and Unbiasedness Best Unbiased Estimators CR-Bound for variance

More information

Canonical Correlation Analysis of Longitudinal Data

Canonical Correlation Analysis of Longitudinal Data Biometrics Section JSM 2008 Canonical Correlation Analysis of Longitudinal Data Jayesh Srivastava Dayanand N Naik Abstract Studying the relationship between two sets of variables is an important multivariate

More information

Method of Conditional Moments Based on Incomplete Data

Method of Conditional Moments Based on Incomplete Data , ISSN 0974-570X (Online, ISSN 0974-5718 (Print, Vol. 20; Issue No. 3; Year 2013, Copyright 2013 by CESER Publications Method of Conditional Moments Based on Incomplete Data Yan Lu 1 and Naisheng Wang

More information