Asymptotics of Approximate Least Squares Estimators of Parameters of a Two-Dimensional Chirp Signal


Rhythm Grover, Debasis Kundu and Amit Mitra

Department of Mathematics, Indian Institute of Technology Kanpur, Kanpur, India

Corresponding author: kundu@iitk.ac.in

In this paper, we address the problem of parameter estimation of a 2-D chirp model under the assumption that the errors are stationary. We extend the 2-D periodogram method for the sinusoidal model, used to find initial values for any iterative procedure that computes the least squares estimators (LSEs) of the unknown parameters, to the 2-D chirp model. Next we propose an estimator, known as the approximate least squares estimator (ALSE), that is obtained by maximising a periodogram-type function and is observed to be asymptotically equivalent to the LSE. Moreover, the asymptotic properties of these estimators are obtained under slightly milder conditions than those required for the LSEs. For the multiple component 2-D chirp model, we propose a sequential method of estimation of the ALSEs, which significantly reduces the computational difficulty involved in computing the LSEs and the ALSEs. We perform some simulation studies to see how the proposed method works, and a data set has been analysed for illustrative purposes.

Key Words and Phrases: Least squares estimators; chirp model; non-linear regression; asymptotic normality.

1 Introduction

A two-dimensional chirp signal model is expressed mathematically as follows:

y(m, n) = A⁰ cos(α⁰m + β⁰m² + γ⁰n + δ⁰n²) + B⁰ sin(α⁰m + β⁰m² + γ⁰n + δ⁰n²) + X(m, n);
m = 1, ..., M; n = 1, ..., N.    (1)

Here the y(m, n)s are the signal observations, A⁰ and B⁰ are real valued, non-zero amplitudes, and {α⁰, γ⁰} and {β⁰, δ⁰} are the frequencies and the frequency rates, respectively. The random variables {X(m, n)} form a sequence of stationary errors; the explicit assumptions on the error structure are provided in section 2. The above model has been considered in many areas of image processing, particularly in modeling gray images. Several estimation techniques for the unknown parameters of this model have been considered by different authors, for instance, Friedlander and Francos [7], Francos and Friedlander [5], [6], Lahiri [10], [11] and the references cited therein.

Our goal is to estimate the unknown parameters of the above model, primarily the non-linear parameters: the frequencies α⁰, γ⁰ and the frequency rates β⁰, δ⁰, under certain suitable assumptions. One of the most straightforward and efficient ways to do so is to use the least squares estimation method. But the least squares surface is highly non-linear, so iterative methods must be employed for its minimisation, and for these methods to work we need good starting points for the unknown parameters.

One of the fundamental models in the statistical signal processing literature, among the 2-D models, is the 2-D sinusoidal model. This model has applications in many fields such as biomedical spectral analysis, geophysical perception etc. For references, see Barbieri and Barone [1], Cabrera and Bose [3], Hua [8], Zhang and Mandrekar [17], Prasad et al. [13], Nandi et al. [12] and Kundu and Nandi [9]. A 2-D sinusoidal model has the following mathematical expression:

y(m, n) = A⁰ cos(mλ⁰ + nµ⁰) + B⁰ sin(mλ⁰ + nµ⁰) + X(m, n);
m = 1, ..., M; n = 1, ..., N.
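The chirp model (1) above is straightforward to simulate numerically. The following sketch (NumPy; the parameter values, noise level and seed are illustrative assumptions, not the settings used later in the paper) generates one realisation of model (1) with i.i.d. Gaussian errors:

```python
import numpy as np

# Illustrative (assumed) parameter values for model (1)
M, N = 50, 50
A0, B0 = 2.0, 3.0              # linear parameters (amplitudes)
alpha0, beta0 = 1.5, 0.5       # frequency and frequency rate along m
gamma0, delta0 = 1.5, 0.75     # frequency and frequency rate along n

m = np.arange(1, M + 1).reshape(-1, 1)   # m = 1, ..., M (column)
n = np.arange(1, N + 1).reshape(1, -1)   # n = 1, ..., N (row)

# 2-D chirp phase: alpha*m + beta*m^2 + gamma*n + delta*n^2
phase = alpha0 * m + beta0 * m**2 + gamma0 * n + delta0 * n**2

rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.5, size=(M, N))    # i.i.d. errors, the simplest stationary case

y = A0 * np.cos(phase) + B0 * np.sin(phase) + X   # observations y(m, n)
```

Viewed as a gray-scale image, such a realisation shows the regular texture that motivates the image-processing applications mentioned above.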
For this model as well, the least squares surface is highly non-linear and thus we need good initial values for any iterative procedure to work. One of the most prevalent

methods to find the initial guesses for the 2-D sinusoidal model is the method of periodogram estimators. These are obtained by maximizing the 2-D periodogram function, which is defined as follows:

I(λ, µ) = (1/(MN)) |Σ_{m=1}^{M} Σ_{n=1}^{N} y(m, n) e^{−i(mλ + nµ)}|².

This periodogram function is maximized over the 2-D Fourier frequencies, that is, at (2πk/M, 2πj/N) for k = 1, ..., M and j = 1, ..., N. The estimators that are obtained by maximising the above periodogram function with respect to λ and µ simultaneously over the continuous space (0, π) × (0, π) are known as the approximate least squares estimators (ALSEs). Kundu and Nandi [9] proved that the ALSEs are consistent and asymptotically equivalent to the least squares estimators (LSEs). Analogously, we define a periodogram-type function for the 2-D chirp model defined in equation (1), as follows:

I(α, β, γ, δ) = (2/(MN)) |Σ_{m=1}^{M} Σ_{n=1}^{N} y(m, n) e^{−i(αm + βm² + γn + δn²)}|².    (2)

To find the initial values, we propose to maximise the above function at the grid points (2πk₁/M, 2πk₂/M², 2πj₁/N, 2πj₂/N²), k₁ = 1, ..., M; k₂ = 1, ..., M²; j₁ = 1, ..., N; j₂ = 1, ..., N², corresponding to the Fourier frequencies of the 2-D sinusoidal model. These starting values can be used in any iterative procedure to compute the LSEs and the ALSEs.

Next we propose to estimate the unknown parameters of model (1) by the approximate least squares estimation method. In this method, we maximize the periodogram-like function I(α, β, γ, δ) defined above with respect to α, β, γ and δ simultaneously over (0, π) × (0, π) × (0, π) × (0, π). The details of the methodology used to obtain the ALSEs are explained in section 3. We prove that these estimators are strongly consistent and asymptotically normally distributed under assumptions that are slightly milder than those required for the LSEs. Also, the convergence rates of the ALSEs are the same as those of the LSEs.

The rest of the paper is organized as follows. In the next section we state the model assumptions, some notations and some preliminary results. In section 3, we

give a brief description of the methodology. In section 4, we study the asymptotic properties of the one component 2-D chirp model, and in section 5, we propose a sequential method to obtain the LSEs and the ALSEs of the multicomponent 2-D chirp model and study their asymptotic properties. Numerical experiments and a simulated data analysis are illustrated in sections 6 and 7. In section 8, we conclude the paper. All the proofs are provided in the appendices.

2 Model Assumptions, Notations and Preliminary Results

Assumption 1. The error X(m, n) is stationary with the following form:

X(m, n) = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} a(j, k) ε(m − j, n − k),

where {ε(m, n)} is a double array sequence of independently and identically distributed (i.i.d.) random variables with mean zero, variance σ² and finite fourth moment, and the a(j, k)s are real constants such that

Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} |a(j, k)| < ∞.

We will use the following notation: θ = (A, B, α, β, γ, δ), the parameter vector; θ⁰ = (A⁰, B⁰, α⁰, β⁰, γ⁰, δ⁰), the true parameter vector; Θ = (−∞, ∞) × (−∞, ∞) × (0, π) × (0, π) × (0, π) × (0, π), the parameter space. Also, ϑ = (α, β, γ, δ) denotes the vector of the non-linear parameters.

Assumption 2. The true parameter vector θ⁰ is an interior point of Θ.

Note that the assumptions required to prove the strong consistency of the LSEs of the unknown parameters in this case are slightly different from those required to prove the consistency of the ALSEs. For the LSEs the parameter space for the linear parameters has to be bounded, though here we do not require that bound. For details on the assumptions for the consistency of the LSEs, see Lahiri [10]. We need the following results to proceed further:

Lemma 1. If (ω₁, ω₂, ψ₁, ψ₂) ∈ (0, π) × (0, π) × (0, π) × (0, π), then except for a countable number of points, and for s, t = 0, 1, 2, the following are true:

(a) lim_{min{M,N}→∞} (1/(MN)) Σ_{m=1}^{M} Σ_{n=1}^{N} cos(ω₁m + ω₂m² + ψ₁n + ψ₂n²) = 0,

(b) lim_{min{M,N}→∞} (1/(MN)) Σ_{m=1}^{M} Σ_{n=1}^{N} sin(ω₁m + ω₂m² + ψ₁n + ψ₂n²) = 0,

(c) lim_{min{M,N}→∞} (1/(M^{s+1}N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t sin(ω₁m + ω₂m² + ψ₁n + ψ₂n²) = 0,

(d) lim_{min{M,N}→∞} (1/(M^{s+1}N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t cos(ω₁m + ω₂m² + ψ₁n + ψ₂n²) = 0,

(e) lim_{min{M,N}→∞} (1/(M^{s+1}N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t cos²(ω₁m + ω₂m² + ψ₁n + ψ₂n²) = 1/(2(s+1)(t+1)),

(f) lim_{min{M,N}→∞} (1/(M^{s+1}N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t sin²(ω₁m + ω₂m² + ψ₁n + ψ₂n²) = 1/(2(s+1)(t+1)),

(g) sup_{α,β,γ,δ} |(1/(M^{s+1}N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t X(m, n) e^{i(αm + βm² + γn + δn²)}| → 0 a.s. as min{M, N} → ∞.

Proof. Refer to Lahiri [10].

Lemma 2. If (ω, ψ) ∈ (0, π) × (0, π), then except for a countable number of points, the following holds true:

lim_{n→∞} (1/n^{k+1}) Σ_{t=1}^{n} t^k cos(ωt + ψt²) = lim_{n→∞} (1/n^{k+1}) Σ_{t=1}^{n} t^k sin(ωt + ψt²) = 0;  k = 0, 1, 2.

Proof. Refer to Lahiri [10].
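Lemma 2 can be illustrated numerically. The sketch below is an empirical check at an arbitrarily chosen generic point (ω, ψ), not a proof; it shows that the normalised weighted trigonometric sums are already small for moderate n:

```python
import numpy as np

def weighted_chirp_sum(omega, psi, n, k, fn=np.cos):
    """(1/n^(k+1)) * sum_{t=1}^{n} t^k * fn(omega*t + psi*t^2), as in Lemma 2."""
    t = np.arange(1, n + 1, dtype=float)
    return np.sum(t**k * fn(omega * t + psi * t**2)) / n ** (k + 1)

# At a generic (omega, psi) these averages behave roughly like O(n^(-1/2)),
# so for n = 200000 they are small for k = 0, 1, 2 and for both cos and sin.
for k in (0, 1, 2):
    for fn in (np.cos, np.sin):
        assert abs(weighted_chirp_sum(1.3, 0.7, 200_000, k, fn)) < 0.05
```

The "except for a countable number of points" qualifier matters: at exceptional points (for example ψ a rational multiple of π combined with a matching ω) the sums need not vanish.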

Lemma 3. If (ω₁, ω₂, ω₃, ω₄) ∈ (0, π) × (0, π) × (0, π) × (0, π) and (ψ₁, ψ₂, ψ₃, ψ₄) ∈ (0, π) × (0, π) × (0, π) × (0, π), then except for a countable number of points, and for s, t = 0, 1, 2, the following are true:

(a) lim_{min{M,N}→∞} (1/(M^{s+1}N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t cos(ω₁m + ω₂m² + ω₃n + ω₄n²) cos(ψ₁m + ψ₂m² + ψ₃n + ψ₄n²) = 0,

(b) lim_{min{M,N}→∞} (1/(M^{s+1}N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t sin(ω₁m + ω₂m² + ω₃n + ω₄n²) sin(ψ₁m + ψ₂m² + ψ₃n + ψ₄n²) = 0,

(c) lim_{min{M,N}→∞} (1/(M^{s+1}N^{t+1})) Σ_{m=1}^{M} Σ_{n=1}^{N} m^s n^t sin(ω₁m + ω₂m² + ω₃n + ω₄n²) cos(ψ₁m + ψ₂m² + ψ₃n + ψ₄n²) = 0.

Proof. See Appendix D.

3 Method to obtain the ALSEs

Consider the periodogram-like function defined in (2). In matrix notation, it can be written as:

I(ϑ) = (2/(MN)) Yᵀ W(ϑ) W(ϑ)ᵀ Y.

Here, Y = [y(1, 1), ..., y(M, 1), ..., y(1, N), ..., y(M, N)]ᵀ is the observed data vector, and W(ϑ) is the MN × 2 matrix

W(ϑ) =
[ cos(α + β + γ + δ)          sin(α + β + γ + δ)
  cos(2α + 4β + γ + δ)        sin(2α + 4β + γ + δ)
  ...
  cos(Mα + M²β + γ + δ)       sin(Mα + M²β + γ + δ)
  ...
  cos(α + β + Nγ + N²δ)       sin(α + β + Nγ + N²δ)
  cos(2α + 4β + Nγ + N²δ)     sin(2α + 4β + Nγ + N²δ)
  ...
  cos(Mα + M²β + Nγ + N²δ)    sin(Mα + M²β + Nγ + N²δ) ],

whose row corresponding to the pair (m, n) is [cos(αm + βm² + γn + δn²), sin(αm + βm² + γn + δn²)].

In matrix notation, equation (1) can be written as:

Y = W(ϑ)φ + X,

where X = [X(1, 1), ..., X(M, 1), ..., X(1, N), ..., X(M, N)]ᵀ is the error vector and φ = [A B]ᵀ. The estimators obtained by maximising the function I(ϑ) are known as the approximate least squares estimators (ALSEs). We will show that the estimators obtained by maximising I(ϑ) are asymptotically equivalent to the estimators obtained by minimising the error sum of squares, that is the LSEs, and hence the former are termed the ALSEs. To do so, we require the following lemma:

Lemma 4. For ϑ ∈ (0, π) × (0, π) × (0, π) × (0, π), except for a countable number of points, we have the following result:

(1/(MN)) W(ϑ)ᵀ W(ϑ) → [[1/2, 0], [0, 1/2]] as min{M, N} → ∞.

Proof. Consider the following:

(1/(MN)) W(ϑ)ᵀ W(ϑ) = [[Ω₁₁, Ω₁₂], [Ω₂₁, Ω₂₂]],

where

Ω₁₁ = (1/(MN)) Σ_{m=1}^{M} Σ_{n=1}^{N} cos²(αm + βm² + γn + δn²),
Ω₁₂ = Ω₂₁ = (1/(MN)) Σ_{m=1}^{M} Σ_{n=1}^{N} cos(αm + βm² + γn + δn²) sin(αm + βm² + γn + δn²),
Ω₂₂ = (1/(MN)) Σ_{m=1}^{M} Σ_{n=1}^{N} sin²(αm + βm² + γn + δn²).

Now using Lemma 1 (c), (e) and (f), it can easily be seen that the matrix on the right hand side of the above equation tends to [[1/2, 0], [0, 1/2]], except for a countable number of points, and hence the result.

We know that to find the LSEs, we minimise the following error sum of squares:

Q(θ) = (Y − W(ϑ)φ)ᵀ (Y − W(ϑ)φ)    (3)

with respect to θ. If we fix ϑ, then the estimates of the linear parameters can be obtained by the separable regression technique of Richards [15], by minimizing Q(θ) with respect to A and B. Thus the estimate of φ⁰ = [A⁰ B⁰]ᵀ is given by:

φ̂(ϑ) = [Â(ϑ) B̂(ϑ)]ᵀ = (W(ϑ)ᵀ W(ϑ))⁻¹ W(ϑ)ᵀ Y.    (4)

Substituting Â(ϑ) and B̂(ϑ) in (3), we have:

Q(Â(ϑ), B̂(ϑ), ϑ) = Yᵀ (I − W(ϑ)(W(ϑ)ᵀW(ϑ))⁻¹W(ϑ)ᵀ) Y.

Using Lemma 4, we have the following relationship between the function Q(θ) and the periodogram-like function I(ϑ):

Q(Â(ϑ), B̂(ϑ), ϑ) = Yᵀ Y − I(ϑ) + o(1).

Here, a function f is o(1) if f → 0 as min{M, N} → ∞. Thus, the ϑ̂ that minimises Q(Â(ϑ), B̂(ϑ), ϑ) is asymptotically equivalent to ϑ̃, which maximises I(ϑ).

4 Asymptotic Properties of the ALSEs

In this section, we study the asymptotic properties of the proposed estimators, the ALSEs of model (1). The following theorem states the result on the consistency of the ALSEs.

Theorem 1. If assumptions 1 and 2 are satisfied, then θ̃ = (Ã, B̃, α̃, β̃, γ̃, δ̃), the ALSE of θ⁰, is a strongly consistent estimator of θ⁰, that is, θ̃ → θ⁰ a.s. as min{M, N} → ∞.

Proof. See Appendix A.

In the following theorem, we state the result obtained on the asymptotic distribution of the proposed estimators.

Theorem 2. If assumptions 1 and 2 are true, then the asymptotic distribution of (θ̃ − θ⁰)D is the same as that of (θ̂ − θ⁰)D as min{M, N} → ∞, where θ̃ = (Ã, B̃, α̃, β̃, γ̃, δ̃) is the ALSE of θ⁰, θ̂ = (Â, B̂, α̂, β̂, γ̂, δ̂) is the LSE of θ⁰, and D is the 6 × 6 diagonal matrix defined as:

D = diag(M^{1/2}N^{1/2}, M^{1/2}N^{1/2}, M^{3/2}N^{1/2}, M^{5/2}N^{1/2}, M^{1/2}N^{3/2}, M^{1/2}N^{5/2}).

Proof. See Appendix B.

5 Multiple Component 2-D Chirp Model

In this section, we consider a 2-D chirp model with multiple components, mathematically expressed in the following form:

y(m, n) = Σ_{k=1}^{p} [A_k⁰ cos(α_k⁰m + β_k⁰m² + γ_k⁰n + δ_k⁰n²) + B_k⁰ sin(α_k⁰m + β_k⁰m² + γ_k⁰n + δ_k⁰n²)] + X(m, n);
m = 1, ..., M; n = 1, ..., N.    (5)

Here y(m, n) are the observed data, the A_k⁰s and B_k⁰s are the amplitudes, the α_k⁰s and γ_k⁰s are the frequencies, and the β_k⁰s and δ_k⁰s are the frequency rates. The random variable sequence {X(m, n)} is a stationary error sequence. In practice, the number of components p is unknown and its estimation is an important and still open problem. For recent references on this model, see Zhang et al. [18] and Lahiri [10]. Here it is assumed that p is known and our main purpose is to estimate the unknown parameters of this model, primarily the non-linear parameters.

Finding the ALSEs for the above model is computationally challenging, especially when the number of components p is large. Even when p = 1, we need to solve a 4-D optimisation problem to obtain the ALSEs. Thus, we propose a sequential procedure to find these estimates. This method reduces the complexity of computation without compromising the efficiency of the estimators. We prove that the ALSEs obtained by the proposed sequential procedure are strongly consistent and have the same rates of convergence as the LSEs.

In the following subsection, we provide the algorithm to obtain the sequential ALSEs of the unknown parameters of the p component 2-D chirp signal. Let us denote ϑ_k = (α_k, β_k, γ_k, δ_k).

5.1 Algorithm to find the ALSEs:

Step 1: Maximize the periodogram-like function

I₁(ϑ) = (2/(MN)) [ (Σ_{m=1}^{M} Σ_{n=1}^{N} y(m, n) cos(αm + βm² + γn + δn²))² + (Σ_{m=1}^{M} Σ_{n=1}^{N} y(m, n) sin(αm + βm² + γn + δn²))² ].    (6)

We first obtain the non-linear parameter estimates ϑ̃₁ = (α̃₁, β̃₁, γ̃₁, δ̃₁). Then the linear parameter estimates can be obtained by substituting ϑ̃₁ in (4). Thus

Ã₁ = (2/(MN)) Σ_{m=1}^{M} Σ_{n=1}^{N} y(m, n) cos(α̃₁m + β̃₁m² + γ̃₁n + δ̃₁n²),
B̃₁ = (2/(MN)) Σ_{m=1}^{M} Σ_{n=1}^{N} y(m, n) sin(α̃₁m + β̃₁m² + γ̃₁n + δ̃₁n²).    (7)

Step 2: Now we have the estimates of the parameters of the first component of the observed signal. We subtract the contribution of the first component from the original signal vector Y to eliminate its effect and obtain a new data vector, say

Y₁ = Y − W(ϑ̃₁) [Ã₁ B̃₁]ᵀ.

Step 3: Now we compute ϑ̃₂ = (α̃₂, β̃₂, γ̃₂, δ̃₂) by maximizing I₂(ϑ), which is obtained by replacing the original data vector by the new data vector in (6), and then the linear parameters Ã₂ and B̃₂ can be obtained by substituting ϑ̃₂ in (4).

Step 4: Continue the process up to p steps.
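The four steps above can be sketched as follows. The optimiser is left abstract: `maximise_I` stands for any routine that maximises (6), for example a grid search over the Fourier-type frequencies refined by a Newton-type method. The helper names are mine, not the paper's:

```python
import numpy as np

def _zeta(theta_nl, M, N):
    """Phase alpha*m + beta*m^2 + gamma*n + delta*n^2 on the full (m, n) grid."""
    alpha, beta, gamma, delta = theta_nl
    m = np.arange(1, M + 1).reshape(-1, 1)
    n = np.arange(1, N + 1).reshape(1, -1)
    return alpha * m + beta * m**2 + gamma * n + delta * n**2

def linear_estimates(y, theta_nl):
    """Equation (7): A~ and B~ as normalised sums of the data against
    cos and sin evaluated at the estimated non-linear parameters."""
    M, N = y.shape
    zeta = _zeta(theta_nl, M, N)
    A = 2.0 / (M * N) * np.sum(y * np.cos(zeta))
    B = 2.0 / (M * N) * np.sum(y * np.sin(zeta))
    return A, B

def sequential_alse(y, p, maximise_I):
    """Steps 1-4: estimate one component at a time; after each step the
    fitted component is subtracted from the data (Step 2) before the next
    component is estimated (Step 3)."""
    M, N = y.shape
    estimates, resid = [], y.copy()
    for _ in range(p):
        theta_nl = maximise_I(resid)                 # Step 1 / Step 3
        A, B = linear_estimates(resid, theta_nl)
        estimates.append((A, B) + tuple(theta_nl))
        zeta = _zeta(theta_nl, M, N)
        resid = resid - (A * np.cos(zeta) + B * np.sin(zeta))  # Step 2
    return estimates
```

On noiseless data with p = 1, a `maximise_I` that returns the true ϑ makes `linear_estimates` recover the amplitudes approximately, up to the small cross terms ignored in (7); this is the mechanism behind Theorems 3 to 5.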

5.2 Asymptotic Properties

Further assumptions, required to study the consistency and to derive the asymptotic distribution of the proposed estimators, are stated as follows:

Assumption 3. θ_k⁰ is an interior point of Θ for all k = 1, ..., p, and the frequencies α_k⁰, γ_k⁰ and the frequency rates β_k⁰, δ_k⁰ are such that (α_i⁰, β_i⁰, γ_i⁰, δ_i⁰) ≠ (α_j⁰, β_j⁰, γ_j⁰, δ_j⁰) for all i ≠ j.

Assumption 4. The A_k⁰s and B_k⁰s satisfy the following relationship:

∞ > A₁⁰² + B₁⁰² > A₂⁰² + B₂⁰² > ... > A_p⁰² + B_p⁰² > 0.

In the following theorems, we state the results we obtained on the consistency of the proposed estimators.

Theorem 3. Under assumptions 1, 3 and 4, Ã₁, B̃₁, α̃₁, β̃₁, γ̃₁ and δ̃₁ are strongly consistent estimators of A₁⁰, B₁⁰, α₁⁰, β₁⁰, γ₁⁰ and δ₁⁰ respectively, that is, θ̃₁ → θ₁⁰ a.s. as min{M, N} → ∞.

Proof. See Appendix C.

Theorem 4. If assumptions 1, 3 and 4 are satisfied and p ≥ 2, then θ̃₂ → θ₂⁰ a.s. as min{M, N} → ∞.

Proof. See Appendix C.

The result obtained in the above theorem can be extended up to the p-th step. Thus for any k ≤ p, the ALSEs obtained at the k-th step are strongly consistent.

Theorem 5. If assumptions 1, 3 and 4 are satisfied, and if Ã_k, B̃_k, α̃_k, β̃_k, γ̃_k and δ̃_k are the estimators obtained at the k-th step with k > p, then Ã_k → 0 a.s. and B̃_k → 0 a.s. as min{M, N} → ∞.

Proof. See Appendix C.

Next we derive the asymptotic distribution of the proposed estimators. In the following theorem, we state the result on the distribution of the sequential ALSEs.

Theorem 6. If assumptions 1, 2, 3 and 4 are satisfied, then

(θ̃₁ − θ₁⁰)D →ᵈ N₆(0, σ²c Σ₁⁻¹),

where D is the diagonal matrix as defined in Theorem 2, c = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} a(j, k)², and

Σ₁⁻¹ = (2/(A₁⁰² + B₁⁰²)) ×
[ A₁⁰² + 17B₁⁰²   −16A₁⁰B₁⁰      −36B₁⁰   30B₁⁰   −36B₁⁰   30B₁⁰
  −16A₁⁰B₁⁰      17A₁⁰² + B₁⁰²   36A₁⁰   −30A₁⁰   36A₁⁰   −30A₁⁰
  −36B₁⁰          36A₁⁰           192     −180      0        0
   30B₁⁰         −30A₁⁰          −180      180      0        0
  −36B₁⁰          36A₁⁰            0        0      192     −180
   30B₁⁰         −30A₁⁰            0        0     −180      180 ].

Proof. See Appendix D.

The above result holds for all k ≤ p, as stated in the following theorem.

Theorem 7. If assumptions 1, 2, 3 and 4 are satisfied, then (θ̃_k − θ_k⁰)D →ᵈ N₆(0, σ²c Σ_k⁻¹), where Σ_k can be obtained by replacing A₁⁰ by A_k⁰ and B₁⁰ by B_k⁰ in Σ₁ defined above.

Proof. This proof can be obtained by proceeding exactly in the same manner as the proof of Theorem 6.

6 Simulation Studies

6.1 Simulation results for the one component model

We perform numerical simulations on model (1) with the following parameters:

A⁰ = , B⁰ = 3, α⁰ = .5, β⁰ = 0.5, γ⁰ = .5 and δ⁰ = 0.75.

The following error structures are used to generate the data:

1. X(m, n) = ε(m, n).    (8)

2. X(m, n) = ε(m, n) + 0.5 ε(m − 1, n) + 0.4 ε(m, n − 1) + 0.3 ε(m − 1, n − 1).    (9)

Here ε(m, n) ~ N(0, σ²). For the simulations we consider different values of σ and different values of M and N, as can be seen in the tables. We estimate the parameters both by the least squares estimation method and by the approximate least squares estimation method. These estimates are obtained 1000 times each, and the averages, biases and MSEs are reported. We also compute the asymptotic variances to compare with the corresponding MSEs.

From the tables, it is observed that as the error variance increases, the MSEs also increase for both the LSEs and the ALSEs. As the sample size increases, one can see that the estimates become closer to the corresponding true values, that is, the biases become small. Also, the MSEs decrease as the sample sizes M and N increase, and the order of the MSEs of both estimators is almost equivalent to the order of the asymptotic variances; hence, one may conclude that they are well matched. The MSEs of the ALSEs get close to those of the LSEs, and hence to the theoretical asymptotic variances of the LSEs, as M and N increase, showing that the two estimators are asymptotically equivalent.
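The stationary structure (9) is a first-order two-dimensional moving average and can be generated directly from a slightly enlarged array of i.i.d. innovations; a sketch (the seed is an arbitrary choice):

```python
import numpy as np

def ma_error_field(M, N, sigma, rng):
    """X(m, n) = e(m, n) + 0.5 e(m-1, n) + 0.4 e(m, n-1) + 0.3 e(m-1, n-1),
    the error structure (9), built from an (M+1) x (N+1) array of i.i.d.
    N(0, sigma^2) variables so that every lagged term exists."""
    e = rng.normal(0.0, sigma, size=(M + 1, N + 1))
    return (e[1:, 1:]              # e(m, n)
            + 0.5 * e[:-1, 1:]     # e(m-1, n)
            + 0.4 * e[1:, :-1]     # e(m, n-1)
            + 0.3 * e[:-1, :-1])   # e(m-1, n-1)
```

The marginal variance of such a field is (1 + 0.5² + 0.4² + 0.3²)σ² = 1.5σ², which gives a quick sanity check on the generator.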

[Table 1: Estimates of the parameters of model (1) when the errors are i.i.d. Gaussian random variables as defined in (8) and M = N = 25.]

[Table 2: Estimates of the parameters of model (1) when the errors are i.i.d. Gaussian random variables as defined in (8) and M = N = 50.]

[Table 3: Estimates of the parameters of model (1) when the errors are i.i.d. Gaussian random variables as defined in (8) and M = N = 75.]

[Table 4: Estimates of the parameters of model (1) when the errors are i.i.d. Gaussian random variables as defined in (8) and M = N = 100.]

[Table 5: Estimates of the parameters of model (1) when the errors are stationary random variables as defined in (9) and M = N = 25.]

[Table 6: Estimates of the parameters of model (1) when the errors are stationary random variables as defined in (9) and M = N = 50.]

[Table 7: Estimates of the parameters of model (1) when the errors are stationary random variables as defined in (9) and M = N = 75.]

[Table 8: Estimates of the parameters of model (1) when the errors are stationary random variables as defined in (9) and M = N = 100.]

6.2 Simulation results for the multiple component model with p = 2

Next we conduct numerical simulations on model (5) with p = 2 and the following parameters:

A₁⁰ = 5, B₁⁰ = 4, α₁⁰ = ., β₁⁰ = 0., γ₁⁰ = .5 and δ₁⁰ = 0.5;
A₂⁰ = 3, B₂⁰ = , α₂⁰ = .5, β₂⁰ = 0.5, γ₂⁰ = .75 and δ₂⁰ = 0.75.

The error structures used to generate the data are the same as those used for the one component model; see equations (8) and (9). For the simulations we consider different values of σ and different values of M and N, again the same as for the one component model. We estimate the parameters both by the least squares estimation method and by the approximate least squares estimation method. These estimates are obtained 1000 times each, and the averages, biases, MSEs and asymptotic variances are computed. The results are reported in the following tables.

From the tables, it can be seen that the estimates, both the ALSEs and the LSEs, are quite close to their true values. It is observed that the estimates of the second component are better than those of the first component, in the sense that their biases and MSEs are smaller and the MSEs are better matched with the corresponding asymptotic variances. For both estimators, as the sample size increases, the MSEs and the biases of the estimates of both components decrease, thus showing consistency.

7 Data Analysis

We analyse a simulated data set to exemplify how one can extract a regular gray-scale texture from one that is contaminated with noise. The data y(m, n) are generated using (1) with the following true parameter values:

A⁰ = 6, B⁰ = 6, α⁰ = .75, β⁰ = 0.05, γ⁰ = .5 and δ⁰ = 0.075

[Table 9: Estimates of the parameters of model (5) when the errors are i.i.d. Gaussian random variables as defined in (8) and M = N = 25.]

[Table 10: Estimates of the parameters of model (5) when the errors are i.i.d. Gaussian random variables as defined in (8) and M = N = 50.]

[Table 11: Estimates of the parameters of model (5) when the errors are i.i.d. Gaussian random variables as defined in (8) and M = N = 75.]

[Table 12: Estimates of the parameters of model (5) when the errors are i.i.d. Gaussian random variables as defined in (8) and M = N = 100.]

[Table 13: Estimates of the parameters of model (5) when the errors are stationary random variables as defined in (9) and M = N = 25.]

[Table 14: Estimates of the parameters of model (5) when the errors are stationary random variables as defined in (9) and M = N = 50.]

[Table 15: Estimates of the parameters of model (5) when the errors are stationary random variables as defined in (9) and M = N = 75.]

[Table 16: Estimates of the parameters of model (5) when the errors are stationary random variables as defined in (9) and M = N = 100.]

and the error random variables {X(m, n)} are generated as follows:

X(m, n) = ε(m, n) + ρ₁ ε(m − 1, n) + ρ₂ ε(m, n − 1) + ρ₃ ε(m − 1, n − 1),

where the ε(m, n) are Gaussian random variables with mean 0 and variance σ², and ρ₁ = 0.5, ρ₂ = 0.4 and ρ₃ = 0.3. We generate y(m, n) for M = N = 100.

Figure 1 displays the true signal generated with the above mentioned parameters, and Figure 2 displays the noisy signal, that is, the true signal along with the additive error X(m, n) defined above.

[Figure 1: True Signal.] [Figure 2: Noisy Signal.]

Using the generated data matrix, we now fit model (1) using both the least squares estimation method and the approximate least squares estimation method. The values of the estimates that we obtain are:

LSEs: Â = , B̂ = , α̂ = , β̂ = , γ̂ = , δ̂ = .

ALSEs: Ã = , B̃ = , α̃ = .7567, β̃ = , γ̃ = .50075, δ̃ = .

[Figure 3: Estimated Signal using the LSEs.] [Figure 4: Estimated Signal using the ALSEs.]

We plot the estimated signals using the above obtained estimates; the plots of the signals estimated using the LSEs and the ALSEs are given in Figures 3 and 4 respectively. We may conclude that the estimated signal plots, using both the LSEs and the ALSEs, match the true signal plot well, as is evident from the figures above.

8 Conclusion

In this paper, we propose approximate least squares estimators (ALSEs) to estimate the unknown parameters of a one component 2-D chirp model and study their asymptotic properties. We show that they are strongly consistent, asymptotically normally distributed, and asymptotically equivalent to the LSEs. The consistency of the ALSEs of the linear parameters is obtained under slightly weaker conditions than those required for the LSEs of the linear parameters, as we need not bound the parameter space in the former case. Also, the rate of convergence of the linear parameter estimates is M^{−1/2}N^{−1/2}, that of the frequencies α and γ is M^{−3/2}N^{−1/2} and M^{−1/2}N^{−3/2} respectively, and that of the frequency rates β and δ is M^{−5/2}N^{−1/2} and M^{−1/2}N^{−5/2} respectively, the same as for the corresponding LSEs. Through the simulation studies as well, we deduce that the estimators are consistent and asymptotically equivalent to the LSEs. We also propose a sequential procedure to obtain the ALSEs of a multiple component 2-D chirp model, with the number of components assumed known, and study their asymptotic properties. We see that the results obtained for the one component model can be extended to the generalised model, that is, the

multiple component model.

Appendix A

The following lemmas are required to prove Theorem 1.

Lemma 5. Consider the set $S_c^{\vartheta^0} = \{\vartheta : |\vartheta - \vartheta^0| \geq 4c\}$. If, for any $c > 0$,
$$\limsup \sup_{\vartheta \in S_c^{\vartheta^0}} \big(I(\vartheta) - I(\vartheta^0)\big) < 0 \quad a.s., \qquad (10)$$
then $\tilde{\vartheta} \to \vartheta^0$ almost surely as $\min\{M, N\} \to \infty$.

Proof. Let us denote $\vartheta^0 = (\alpha^0, \beta^0, \gamma^0, \delta^0)$, and write the ALSE $\tilde{\vartheta} = (\tilde{\alpha}, \tilde{\beta}, \tilde{\gamma}, \tilde{\delta})$ as $\tilde{\vartheta}_{MN}$ to assert that it depends on $M$ and $N$. Suppose (10) is true and $\tilde{\vartheta}_{MN}$ does not converge to $\vartheta^0$ almost surely as $\min\{M, N\} \to \infty$. Then there exist a $c > 0$ and a subsequence $\{M_k, N_k\}$ of $\{M, N\}$ such that $\tilde{\vartheta}_{M_k N_k} \in S_c^{\vartheta^0}$ for all $k = 1, 2, \ldots$ Since $\tilde{\vartheta}_{M_k N_k}$ is the ALSE of $\vartheta^0$ when $M = M_k$ and $N = N_k$, $I_{M_k N_k}(\tilde{\vartheta}_{M_k N_k}) \geq I_{M_k N_k}(\vartheta^0)$, so that
$$\limsup \sup_{\vartheta \in S_c^{\vartheta^0}} \big(I_{M_k N_k}(\vartheta) - I_{M_k N_k}(\vartheta^0)\big) \geq 0 \quad a.s.$$
This contradicts (10). Hence $\tilde{\alpha} \xrightarrow{a.s.} \alpha^0$, $\tilde{\beta} \xrightarrow{a.s.} \beta^0$, $\tilde{\gamma} \xrightarrow{a.s.} \gamma^0$ and $\tilde{\delta} \xrightarrow{a.s.} \delta^0$.

Lemma 6. If Assumptions 1 and 2 are satisfied, then:
$$M(\tilde{\alpha} - \alpha^0) \xrightarrow{a.s.} 0, \quad M^2(\tilde{\beta} - \beta^0) \xrightarrow{a.s.} 0, \quad N(\tilde{\gamma} - \gamma^0) \xrightarrow{a.s.} 0 \quad \text{and} \quad N^2(\tilde{\delta} - \delta^0) \xrightarrow{a.s.} 0.$$

Proof. Let $I'(\vartheta)$ be the $4 \times 1$ first derivative vector and $I''(\vartheta)$ the $4 \times 4$ second derivative matrix of $I(\vartheta)$. Using a multivariate Taylor series expansion of $I'(\vartheta)$ around $\vartheta^0$, we have:
$$I'(\tilde{\vartheta}) - I'(\vartheta^0) = (\tilde{\vartheta} - \vartheta^0)\, I''(\bar{\vartheta}),$$
where $\bar{\vartheta}$ is a point on the line segment joining $\tilde{\vartheta}$ and $\vartheta^0$. Since $\tilde{\vartheta}$ is the ALSE of $\vartheta^0$, $I'(\tilde{\vartheta}) = 0$, and therefore
$$(\tilde{\vartheta} - \vartheta^0)\, D^{-1} = -\big[I'(\vartheta^0) D\big]\big[D\, I''(\bar{\vartheta})\, D\big]^{-1}, \qquad (12)$$

30 30 where, D = diag(m 3/ N /, M 5/ N /, M / N 3/, M / N 5/. To show that the left hand side of equation ( goes to 0 as min{m, N}, we first consider the vector I (ϑ 0 = [ [ = I(ϑ 0 α I(ϑ 0 M N α I(ϑ 0 β I(ϑ 0 M 3 N β From (, we can write I(ϑ as: I(ϑ 0 γ I(ϑ 0 δ I(ϑ 0 γ M 3 N ] I(ϑ 0 3 δ 0 M 5 N M N = ( N M + y(m, n cos(α 0 m + β 0 m + γ 0 n + δ 0 n ( N M. y(m, n sin(α 0 m + β 0 m + γ 0 n + δ 0 n ] M N 5 ( Thus, I(ϑ 0 M N α = 4 ( N M 3 N n= ( + 4 M 3 N M m= ( N M ( N M y(m, n cos(α 0 m + β 0 m + γ 0 n + δ 0 n m y(m, n sin(α 0 m + β 0 m + γ 0 n + δ 0 n y(m, n sin(α 0 m + β 0 m + γ 0 n + δ 0 n m y(m, n cos(α 0 m + β 0 m + γ 0 n + δ 0 n. Now using equation (, taking limit as min{m, N}, and then using Lemma, parts (c-(g, we have: I(ϑ 0 M N α a.s. 0. On similar lines, one can show that rest of the elements of the above vector ( tend to 0 as min{m, N}. Thus, we have: lim n I (ϑ 0 = 0. (3

Now we consider the second derivative matrix $D\, I''(\bar{\vartheta})\, D$. Since $I''(\vartheta)$ is a continuous function of $\vartheta$,
$$\lim_{\min\{M,N\}\to\infty} D\, I''(\bar{\vartheta})\, D = \lim_{\min\{M,N\}\to\infty} D\, I''(\vartheta^0)\, D,$$
where $D\, I''(\vartheta^0)\, D$ is the $4 \times 4$ symmetric matrix whose entries are the second order partial derivatives of $I$ at $\vartheta^0$, scaled by the corresponding diagonal entries of $D$; for instance, its $(1,1)$th and $(1,2)$th entries are $\frac{1}{M^3 N}\frac{\partial^2 I(\vartheta^0)}{\partial \alpha^2}$ and $\frac{1}{M^4 N}\frac{\partial^2 I(\vartheta^0)}{\partial \alpha\, \partial \beta}$, respectively. Using Lemma 1, parts (c)-(g), provided in Section 2, we obtain the following:
$$\lim_{\min\{M,N\}\to\infty} D\, I''(\vartheta^0)\, D = -S \quad a.s., \qquad (14)$$
where $S$ is a symmetric matrix whose entries are constant multiples of $(A^{0^2} + B^{0^2})$ and $S > 0$. Here, a matrix $A > 0$ means that it is a positive definite matrix. From (13) and (14), we get the desired result.

Proof of Theorem 1: We first prove the consistency of the non-linear parameter estimators. Consider the difference:
$$I(\vartheta) - I(\vartheta^0) = 2\Big|\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)\, e^{-i(\alpha m + \beta m^2 + \gamma n + \delta n^2)}\Big|^2 - 2\Big|\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)\, e^{-i(\alpha^0 m + \beta^0 m^2 + \gamma^0 n + \delta^0 n^2)}\Big|^2.$$
Consider the set $S_c^{\vartheta^0}$ defined in Lemma 5. We split this set into four sets, and thus it can be rewritten as $S_c^{\vartheta^0} \subseteq S_c^{(\vartheta^0,\alpha)} \cup S_c^{(\vartheta^0,\beta)} \cup S_c^{(\vartheta^0,\gamma)} \cup S_c^{(\vartheta^0,\delta)}$. Here, $S_c^{(\vartheta^0,\alpha)} = \{\alpha : |\alpha - \alpha^0| > c\}$, $S_c^{(\vartheta^0,\beta)} = \{\beta : |\beta - \beta^0| > c\}$, $S_c^{(\vartheta^0,\gamma)} = \{\gamma : |\gamma - \gamma^0| > c\}$

and $S_c^{(\vartheta^0,\delta)} = \{\delta : |\delta - \delta^0| > c\}$. Then
$$\limsup \sup_{\vartheta \in S_c^{(\vartheta^0,\alpha)}} \big(I(\vartheta) - I(\vartheta^0)\big) = -\lim_{\min\{M,N\}\to\infty} 2\Big[\Big\{\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} A^0\cos^2(\alpha^0 m + \beta^0 m^2 + \gamma^0 n + \delta^0 n^2)\Big\}^2 + \Big\{\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} B^0\sin^2(\alpha^0 m + \beta^0 m^2 + \gamma^0 n + \delta^0 n^2)\Big\}^2\Big] = -\frac{1}{2}\big(A^{0^2} + B^{0^2}\big) < 0 \quad a.s.,$$
using Lemma 1, parts (e) and (f). Similarly, this can be shown for all the other sets $S_c^{(\vartheta^0,\beta)}$, $S_c^{(\vartheta^0,\gamma)}$ and $S_c^{(\vartheta^0,\delta)}$. Hence, combining, we have:
$$\limsup \sup_{\vartheta \in S_c^{\vartheta^0}} \big(I(\vartheta) - I(\vartheta^0)\big) < 0 \quad a.s.$$
Thus, using Lemma 5, we get the desired result.

Appendix B

Proof of Theorem 2: Consider the following:
$$Q(\theta) = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\big(y(m,n) - A\cos(\vartheta^T u(m,n)) - B\sin(\vartheta^T u(m,n))\big)^2 = C - J(\theta) + o(1).$$
Here, $C = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)^2$, and
$$J(\theta) = \frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)\big(A\cos(\alpha m + \beta m^2 + \gamma n + \delta n^2) + B\sin(\alpha m + \beta m^2 + \gamma n + \delta n^2)\big) - \frac{A^2 + B^2}{2}.$$
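The decomposition $Q(\theta) = C - J(\theta) + o(1)$ can be checked numerically: by direct algebra, $Q(\theta) - (C - J(\theta)) = \frac{1}{MN}\sum\sum \mu(\theta; m,n)^2 - (A^2+B^2)/2$, which is small for large $M$, $N$. A minimal sketch, with assumed parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
M = N = 100
m = np.arange(1, M + 1)[:, None]
n = np.arange(1, N + 1)[None, :]

# Observed data from an assumed one-component chirp plus i.i.d. noise.
A0, B0 = 2.0, 3.0
ph0 = 1.5 * m + 0.5 * m**2 + 2.5 * n + 0.75 * n**2
y = A0 * np.cos(ph0) + B0 * np.sin(ph0) + rng.normal(0.0, 1.0, (M, N))

# Evaluate Q and C - J at an arbitrary parameter point theta.
A, B, alpha, beta, gamma, delta = 1.0, 1.0, 0.7, 0.3, 1.1, 0.2
ph = alpha * m + beta * m**2 + gamma * n + delta * n**2
mu = A * np.cos(ph) + B * np.sin(ph)

Q = np.mean((y - mu) ** 2)
C = np.mean(y**2)
J = 2.0 * np.mean(y * mu) - (A**2 + B**2) / 2.0
# Q - (C - J) = mean(mu^2) - (A^2 + B^2)/2, an o(1) term.
```

The gap $Q - (C - J)$ is deterministic (the noise cancels exactly), so the check does not depend on the random seed.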

Now we compute the first derivatives of $J(\theta)$ and $Q(\theta)$ at $\theta = \theta^0$ and, using Lemma 1, parts (c)-(g), we obtain the following relation between them:
$$Q'(\theta^0)\, D_1 = -J'(\theta^0)\, D_1 + \big(o(M^{1/2}N^{1/2}),\ o(M^{1/2}N^{1/2}),\ o(M^{3/2}N^{1/2}),\ o(M^{5/2}N^{1/2}),\ o(M^{1/2}N^{3/2}),\ o(M^{1/2}N^{5/2})\big)^T D_1, \qquad (15)$$
where $D_1 = \mathrm{diag}\big(M^{-1/2}N^{-1/2},\ M^{-1/2}N^{-1/2},\ M^{-3/2}N^{-1/2},\ M^{-5/2}N^{-1/2},\ M^{-1/2}N^{-3/2},\ M^{-1/2}N^{-5/2}\big)$. Since the second term on the right hand side of equation (15) goes to $0$ as $\min\{M,N\} \to \infty$, we have:
$$\lim_{\min\{M,N\}\to\infty} Q'(\theta^0)\, D_1 = -\lim_{\min\{M,N\}\to\infty} J'(\theta^0)\, D_1. \qquad (16)$$
It can be easily seen that, at $\tilde{A} = \hat{A}(\alpha, \beta, \gamma, \delta)$ and $\tilde{B} = \hat{B}(\alpha, \beta, \gamma, \delta)$,
$$J(\tilde{A}, \tilde{B}, \alpha, \beta, \gamma, \delta) = I(\alpha, \beta, \gamma, \delta),$$
where $I(\alpha, \beta, \gamma, \delta)$ is as defined in (2). Hence the estimator of $\theta^0$ which maximises $J(\theta)$ is equivalent to $\tilde{\theta}$, the ALSE of $\theta^0$. Thus, using a Taylor series expansion, the ALSE $\tilde{\theta}$ in terms of $J(\theta)$ can be written as:
$$(\tilde{\theta} - \theta^0)\, D_1^{-1} = -\big[J'(\theta^0)\, D_1\big]\big[D_1\, J''(\bar{\theta})\, D_1\big]^{-1}. \qquad (17)$$
Note that $\lim_{\min\{M,N\}\to\infty} D_1 J''(\bar{\theta}) D_1 = \lim_{\min\{M,N\}\to\infty} D_1 J''(\theta^0) D_1$. Now, comparing the corresponding elements of the second derivative matrices $D_1 J''(\theta^0) D_1$ and $D_1 Q''(\theta^0) D_1$, and using Lemma 1, parts (c)-(f), on each of the derivatives as was done for the first derivative vectors above, we obtain the following relation:
$$\lim_{\min\{M,N\}\to\infty} D_1\, J''(\theta^0)\, D_1 = -\lim_{\min\{M,N\}\to\infty} D_1\, Q''(\theta^0)\, D_1 = -\Sigma, \qquad (18)$$
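The identity $J(\tilde{A}, \tilde{B}, \alpha, \beta, \gamma, \delta) = I(\alpha, \beta, \gamma, \delta)$, with $\tilde{A}$ and $\tilde{B}$ the ALSEs of the linear parameters for fixed non-linear parameters, is exact for any data matrix and can be verified directly. A sketch, using the normalisation $I(\vartheta) = 2\,|(1/MN)\sum\sum y\, e^{-i\,\text{phase}}|^2$ adopted in this appendix:

```python
import numpy as np

def alse_amplitudes(y, alpha, beta, gamma, delta):
    """ALSEs of A and B for fixed non-linear parameters:
    A = (2/MN) sum y cos(phase), B = (2/MN) sum y sin(phase)."""
    M, N = y.shape
    m = np.arange(1, M + 1)[:, None]
    n = np.arange(1, N + 1)[None, :]
    ph = alpha * m + beta * m**2 + gamma * n + delta * n**2
    A = 2.0 * np.mean(y * np.cos(ph))
    B = 2.0 * np.mean(y * np.sin(ph))
    return A, B, ph

rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, (40, 60))   # the identity holds for any data matrix
theta = (0.9, 0.2, 1.3, 0.4)         # assumed non-linear parameter point

A, B, ph = alse_amplitudes(y, *theta)
# J evaluated at (A-tilde, B-tilde, theta):
J = 2.0 * np.mean(y * (A * np.cos(ph) + B * np.sin(ph))) - (A**2 + B**2) / 2.0
# Periodogram-type function I(theta):
I = 2.0 * np.abs(np.mean(y * np.exp(-1j * ph))) ** 2
```

Both quantities reduce algebraically to $(\tilde{A}^2 + \tilde{B}^2)/2$, so they agree up to floating-point roundoff.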

where $\Sigma = \lim_{\min\{M,N\}\to\infty} D_1\, Q''(\theta^0)\, D_1$ is a symmetric positive definite $6 \times 6$ matrix whose entries depend only on $A^0$ and $B^0$. Using (16) and (18) in equation (17), we have:
$$\lim_{\min\{M,N\}\to\infty} (\tilde{\theta} - \theta^0)\, D_1^{-1} = \lim_{\min\{M,N\}\to\infty} (\hat{\theta} - \theta^0)\, D_1^{-1}.$$
It follows that the LSE $\hat{\theta}$ and the ALSE $\tilde{\theta}$ of $\theta^0$ of model (1) are asymptotically equivalent in distribution.

Appendix C

The following lemmas are required to prove Theorem 3.

Lemma 7. Consider the set $S_c^{\vartheta_1^0}$ defined in Lemma 5. If, for some $c > 0$, $\limsup \sup_{S_c^{\vartheta_1^0}} \big(I_1(\vartheta) - I_1(\vartheta_1^0)\big) < 0$ a.s., then $\tilde{\vartheta}_1$ is a strongly consistent estimator of $\vartheta_1^0$. Here $I_1(\vartheta)$ is as defined in (6).

Proof. This proof can be obtained along the same lines as that of Lemma 5.

Lemma 8. If Assumptions 1 and 3 are true, then the following holds:
$$(\tilde{\vartheta}_1 - \vartheta_1^0)\, D^{-1} \xrightarrow{a.s.} 0.$$

Proof. This proof can be obtained along the same lines as that of Lemma 6, by replacing $I(\vartheta)$ by $I_1(\vartheta)$.

Proof of Theorem 3: First we prove the consistency of the estimators of the non-linear parameters of the first component of the model. For notational simplicity, we assume $p = 2$. We consider the difference:
$$I_1(\vartheta) - I_1(\vartheta_1^0) = I_1(\alpha, \beta, \gamma, \delta) - I_1(\alpha_1^0, \beta_1^0, \gamma_1^0, \delta_1^0) = 2\Big|\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)\, e^{-i(\alpha m + \beta m^2 + \gamma n + \delta n^2)}\Big|^2 - 2\Big|\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)\, e^{-i(\alpha_1^0 m + \beta_1^0 m^2 + \gamma_1^0 n + \delta_1^0 n^2)}\Big|^2.$$
The set $S_c^{\vartheta_1^0}$ can be split into two parts and written as $\bar{S}_c^{1} \cup \bar{S}_c^{2}$, where $\bar{S}_c^{1} = \{\vartheta : |\vartheta - \vartheta_1^0| > 4c;\ \vartheta = \vartheta_2^0\}$ and $\bar{S}_c^{2} = \{\vartheta : |\vartheta - \vartheta_1^0| > 4c;\ \vartheta \neq \vartheta_2^0\}$. Expanding $y(m,n)$ as the sum of the two signal components, $\sum_{k=1}^{2}\big(A_k^0\cos(\alpha_k^0 m + \beta_k^0 m^2 + \gamma_k^0 n + \delta_k^0 n^2) + B_k^0\sin(\alpha_k^0 m + \beta_k^0 m^2 + \gamma_k^0 n + \delta_k^0 n^2)\big)$, plus the error $X(m,n)$, and applying Lemma 1, parts (c)-(g), to each of the resulting product terms, we obtain:
$$\limsup \sup_{\bar{S}_c^{1}} \big(I_1(\alpha, \beta, \gamma, \delta) - I_1(\alpha_1^0, \beta_1^0, \gamma_1^0, \delta_1^0)\big) = -\frac{1}{4}\big(A_1^{0^2} + B_1^{0^2} - A_2^{0^2} - B_2^{0^2}\big) < 0 \quad a.s.,$$
by Assumption 4. Similarly,
$$\limsup \sup_{\bar{S}_c^{2}} \big(I_1(\alpha, \beta, \gamma, \delta) - I_1(\alpha_1^0, \beta_1^0, \gamma_1^0, \delta_1^0)\big) = -\frac{1}{4}\big(A_1^{0^2} + B_1^{0^2}\big) < 0 \quad a.s.$$
Therefore, using Lemma 7, we get the consistency of the non-linear parameter estimators of the

first component. Now, to prove the consistency of the linear parameter estimators $\tilde{A}_1$ and $\tilde{B}_1$, observe that
$$\tilde{A}_1 = \frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)\cos(\tilde{\alpha}_1 m + \tilde{\beta}_1 m^2 + \tilde{\gamma}_1 n + \tilde{\delta}_1 n^2) = \frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} \sum_{k=1}^{p}\big(A_k^0\cos(\alpha_k^0 m + \beta_k^0 m^2 + \gamma_k^0 n + \delta_k^0 n^2) + B_k^0\sin(\alpha_k^0 m + \beta_k^0 m^2 + \gamma_k^0 n + \delta_k^0 n^2)\big)\cos(\tilde{\alpha}_1 m + \tilde{\beta}_1 m^2 + \tilde{\gamma}_1 n + \tilde{\delta}_1 n^2) + \frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} X(m,n)\cos(\tilde{\alpha}_1 m + \tilde{\beta}_1 m^2 + \tilde{\gamma}_1 n + \tilde{\delta}_1 n^2).$$
Using Lemma 1, part (g),
$$\frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} X(m,n)\cos(\tilde{\alpha}_1 m + \tilde{\beta}_1 m^2 + \tilde{\gamma}_1 n + \tilde{\delta}_1 n^2) \to 0.$$
Now, expanding $\cos(\tilde{\alpha}_1 m + \tilde{\beta}_1 m^2 + \tilde{\gamma}_1 n + \tilde{\delta}_1 n^2)$ by a multivariate Taylor series around $(\alpha_1^0, \beta_1^0, \gamma_1^0, \delta_1^0)$ and using Lemmas 8 and 1, parts (c)-(f), we get the desired result.

Proof of Theorem 4: From Theorem 3 and Lemmas 6 and 7, we have the following:
$$\tilde{A}_1 = A_1^0 + o(1), \quad \tilde{B}_1 = B_1^0 + o(1), \quad \tilde{\alpha}_1 = \alpha_1^0 + o(M^{-1}), \quad \tilde{\beta}_1 = \beta_1^0 + o(M^{-2}), \quad \tilde{\gamma}_1 = \gamma_1^0 + o(N^{-1}), \quad \tilde{\delta}_1 = \delta_1^0 + o(N^{-2}).$$
Thus,
$$\tilde{A}_1\cos(\tilde{\alpha}_1 m + \tilde{\beta}_1 m^2 + \tilde{\gamma}_1 n + \tilde{\delta}_1 n^2) + \tilde{B}_1\sin(\tilde{\alpha}_1 m + \tilde{\beta}_1 m^2 + \tilde{\gamma}_1 n + \tilde{\delta}_1 n^2) = A_1^0\cos(\alpha_1^0 m + \beta_1^0 m^2 + \gamma_1^0 n + \delta_1^0 n^2) + B_1^0\sin(\alpha_1^0 m + \beta_1^0 m^2 + \gamma_1^0 n + \delta_1^0 n^2) + o(1). \qquad (19)$$
Consider the set $S_c^{\vartheta_2^0}$, which can be split into $p$ sets and written as $\bar{S}_c^{1} \cup \bar{S}_c^{2} \cup \cdots \cup \bar{S}_c^{p}$, where
$$\bar{S}_c^{1} = \{\vartheta : |\vartheta - \vartheta_2^0| > 4c;\ \vartheta = \vartheta_1^0\}, \quad \bar{S}_c^{2} = \{\vartheta : |\vartheta - \vartheta_2^0| > 4c;\ \vartheta = \vartheta_3^0\}, \quad \ldots, \quad \bar{S}_c^{p-1} = \{\vartheta : |\vartheta - \vartheta_2^0| > 4c;\ \vartheta = \vartheta_p^0\},$$
and $\bar{S}_c^{p} = \{\vartheta : |\vartheta - \vartheta_2^0| > 4c;\ \vartheta \neq \vartheta_k^0 \text{ for any } k = 1, \ldots, p\}$.

Now we consider the following difference:
$$I_2(\alpha, \beta, \gamma, \delta) - I_2(\alpha_2^0, \beta_2^0, \gamma_2^0, \delta_2^0) = 2\Big|\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} y_1(m,n)\, e^{-i(\alpha m + \beta m^2 + \gamma n + \delta n^2)}\Big|^2 - 2\Big|\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} y_1(m,n)\, e^{-i(\alpha_2^0 m + \beta_2^0 m^2 + \gamma_2^0 n + \delta_2^0 n^2)}\Big|^2. \qquad (20)$$
Here, $y_1(m,n) = y(m,n) - \tilde{A}_1\cos(\tilde{\alpha}_1 m + \tilde{\beta}_1 m^2 + \tilde{\gamma}_1 n + \tilde{\delta}_1 n^2) - \tilde{B}_1\sin(\tilde{\alpha}_1 m + \tilde{\beta}_1 m^2 + \tilde{\gamma}_1 n + \tilde{\delta}_1 n^2)$, that is, the new data obtained by removing the effect of the first component from the observed data $y(m,n)$. Using (19), we have:
$$y_1(m,n) = o(1) + \sum_{k=2}^{p}\big(A_k^0\cos(\alpha_k^0 m + \beta_k^0 m^2 + \gamma_k^0 n + \delta_k^0 n^2) + B_k^0\sin(\alpha_k^0 m + \beta_k^0 m^2 + \gamma_k^0 n + \delta_k^0 n^2)\big) + X(m,n).$$
Substituting this in (20), it can easily be seen that:
$$\limsup \sup_{\bar{S}_c^{k}} \big(I_2(\alpha, \beta, \gamma, \delta) - I_2(\alpha_2^0, \beta_2^0, \gamma_2^0, \delta_2^0)\big) < 0 \quad a.s., \quad k = 1, \ldots, p. \qquad (21)$$
Combining, we have:
$$\limsup \sup_{S_c^{\vartheta_2^0}} \big(I_2(\alpha, \beta, \gamma, \delta) - I_2(\alpha_2^0, \beta_2^0, \gamma_2^0, \delta_2^0)\big) < 0 \quad a.s.$$
Therefore, $\tilde{\alpha}_2 \xrightarrow{a.s.} \alpha_2^0$, $\tilde{\beta}_2 \xrightarrow{a.s.} \beta_2^0$, $\tilde{\gamma}_2 \xrightarrow{a.s.} \gamma_2^0$ and $\tilde{\delta}_2 \xrightarrow{a.s.} \delta_2^0$, by Lemma 7. Following the same argument as in Theorem 3, we can prove the consistency of the linear parameter estimators $\tilde{A}_2$ and $\tilde{B}_2$. Proceeding in a similar way, it can be shown that $\tilde{\theta}_k \xrightarrow{a.s.} \theta_k^0$ for $3 \leq k \leq p$.

Proof of Theorem 6: The ALSEs of the linear parameters $A$ and $B$ are given by:
$$\tilde{A} = \frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)\cos(\tilde{\alpha} m + \tilde{\beta} m^2 + \tilde{\gamma} n + \tilde{\delta} n^2), \quad \tilde{B} = \frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} y(m,n)\sin(\tilde{\alpha} m + \tilde{\beta} m^2 + \tilde{\gamma} n + \tilde{\delta} n^2).$$
For $k = p + 1$,
$$\tilde{A}_{p+1} = \frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} y_{p+1}(m,n)\cos(\tilde{\alpha}_{p+1} m + \tilde{\beta}_{p+1} m^2 + \tilde{\gamma}_{p+1} n + \tilde{\delta}_{p+1} n^2),$$
where $\tilde{\alpha}_{p+1}$, $\tilde{\beta}_{p+1}$, $\tilde{\gamma}_{p+1}$ and $\tilde{\delta}_{p+1}$ are obtained by maximising $I_{p+1}(\alpha, \beta, \gamma, \delta)$, and
$$y_{p+1}(m,n) = y(m,n) - \sum_{k=1}^{p}\big(\tilde{A}_k\cos(\tilde{\alpha}_k m + \tilde{\beta}_k m^2 + \tilde{\gamma}_k n + \tilde{\delta}_k n^2) + \tilde{B}_k\sin(\tilde{\alpha}_k m + \tilde{\beta}_k m^2 + \tilde{\gamma}_k n + \tilde{\delta}_k n^2)\big).$$
Using (19), we have:
$$\tilde{A}_{p+1} = \frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} X(m,n)\cos(\tilde{\alpha}_{p+1} m + \tilde{\beta}_{p+1} m^2 + \tilde{\gamma}_{p+1} n + \tilde{\delta}_{p+1} n^2) + o(1),$$
$$\tilde{B}_{p+1} = \frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} X(m,n)\sin(\tilde{\alpha}_{p+1} m + \tilde{\beta}_{p+1} m^2 + \tilde{\gamma}_{p+1} n + \tilde{\delta}_{p+1} n^2) + o(1).$$
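The sequential procedure underlying these proofs, namely estimate one component, subtract its fitted contribution, and repeat on the residual data, can be sketched as follows. For simplicity, the non-linear parameters are fixed at assumed true values instead of being obtained by maximising $I_k$; only the amplitude fitting and subtraction steps are shown.

```python
import numpy as np

def fit_and_remove(y, alpha, beta, gamma, delta):
    """One sequential step: ALSEs of the linear parameters at the given
    non-linear parameters, then subtraction of the fitted component."""
    M, N = y.shape
    m = np.arange(1, M + 1)[:, None]
    n = np.arange(1, N + 1)[None, :]
    ph = alpha * m + beta * m**2 + gamma * n + delta * n**2
    A = 2.0 * np.mean(y * np.cos(ph))
    B = 2.0 * np.mean(y * np.sin(ph))
    return (A, B), y - (A * np.cos(ph) + B * np.sin(ph))

# Two-component noiseless data with assumed parameter values.
M = N = 100
m = np.arange(1, M + 1)[:, None]
n = np.arange(1, N + 1)[None, :]
ph1 = 1.5 * m + 0.5 * m**2 + 2.5 * n + 0.75 * n**2
ph2 = 0.8 * m + 0.3 * m**2 + 1.2 * n + 0.45 * n**2
y = 2.0 * np.cos(ph1) + 1.0 * np.sin(ph1) + 1.0 * np.cos(ph2) + 0.5 * np.sin(ph2)

(A1, B1), y1 = fit_and_remove(y, 1.5, 0.5, 2.5, 0.75)   # first component
(A2, B2), y2 = fit_and_remove(y1, 0.8, 0.3, 1.2, 0.45)  # second component
```

After both steps the residual field $y_2$ carries almost no energy, which mirrors the reduction of $y_{p+1}(m,n)$ to noise plus $o(1)$ terms above.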


More information

Lecture 7 Introduction to Statistical Decision Theory

Lecture 7 Introduction to Statistical Decision Theory Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7

More information

Joint work with Nottingham colleagues Simon Preston and Michail Tsagris.

Joint work with Nottingham colleagues Simon Preston and Michail Tsagris. /pgf/stepx/.initial=1cm, /pgf/stepy/.initial=1cm, /pgf/step/.code=1/pgf/stepx/.expanded=- 10.95415pt,/pgf/stepy/.expanded=- 10.95415pt, /pgf/step/.value required /pgf/images/width/.estore in= /pgf/images/height/.estore

More information

A LOCALIZATION PROPERTY AT THE BOUNDARY FOR MONGE-AMPERE EQUATION

A LOCALIZATION PROPERTY AT THE BOUNDARY FOR MONGE-AMPERE EQUATION A LOCALIZATION PROPERTY AT THE BOUNDARY FOR MONGE-AMPERE EQUATION O. SAVIN. Introduction In this paper we study the geometry of the sections for solutions to the Monge- Ampere equation det D 2 u = f, u

More information

Progress In Electromagnetics Research, PIER 102, 31 48, 2010

Progress In Electromagnetics Research, PIER 102, 31 48, 2010 Progress In Electromagnetics Research, PIER 102, 31 48, 2010 ACCURATE PARAMETER ESTIMATION FOR WAVE EQUATION F. K. W. Chan, H. C. So, S.-C. Chan, W. H. Lau and C. F. Chan Department of Electronic Engineering

More information

SPECTRAL ANALYSIS OF NON-UNIFORMLY SAMPLED DATA: A NEW APPROACH VERSUS THE PERIODOGRAM

SPECTRAL ANALYSIS OF NON-UNIFORMLY SAMPLED DATA: A NEW APPROACH VERSUS THE PERIODOGRAM SPECTRAL ANALYSIS OF NON-UNIFORMLY SAMPLED DATA: A NEW APPROACH VERSUS THE PERIODOGRAM Hao He, Jian Li and Petre Stoica Dept. of Electrical and Computer Engineering, University of Florida, Gainesville,

More information

Stat 535 C - Statistical Computing & Monte Carlo Methods. Arnaud Doucet.

Stat 535 C - Statistical Computing & Monte Carlo Methods. Arnaud Doucet. Stat 535 C - Statistical Computing & Monte Carlo Methods Arnaud Doucet Email: arnaud@cs.ubc.ca 1 Suggested Projects: www.cs.ubc.ca/~arnaud/projects.html First assignement on the web this afternoon: capture/recapture.

More information

On Alternating Quantum Walks

On Alternating Quantum Walks On Alternating Quantum Walks Jenia Rousseva, Yevgeniy Kovchegov Abstract We study an inhomogeneous quantum walk on a line that evolves according to alternating coins, each a rotation matrix. For the quantum

More information

SF2943: TIME SERIES ANALYSIS COMMENTS ON SPECTRAL DENSITIES

SF2943: TIME SERIES ANALYSIS COMMENTS ON SPECTRAL DENSITIES SF2943: TIME SERIES ANALYSIS COMMENTS ON SPECTRAL DENSITIES This document is meant as a complement to Chapter 4 in the textbook, the aim being to get a basic understanding of spectral densities through

More information

CONSTRAINED AND NON-LINEAR LEAST SQUARES

CONSTRAINED AND NON-LINEAR LEAST SQUARES EE-602 STATISTICAL SIGNAL PROCESSING TERM PAPER REPORT ON CONSTRAINED AND NON-LINEAR LEAST SQUARES SUBMITTED BY: PRATEEK TAMRAKAR (Y804048) GUIDED BY: DR. RAJESH M. HEGDE SAURABH AGRAWAL (Y804058) DEPARTMENT

More information

TAIL ESTIMATION OF THE SPECTRAL DENSITY UNDER FIXED-DOMAIN ASYMPTOTICS. Wei-Ying Wu

TAIL ESTIMATION OF THE SPECTRAL DENSITY UNDER FIXED-DOMAIN ASYMPTOTICS. Wei-Ying Wu TAIL ESTIMATION OF THE SPECTRAL DENSITY UNDER FIXED-DOMAIN ASYMPTOTICS By Wei-Ying Wu A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

More information

F & B Approaches to a simple model

F & B Approaches to a simple model A6523 Signal Modeling, Statistical Inference and Data Mining in Astrophysics Spring 215 http://www.astro.cornell.edu/~cordes/a6523 Lecture 11 Applications: Model comparison Challenges in large-scale surveys

More information

Optimal Estimation of a Nonsmooth Functional

Optimal Estimation of a Nonsmooth Functional Optimal Estimation of a Nonsmooth Functional T. Tony Cai Department of Statistics The Wharton School University of Pennsylvania http://stat.wharton.upenn.edu/ tcai Joint work with Mark Low 1 Question Suppose

More information

Massachusetts Institute of Technology Department of Economics Time Series Lecture 6: Additional Results for VAR s

Massachusetts Institute of Technology Department of Economics Time Series Lecture 6: Additional Results for VAR s Massachusetts Institute of Technology Department of Economics Time Series 14.384 Guido Kuersteiner Lecture 6: Additional Results for VAR s 6.1. Confidence Intervals for Impulse Response Functions There

More information

Appendix A Conjugate Exponential family examples

Appendix A Conjugate Exponential family examples Appendix A Conjugate Exponential family examples The following two tables present information for a variety of exponential family distributions, and include entropies, KL divergences, and commonly required

More information

arxiv: v5 [math.na] 16 Nov 2017

arxiv: v5 [math.na] 16 Nov 2017 RANDOM PERTURBATION OF LOW RANK MATRICES: IMPROVING CLASSICAL BOUNDS arxiv:3.657v5 [math.na] 6 Nov 07 SEAN O ROURKE, VAN VU, AND KE WANG Abstract. Matrix perturbation inequalities, such as Weyl s theorem

More information

Multi-Factor Finite Differences

Multi-Factor Finite Differences February 17, 2017 Aims and outline Finite differences for more than one direction The θ-method, explicit, implicit, Crank-Nicolson Iterative solution of discretised equations Alternating directions implicit

More information

Parametric Techniques Lecture 3

Parametric Techniques Lecture 3 Parametric Techniques Lecture 3 Jason Corso SUNY at Buffalo 22 January 2009 J. Corso (SUNY at Buffalo) Parametric Techniques Lecture 3 22 January 2009 1 / 39 Introduction In Lecture 2, we learned how to

More information

Testing Restrictions and Comparing Models

Testing Restrictions and Comparing Models Econ. 513, Time Series Econometrics Fall 00 Chris Sims Testing Restrictions and Comparing Models 1. THE PROBLEM We consider here the problem of comparing two parametric models for the data X, defined by

More information

On prediction and density estimation Peter McCullagh University of Chicago December 2004

On prediction and density estimation Peter McCullagh University of Chicago December 2004 On prediction and density estimation Peter McCullagh University of Chicago December 2004 Summary Having observed the initial segment of a random sequence, subsequent values may be predicted by calculating

More information

Time Series. Anthony Davison. c

Time Series. Anthony Davison. c Series Anthony Davison c 2008 http://stat.epfl.ch Periodogram 76 Motivation............................................................ 77 Lutenizing hormone data..................................................

More information

Some fixed point results for dual contractions of rational type

Some fixed point results for dual contractions of rational type Mathematica Moravica Vol. 21, No. 1 (2017), 139 151 Some fixed point results for dual contractions of rational type Muhammad Nazam, Muhammad Arshad, Stojan Radenović, Duran Turkoglu and Vildan Öztürk Abstract.

More information

Bessel Functions Michael Taylor. Lecture Notes for Math 524

Bessel Functions Michael Taylor. Lecture Notes for Math 524 Bessel Functions Michael Taylor Lecture Notes for Math 54 Contents 1. Introduction. Conversion to first order systems 3. The Bessel functions J ν 4. The Bessel functions Y ν 5. Relations between J ν and

More information

Specification Test for Instrumental Variables Regression with Many Instruments

Specification Test for Instrumental Variables Regression with Many Instruments Specification Test for Instrumental Variables Regression with Many Instruments Yoonseok Lee and Ryo Okui April 009 Preliminary; comments are welcome Abstract This paper considers specification testing

More information

Approximate Dynamic Programming

Approximate Dynamic Programming Master MVA: Reinforcement Learning Lecture: 5 Approximate Dynamic Programming Lecturer: Alessandro Lazaric http://researchers.lille.inria.fr/ lazaric/webpage/teaching.html Objectives of the lecture 1.

More information

Economics Division University of Southampton Southampton SO17 1BJ, UK. Title Overlapping Sub-sampling and invariance to initial conditions

Economics Division University of Southampton Southampton SO17 1BJ, UK. Title Overlapping Sub-sampling and invariance to initial conditions Economics Division University of Southampton Southampton SO17 1BJ, UK Discussion Papers in Economics and Econometrics Title Overlapping Sub-sampling and invariance to initial conditions By Maria Kyriacou

More information

FAST AND ACCURATE DIRECTION-OF-ARRIVAL ESTIMATION FOR A SINGLE SOURCE

FAST AND ACCURATE DIRECTION-OF-ARRIVAL ESTIMATION FOR A SINGLE SOURCE Progress In Electromagnetics Research C, Vol. 6, 13 20, 2009 FAST AND ACCURATE DIRECTION-OF-ARRIVAL ESTIMATION FOR A SINGLE SOURCE Y. Wu School of Computer Science and Engineering Wuhan Institute of Technology

More information

Confidence Intervals for Low-dimensional Parameters with High-dimensional Data

Confidence Intervals for Low-dimensional Parameters with High-dimensional Data Confidence Intervals for Low-dimensional Parameters with High-dimensional Data Cun-Hui Zhang and Stephanie S. Zhang Rutgers University and Columbia University September 14, 2012 Outline Introduction Methodology

More information

Parametric Techniques

Parametric Techniques Parametric Techniques Jason J. Corso SUNY at Buffalo J. Corso (SUNY at Buffalo) Parametric Techniques 1 / 39 Introduction When covering Bayesian Decision Theory, we assumed the full probabilistic structure

More information

Residual Bootstrap for estimation in autoregressive processes

Residual Bootstrap for estimation in autoregressive processes Chapter 7 Residual Bootstrap for estimation in autoregressive processes In Chapter 6 we consider the asymptotic sampling properties of the several estimators including the least squares estimator of the

More information

Richard DiSalvo. Dr. Elmer. Mathematical Foundations of Economics. Fall/Spring,

Richard DiSalvo. Dr. Elmer. Mathematical Foundations of Economics. Fall/Spring, The Finite Dimensional Normed Linear Space Theorem Richard DiSalvo Dr. Elmer Mathematical Foundations of Economics Fall/Spring, 20-202 The claim that follows, which I have called the nite-dimensional normed

More information

Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach

Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Jae-Kwang Kim Department of Statistics, Iowa State University Outline 1 Introduction 2 Observed likelihood 3 Mean Score

More information

Phasor Young Won Lim 05/19/2015

Phasor Young Won Lim 05/19/2015 Phasor Copyright (c) 2009-2015 Young W. Lim. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version

More information