On Parameter Estimation of Two Dimensional Chirp Signal
Ananya Lahiri, Debasis Kundu & Amit Mitra

Abstract: Two dimensional (2-D) chirp signals occur in different areas of image processing. In this paper, we consider the least squares estimation procedure for the different unknown parameters of the 2-D chirp signal model in the presence of stationary noise. The proposed least squares estimators are strongly consistent, and we obtain their asymptotic distribution: the least squares estimators are asymptotically normally distributed. We perform some simulation experiments to observe the behavior of the least squares estimators, and the performances are quite satisfactory.

Key Words and Phrases: Chirp signals; least squares estimators; strong consistency; asymptotic distribution; linear processes.

Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, Pin 208016, India. Corresponding author.
1 Introduction

In this paper we consider the two dimensional (2-D) chirp signal model of the form

y(m,n) = A⁰ cos(α⁰m + β⁰m² + γ⁰n + δ⁰n²) + B⁰ sin(α⁰m + β⁰m² + γ⁰n + δ⁰n²) + X(m,n); m = 1, …, M; n = 1, …, N,   (1)

where X(m,n) is a stationary error, A⁰ and B⁰ are non-zero amplitudes, α⁰, γ⁰ are the frequencies and β⁰, δ⁰ are the frequency rates; the frequencies and frequency rates are distinct and lie strictly between 0 and π. The explicit assumptions on the errors X(m,n) will be provided later. The 2-D chirp signal model (1) is a natural generalization of the 1-D chirp signal model considered by several authors; see for example Kundu and Nandi (2008) and the references therein. The model (1) has several applications in signal processing, mainly in the area of image processing. The model (1) and its different variants have been used quite extensively in modeling gray-scale images, notably fingerprint images; see for example Friedlander and Francos (1996). Several others, for example Peleg and Porat (1991), Friedlander and Francos (1996), Francos and Friedlander (1998, 1999), Cao, Wang and Wang (2006), Zhang and Liu (2006), and Zhang, Wang and Cao (2008), considered similar models and discussed different estimation techniques. Surprisingly, however, the least squares estimators of the 2-D chirp signal have nowhere been considered, nor have their properties been discussed. The reason might be that, although the least squares estimators are expected to be very efficient, deriving their properties is not simple. The main aim of this paper is to consider the least squares estimators of the unknown parameters of the model (1) and to study their statistical properties. Due to the complexity of the model, it is expected that deriving the exact distribution of
the least squares estimators may not be possible, and therefore we mainly rely on asymptotic results. It may be mentioned that the properties of the 1-D chirp signal model have been discussed by Kundu and Nandi (2008), who established the strong consistency and asymptotic normality of the least squares estimators. Unfortunately, their results cannot be used directly to establish the asymptotic properties of the least squares estimators of the 2-D model (1). As with the 1-D chirp signal model, it can easily be seen that this model does not satisfy the sufficient conditions of Jennrich (1969) and Wu (1981) for the least squares estimators to be consistent and asymptotically normal, so those results cannot be applied directly either. In this paper we establish the strong consistency and asymptotic normality of the least squares estimators of the unknown parameters of the model (1). Interestingly, it is observed that the least squares estimators of α⁰ (γ⁰) and β⁰ (δ⁰) have the convergence rates O_p(M^{-3/2}N^{-1/2}) (O_p(N^{-3/2}M^{-1/2})) and O_p(M^{-5/2}N^{-1/2}) (O_p(N^{-5/2}M^{-1/2})) respectively. Moreover, the least squares estimators of A⁰ and B⁰ have the convergence rate O_p((MN)^{-1/2}). Therefore, the convergence rates of the least squares estimators of the linear parameters are much slower than those of the non-linear parameters. We perform some extensive simulation experiments to study the effectiveness of the least squares estimators for finite samples, and the performances are quite satisfactory. The rest of the paper is organized as follows. In Section 2 we provide the necessary assumptions, preliminary results and the methodology of the least squares estimation. Strong consistency and asymptotic normality are established in Section 3.
Extensive numerical results are presented in Section 4. Finally we conclude the paper in Section 5. Most of the proofs are provided in the Appendix.
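To make the setup concrete, the following short sketch simulates a noisy field from model (1). This is our illustration, not code from the paper; the function name `chirp_2d` and the default parameter values are purely illustrative.

```python
import numpy as np

def chirp_2d(M, N, A=5.0, B=5.0, alpha=1.0, beta=0.05, gamma=1.5, delta=0.5,
             sigma=0.1, rng=None):
    """Simulate y(m,n) = A cos(phase) + B sin(phase) + noise with
    phase = alpha*m + beta*m^2 + gamma*n + delta*n^2, indices starting at 1."""
    rng = np.random.default_rng() if rng is None else rng
    m = np.arange(1, M + 1)[None, :]      # columns: m = 1..M
    n = np.arange(1, N + 1)[:, None]      # rows:    n = 1..N
    phase = alpha * m + beta * m**2 + gamma * n + delta * n**2
    signal = A * np.cos(phase) + B * np.sin(phase)
    return signal + sigma * rng.standard_normal((N, M))

y = chirp_2d(50, 50, sigma=0.0)           # noise-free field for inspection
```

Note that the noise-free signal is bounded in modulus by (A² + B²)^{1/2}, which provides a quick sanity check on any simulated field.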
2 Model Assumptions, Preliminary Results and Methodology

2.1 Model Assumptions

Assumption 1: The error X(m,n) has the following form:

X(m,n) = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} a(j,k) ε(m−j, n−k),   (2)

with

Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} |a(j,k)| < ∞.   (3)

Here {ε(m,n)} is a double array sequence of independent and identically distributed (i.i.d.) random variables with zero mean, finite variance σ² and finite fourth moment.

Assumption 2: Denote the true parameter vector by θ⁰ = (A⁰, B⁰, α⁰, β⁰, γ⁰, δ⁰) and the parameter space by Θ = [−K, K] × [−K, K] × [0, π] × [0, π] × [0, π] × [0, π]. It is assumed that θ⁰ is an interior point of Θ. Moreover, α⁰, β⁰, γ⁰, δ⁰ are distinct.

2.2 Preliminary Results

We need the following results for further development.

Proposition 1. If (ω, θ) ∈ (0, π) × (0, π), then except for a countable number of points,

lim_{min{M,N}→∞} (1/(MN)) Σ_{n=1}^{N} Σ_{m=1}^{M} cos(ωm² + θn²) = lim_{min{M,N}→∞} (1/(MN)) Σ_{n=1}^{N} Σ_{m=1}^{M} sin(ωm² + θn²) = 0,   (4)

lim_{min{M,N}→∞} (1/(MN)) Σ_{n=1}^{N} Σ_{m=1}^{M} cos²(ωm² + θn²) = lim_{min{M,N}→∞} (1/(MN)) Σ_{n=1}^{N} Σ_{m=1}^{M} sin²(ωm² + θn²) = 1/2.   (5)

Proof: The proof can be obtained from Vinogradov (1954).
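Assumption 1 covers, in particular, finite moving-average error fields. The sketch below (a hypothetical helper of ours, not from the paper) generates such an X(m,n) from a finite coefficient array, for which the absolute summability in (3) holds trivially:

```python
import numpy as np

def linear_process_error(M, N, a, sigma=1.0, rng=None):
    """X(m,n) = sum_{j,k} a[j,k] * eps(m-j, n-k) for a finite coefficient
    array a, with eps i.i.d. mean 0, variance sigma^2.  Returns an N x M
    field; eps is padded so that every required lag is defined."""
    rng = np.random.default_rng() if rng is None else rng
    J, K = a.shape                        # lags in the m- and n-directions
    eps = sigma * rng.standard_normal((N + K - 1, M + J - 1))
    X = np.zeros((N, M))
    for j in range(J):
        for k in range(K):
            # shifted copy of eps playing the role of eps(m-j, n-k)
            X += a[j, k] * eps[k:k + N, j:j + M]
    return X
```

Under this construction the marginal variance of X(m,n) is σ² Σ a(j,k)², which gives a simple empirical check of the generator.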
Proposition 2. If (ω₁, ω₂, θ₁, θ₂) ∈ (0, π) × (0, π) × (0, π) × (0, π), then except for a countable number of points, and for s, t = 0, 1, 2,

lim (1/(M^{s+1} N^{t+1})) Σ_{n=1}^{N} Σ_{m=1}^{M} m^s n^t cos(ω₁m + ω₂m² + θ₁n + θ₂n²) = 0,
lim (1/(M^{s+1} N^{t+1})) Σ_{n=1}^{N} Σ_{m=1}^{M} m^s n^t sin(ω₁m + ω₂m² + θ₁n + θ₂n²) = 0,
lim (1/(M^{s+1} N^{t+1})) Σ_{n=1}^{N} Σ_{m=1}^{M} m^s n^t cos²(ω₁m + ω₂m² + θ₁n + θ₂n²) = 1/(2(s+1)(t+1)),
lim (1/(M^{s+1} N^{t+1})) Σ_{n=1}^{N} Σ_{m=1}^{M} m^s n^t sin²(ω₁m + ω₂m² + θ₁n + θ₂n²) = 1/(2(s+1)(t+1)),

where all limits are taken as min{M, N} → ∞.

Proof: They can be obtained from Proposition 1.

Lemma 1. If X(m,n) satisfies Assumptions 1 & 2, then as min{M, N} → ∞,

sup_{α,β,γ,δ} (1/(MN)) | Σ_{n=1}^{N} Σ_{m=1}^{M} X(m,n) e^{i(αm+βm²+γn+δn²)} | → 0 a.s.

Proof: See the Appendix.

Lemma 2. If X(m,n) satisfies Assumptions 1 & 2, then as min{M, N} → ∞, and for s, t = 0, 1, 2,

sup_{α,β,γ,δ} (1/(M^{s+1} N^{t+1})) | Σ_{n=1}^{N} Σ_{m=1}^{M} m^s n^t X(m,n) e^{i(αm+βm²+γn+δn²)} | → 0 a.s.

Proof: It follows along the same lines as Lemma 1.

2.3 Methodology

We will use the following notation: φ = (A, B)ᵀ,
and W(α, β, γ, δ) is the MN × 2 matrix whose ((n−1)M + m)-th row, for m = 1, …, M and n = 1, …, N, is

( cos(αm + βm² + γn + δn²),  sin(αm + βm² + γn + δn²) ),

so that its rows run through (m, n) = (1, 1), …, (M, 1), …, (1, N), …, (M, N). Y is the data vector defined as

Y = (y(1,1), …, y(M,1), …, y(1,N), …, y(M,N))ᵀ.   (6)

For θ = (A, B, α, β, γ, δ), we want to minimize

Q(θ) = (Y − W(α,β,γ,δ)φ)ᵀ (Y − W(α,β,γ,δ)φ)   (7)

with respect to θ. Using the separable regression technique of Richards (1961), it can be seen that for fixed (α, β, γ, δ) the minimizers of Q(θ) with respect to A and B are

φ̂(α,β,γ,δ) = (Â(α,β,γ,δ), B̂(α,β,γ,δ))ᵀ = (W(α,β,γ,δ)ᵀ W(α,β,γ,δ))⁻¹ W(α,β,γ,δ)ᵀ Y.

Therefore, the minimization of Q(θ) can be carried out by minimizing

R(α,β,γ,δ) = Yᵀ (I − P(α,β,γ,δ)) Y

with respect to (α, β, γ, δ), where P(α,β,γ,δ) = W(α,β,γ,δ)(W(α,β,γ,δ)ᵀ W(α,β,γ,δ))⁻¹ W(α,β,γ,δ)ᵀ is the projection matrix on the column space of W(α,β,γ,δ). If (α̂, β̂, γ̂, δ̂) minimizes R(α,β,γ,δ), the least squares estimates of A and B can be obtained as Â = Â(α̂, β̂, γ̂, δ̂) and B̂ = B̂(α̂, β̂, γ̂, δ̂).
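The separable regression step above can be sketched as follows (our Python illustration; the function names are ours). For fixed (α, β, γ, δ) the amplitudes are profiled out by a linear least squares fit, which leaves the four-variable criterion R:

```python
import numpy as np

def design_matrix(M, N, alpha, beta, gamma, delta):
    """MN x 2 matrix W whose rows are (cos q, sin q), ordered like Y:
    (m,n) = (1,1), ..., (M,1), ..., (1,N), ..., (M,N)."""
    m = np.arange(1, M + 1)
    n = np.arange(1, N + 1)
    phase = (alpha * m + beta * m**2)[None, :] + (gamma * n + delta * n**2)[:, None]
    q = phase.ravel()                     # n-major, m-minor ordering
    return np.column_stack([np.cos(q), np.sin(q)])

def profile_criterion(Y, M, N, alpha, beta, gamma, delta):
    """R(alpha,beta,gamma,delta) = Y'(I - P)Y, with (A, B) profiled out by
    least squares for the given non-linear parameters."""
    W = design_matrix(M, N, alpha, beta, gamma, delta)
    phi_hat, *_ = np.linalg.lstsq(W, Y, rcond=None)   # (A_hat, B_hat)
    resid = Y - W @ phi_hat
    return resid @ resid, phi_hat
```

Evaluated at the true non-linear parameters with noise-free data, R is zero and φ̂ recovers the true amplitudes, which is the content of the separable regression argument.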
We will write θ̂ = (Â, B̂, α̂, β̂, γ̂, δ̂). Note that by using the separable regression technique, the least squares estimators of the unknown parameters of the model (1) can be obtained by solving a four dimensional optimization problem rather than a six dimensional one.

3 Asymptotic Properties of the Least Squares Estimators

3.1 Consistency of the Least Squares Estimators

In this section we provide the consistency results for the estimators.

Theorem 1. If Assumptions 1 & 2 are satisfied, then θ̂, the least squares estimator of θ⁰, is a strongly consistent estimator of θ⁰.

Proof: See the Appendix.

The following result might be useful for error analysis of the model, and it may also be of independent interest.

Lemma 3. If Assumptions 1 & 2 are satisfied, then

M(α̂ − α⁰) → 0 a.s.,  M²(β̂ − β⁰) → 0 a.s.,  N(γ̂ − γ⁰) → 0 a.s.,  N²(δ̂ − δ⁰) → 0 a.s.

Proof: See the Appendix.

Using Lemma 3, we immediately obtain

Â = A⁰ + o(1) a.s.,  B̂ = B⁰ + o(1) a.s.,
α̂ = α⁰ + o(M⁻¹) a.s.,  β̂ = β⁰ + o(M⁻²) a.s.,  γ̂ = γ⁰ + o(N⁻¹) a.s.,  δ̂ = δ⁰ + o(N⁻²) a.s.
So we get

ŷ(m,n) = Â cos(α̂m + β̂m² + γ̂n + δ̂n²) + B̂ sin(α̂m + β̂m² + γ̂n + δ̂n²)   (8)
       = A⁰ cos(α⁰m + β⁰m² + γ⁰n + δ⁰n²) + B⁰ sin(α⁰m + β⁰m² + γ⁰n + δ⁰n²) + o(1) a.s.,

which gives

y(m,n) − ŷ(m,n) = X(m,n) + o(1) a.s.   (9)

Therefore, (9) can clearly be used for checking the error assumptions.

3.2 Asymptotic Normality of the Estimators

In this section we provide the asymptotic normal distribution of the estimators.

Theorem 2. If Assumptions 1 & 2 are satisfied, then

(θ̂ − θ⁰) D → N₆(0, 2cσ²Σ⁻¹),

where D is the 6 × 6 diagonal matrix

D = diag(M^{1/2}N^{1/2}, M^{1/2}N^{1/2}, M^{3/2}N^{1/2}, M^{5/2}N^{1/2}, M^{1/2}N^{3/2}, M^{1/2}N^{5/2}),

c = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} a(j,k)², and, writing t = A⁰² + B⁰², the matrix Σ obtained from the limits of Proposition 2 is

        ⎡  1       0      B⁰/2    B⁰/3    B⁰/2    B⁰/3 ⎤
        ⎢  0       1     −A⁰/2   −A⁰/3   −A⁰/2   −A⁰/3 ⎥
    Σ = ⎢ B⁰/2   −A⁰/2    t/3     t/4     t/4     t/6  ⎥   (10)
        ⎢ B⁰/3   −A⁰/3    t/4     t/5     t/6     t/9  ⎥
        ⎢ B⁰/2   −A⁰/2    t/4     t/6     t/3     t/4  ⎥
        ⎣ B⁰/3   −A⁰/3    t/6     t/9     t/4     t/5  ⎦

Proof: See the Appendix.
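The entries of Σ can be cross-checked numerically. The sketch below (our code, with illustrative parameter values) builds Σ from its closed form and compares it with the finite-sample counterpart 2 D⁻¹(GᵀG)D⁻¹, where G is the Jacobian of the regression function with respect to (A, B, α, β, γ, δ); by Proposition 2 the two agree as M, N grow.

```python
import numpy as np

def sigma_matrix(A, B):
    """Closed-form limit matrix of Theorem 2; order (A, B, alpha, beta, gamma, delta)."""
    t = A**2 + B**2
    return np.array([
        [1.0,   0.0,   B/2,  B/3,  B/2,  B/3],
        [0.0,   1.0,  -A/2, -A/3, -A/2, -A/3],
        [B/2,  -A/2,   t/3,  t/4,  t/4,  t/6],
        [B/3,  -A/3,   t/4,  t/5,  t/6,  t/9],
        [B/2,  -A/2,   t/4,  t/6,  t/3,  t/4],
        [B/3,  -A/3,   t/6,  t/9,  t/4,  t/5],
    ])

def empirical_sigma(M, N, A, B, alpha, beta, gamma, delta):
    """Finite-sample counterpart 2 D^-1 (G'G) D^-1 of the limit matrix."""
    m = np.arange(1, M + 1)[None, :]
    n = np.arange(1, N + 1)[:, None]
    q = alpha * m + beta * m**2 + gamma * n + delta * n**2
    d = -A * np.sin(q) + B * np.cos(q)        # derivative of mu w.r.t. the phase
    cols = [np.cos(q), np.sin(q), m * d, m**2 * d, n * d, n**2 * d]
    G = np.column_stack([c.ravel() for c in cols])
    scale = np.array([(M * N) ** 0.5, (M * N) ** 0.5,
                      (M**3 * N) ** 0.5, (M**5 * N) ** 0.5,
                      (M * N**3) ** 0.5, (M * N**5) ** 0.5])
    Dinv = np.diag(1.0 / scale)
    return 2.0 * Dinv @ (G.T @ G) @ Dinv
```

Since Σ is a limit of scaled Gram matrices of linearly independent functions, it is symmetric and positive definite, so Σ⁻¹ in Theorem 2 is well defined.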
4 Simulations and Data Analysis

4.1 Simulations

We perform some simulation experiments to see how the proposed least squares estimators behave for different sample sizes and for different error structures. We consider the model (1) with the following parameter values:

A⁰ = 5.0, B⁰ = 5.0, α⁰ = .0, β⁰ = 0.05, γ⁰ = .5, δ⁰ = 0.5.   (12)

We have taken two different error structures, namely

Error-I: X(m,n) = ε(m,n);   (13)
Error-II: X(m,n) = ε(m,n) + 0.5 ε(m−1,n) − ε(m,n−1);   (14)

where the ε(m,n)s are i.i.d. Gaussian random variables with mean 0 and variance σ². We have considered different sample sizes and different σ for our simulation experiments. We have used the random number generator RAN of Press et al. (1992) for generating the uniform random numbers. In each case the least squares estimators of the unknown parameters are obtained by using the downhill simplex algorithm, see for example Press et al. (1992), where the initial guesses are obtained by a four dimensional grid search. In each case we computed the least squares estimators over 1000 replications, and obtained the average estimates, mean squared errors and variances. We report the true parameter values (PARA), the average estimates (MEAN), the associated mean squared errors (MSE) and the variances (VAR). For comparison purposes we also report the asymptotic variances (ASYV) obtained using Theorem 2. All the results are reported in Tables 1-4. Some points are quite clear from these tables. First of all, it is observed that as the error variance decreases or the sample size increases, the variances and the mean
Table 1: The MEAN, MSE, VAR and ASYV of the least squares estimators when σ = 0.05 and for Error-I, for M = N = 50 and M = N = 75.

Table 2: The MEAN, MSE, VAR and ASYV of the least squares estimators when σ = 0.5 and for Error-I, for M = N = 50 and M = N = 75.

Table 3: The MEAN, MSE, VAR and ASYV of the least squares estimators when σ = 0.05 and for Error-II, for M = N = 50 and M = N = 75.
Table 4: The MEAN, MSE, VAR and ASYV of the least squares estimators when σ = 0.5 and for Error-II, for M = N = 50 and M = N = 75.

squared errors decrease in all the cases considered, as expected. The simulation results show that the least squares estimates are quite close to the true parameter values. For both error structures it is observed that the mean squared errors of the least squares estimators match quite well with the corresponding asymptotic variances. This indicates that the asymptotic results work quite well even for moderate sample sizes.

4.2 Data Analysis

For illustrative purposes, mainly to show how the proposed method can be implemented in practice, we have analyzed one simulated data set obtained from the model (1). We have used the following parameter values: A⁰ = 5.0, B⁰ = .0, α⁰ = .55, β⁰ = 0.05, γ⁰ = .5, δ⁰ = . The errors X(m,n) are as follows:

X(m,n) = ε(m,n) + 0.5 ε(m−1,n) − ε(m,n−1) + 0.ε(m−1,n−1),

where the ε's are assumed to be i.i.d. Gaussian random variables with mean 0 and variance σ = .5. We have generated the data y(m,n) for M = N = 00, and the data are plotted in Figure 1, the 2-D image plot of the simulated noise-corrupted y(m,n), whose gray level at (m,n) is proportional to the value of y(m,n).
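The estimation pipeline used in the simulations and in the data analysis, grid-search initialization followed by downhill simplex refinement of the profile criterion R, can be sketched as follows. This is our illustration: scipy's Nelder-Mead routine stands in for the downhill simplex of Press et al., the parameter values are illustrative, and the four dimensional grid search is abbreviated to a starting point near the truth to keep the sketch short.

```python
import numpy as np
from scipy.optimize import minimize

def make_data(M, N, A, B, alpha, beta, gamma, delta, sigma, rng):
    m = np.arange(1, M + 1)[None, :]
    n = np.arange(1, N + 1)[:, None]
    q = alpha * m + beta * m**2 + gamma * n + delta * n**2
    return (A * np.cos(q) + B * np.sin(q)
            + sigma * rng.standard_normal((N, M))).ravel()

def R(nu, Y, M, N):
    """Profile criterion R(alpha, beta, gamma, delta) = Y'(I - P)Y of Section 2.3."""
    alpha, beta, gamma, delta = nu
    m = np.arange(1, M + 1)[None, :]
    n = np.arange(1, N + 1)[:, None]
    q = (alpha * m + beta * m**2 + gamma * n + delta * n**2).ravel()
    W = np.column_stack([np.cos(q), np.sin(q)])
    phi, *_ = np.linalg.lstsq(W, Y, rcond=None)   # amplitudes profiled out
    r = Y - W @ phi
    return r @ r

rng = np.random.default_rng(1)
M = N = 30
Y = make_data(M, N, 5.0, 5.0, 1.0, 0.05, 1.5, 0.5, sigma=0.1, rng=rng)

# In practice a 4-D grid search supplies the starting value; here we start
# within the main lobe of the criterion, with simplex steps sized per coordinate.
start = np.array([1.01, 0.0505, 1.49, 0.4995])
steps = np.diag([5e-3, 3e-4, 5e-3, 3e-4])
fit = minimize(R, start, args=(Y, M, N), method="Nelder-Mead",
               options={"initial_simplex": np.vstack([start, start + steps]),
                        "xatol": 1e-9, "fatol": 1e-12, "maxiter": 4000})
```

The criterion is extremely narrow in the frequency and frequency-rate directions (basin widths of order 1/M and 1/M²), which is why the grid-search initialization matters in practice; the simplex steps above are chosen to respect those scales.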
The problem is to extract the regular texture, see Figure 2, from the contaminated one.

Figure 1: Noisy signal

We use the least squares technique to estimate the unknown parameters, which are as follows: Â = , B̂ = , α̂ = .5498, β̂ = , γ̂ = .5085, δ̂ = . The estimated values ŷ(m,n), as in (8), are plotted in Figure 3. It is clear that the original and the estimated plots match quite well.

5 Conclusion

In this paper we consider the 2-D chirp signal model and study the properties of the least squares estimators of the unknown parameters. It has been proved that the least squares estimators are strongly consistent and asymptotically normally distributed. It is observed that the least squares estimators can be obtained by solving a four-dimensional optimization problem. Our simulation results suggest that the asymptotic properties
of the least squares estimators can be used quite effectively even for moderate sample sizes.

Figure 2: True signal

There are several open issues and generalizations of interest for future work. For example, since the least squares estimators are obtained by solving a four dimensional optimization problem, it will be interesting to develop a numerically efficient algorithm for this optimization problem. One possible generalization is a multiple chirp signal model; of course, finding the least squares estimators and deriving their properties there will be a challenging problem.

6 Acknowledgement

Part of the work of the first author is financially supported by the Council of Scientific and Industrial Research (CSIR). The Department of Science and Technology, Government of India, has partially supported the work of the second and third authors.
Figure 3: Estimated signal

Appendix

Proof of Lemma 1: We will use the following two results.

Result 1: For fixed j, k = 1, …, N and fixed l, t = 1, …, N, the cross-product terms in the expansion of the square below have zero expectation, so

E( Σ_{n} Σ_{m} ε(m,n) ε(m−j,n−k) ε(m+l,n+t) ε(m−j+l,n−k+t) )² = Σ_{n} Σ_{m} E( ε(m,n) ε(m−j,n−k) ε(m+l,n+t) ε(m−j+l,n−k+t) )² = O(MN),

and hence, by Hölder's inequality,

E | Σ_{n} Σ_{m} ε(m,n) ε(m−j,n−k) ε(m+l,n+t) ε(m−j+l,n−k+t) | = O((MN)^{1/2}).
Result 2: Write q(θ,m,n) = αm + βm² + γn + δn². For a given integer t, by Hölder's inequality,

E sup_θ | Σ_{n=1}^{N} Σ_{m=1}^{M} ε(m,n) ε(m−j,n−k) e^{itq(θ,m,n)} | ≤ ( E sup_θ | Σ_{n=1}^{N} Σ_{m=1}^{M} ε(m,n) ε(m−j,n−k) e^{itq(θ,m,n)} |² )^{1/2}.

Expanding the square, the supremum is bounded by the diagonal term Σ_{n} Σ_{m} ε(m,n)² ε(m−j,n−k)², whose expectation is O(MN), plus the moduli of the shifted cross-product sums such as Σ ε(m,n)ε(m,n+1)ε(m−j,n−k)ε(m−j,n−k+1), each of which has expectation O((MN)^{1/2}) by Result 1. Since there are O(MN) such shifted sums, the expectation of the square is O(MN + MN·(MN)^{1/2}) = O((MN)^{3/2}), and hence

E sup_θ | Σ_{n=1}^{N} Σ_{m=1}^{M} ε(m,n) ε(m−j,n−k) e^{itq(θ,m,n)} | = O((MN)^{3/4}).
Let m′ = m − j and n′ = n − k. Then

E sup_θ | Σ_{n=1}^{N} Σ_{m=1}^{M} X(m,n) e^{iq(θ,m,n)} | = E sup_θ | Σ_{k=−∞}^{∞} Σ_{j=−∞}^{∞} a(j,k) Σ_{n} Σ_{m} ε(m′,n′) e^{iq(θ,m,n)} | ≤ Σ_{k=−∞}^{∞} Σ_{j=−∞}^{∞} |a(j,k)| E sup_θ | Σ_{n} Σ_{m} ε(m′,n′) e^{iq(θ,m,n)} |.

Applying the same squaring argument as in Result 2, where now the diagonal term contributes O(MN) and each of the O(MN) shifted cross-product sums is O((MN)^{3/4}) in expectation by Result 2, we get

E sup_θ | Σ_{n} Σ_{m} ε(m′,n′) e^{iq(θ,m,n)} | = O( (MN + MN·(MN)^{3/4})^{1/2} ) = O((MN)^{7/8}).

Since Σ Σ |a(j,k)| < ∞, it follows that

E sup_{α,β,γ,δ} (1/(M⁹N⁹)) | Σ_{n=1}^{N⁹} Σ_{m=1}^{M⁹} X(m,n) e^{iq(θ,m,n)} | = O( (M⁹N⁹)^{7/8} / (M⁹N⁹) ) = O((MN)^{−9/8}).   (15)
Take

Z(M,N) = sup_{α,β,γ,δ} (1/(M⁹N⁹)) | Σ_{n=1}^{N⁹} Σ_{m=1}^{M⁹} X(m,n) e^{iq(θ,m,n)} |.

Then for any ε > 0, by (15),

Σ_{N=1}^{∞} Σ_{M=1}^{∞} P(Z(M,N) > ε) ≤ Σ_{N=1}^{∞} Σ_{M=1}^{∞} E Z(M,N)/ε = Σ_{N=1}^{∞} Σ_{M=1}^{∞} O((MN)^{−9/8})/ε < ∞.

Therefore, by the Borel-Cantelli lemma,

sup_{α,β,γ,δ} (1/(M⁹N⁹)) | Σ_{n=1}^{N⁹} Σ_{m=1}^{M⁹} X(m,n) e^{iq(θ,m,n)} | → 0 a.s.

It remains to fill the gaps between successive ninth powers. Denote S_JK = {(J,K) : M⁹ < J ≤ (M+1)⁹, N⁹ < K ≤ (N+1)⁹} and define

U(M,N) = sup_{α,β,γ,δ} sup_{S_JK} (1/(JK)) | Σ_{n=1}^{K} Σ_{m=1}^{J} X(m,n) e^{iq(θ,m,n)} |.

Splitting Σ_{n≤K} Σ_{m≤J} into Σ_{n≤N⁹} Σ_{m≤M⁹} plus the boundary sums, and bounding 1/(JK) by 1/(M⁹N⁹), one obtains U(M,N) ≤ Z(M,N) + V(M,N) + W(M,N), where V(M,N) collects the suprema of the normalized boundary sums over M⁹ < m ≤ (M+1)⁹ or N⁹ < n ≤ (N+1)⁹, and

W(M,N) = ( 1/(M⁹N⁹) − 1/((M+1)⁹(N+1)⁹) ) sup_{α,β,γ,δ} | Σ_{n=1}^{(N+1)⁹} Σ_{m=1}^{(M+1)⁹} X(m,n) e^{iq(θ,m,n)} |.
Since, as in the argument above,

E sup_{α,β,γ,δ} | Σ_{n=n₀+1}^{N} Σ_{m=m₀+1}^{M} X(m,n) e^{iq(θ,m,n)} | = O( ((M−m₀)(N−n₀))^{7/8} ),   (16)

and the boundary strips between successive ninth powers have size O(M⁸N⁹) or O(M⁹N⁸), we have

Σ_{N=1}^{∞} Σ_{M=1}^{∞} P(V(M,N) > ε) ≤ Σ_{N} Σ_{M} E V(M,N)/ε ≤ Σ_{N} Σ_{M} O( ((M⁹N⁸)^{7/8} + (M⁸N⁹)^{7/8}) / (M⁹N⁹) )/ε ≤ Σ_{N} Σ_{M} O((MN)^{−9/8})/ε < ∞.   (17)

Similarly,

P(W(M,N) > ε) ≤ E W(M,N)/ε ≤ O( (1/(M⁹N⁹)) ((M+1)⁹(N+1)⁹)^{7/8} )/ε,

so that Σ_{N} Σ_{M} P(W(M,N) > ε) ≤ Σ_{N} Σ_{M} O((MN)^{−9/8})/ε < ∞. Now by the Borel-Cantelli lemma, U(M,N) → 0 a.s., which proves Lemma 1.

Proof of Lemma 2: It can be obtained along the same lines.

To prove Theorem 1 we need the following lemma.

Lemma 4. Let θ̂ be the least squares estimator of θ⁰, and consider the set S_c = {θ ∈ Θ : |θ − θ⁰| ≥ c}. If for any c > 0,

lim inf inf_{θ ∈ S_c} (1/(MN)) (Q(θ) − Q(θ⁰)) > 0 a.s.,

then θ̂ → θ⁰ a.s. Here the function Q(θ) is as defined in (7).

Proof of Lemma 4: The proof can be obtained along the same lines as in Wu (1981), so it is omitted.

Proof of Theorem 1: To prove Theorem 1, it is enough to show that

lim inf inf_{θ ∈ S_c} (1/(MN)) (Q(θ) − Q(θ⁰)) > 0 a.s.
Observe that

(1/(MN)) (Q(θ) − Q(θ⁰)) = f(θ) + g(θ),

where

f(θ) = (1/(MN)) Σ_{n=1}^{N} Σ_{m=1}^{M} [ A cos(αm + βm² + γn + δn²) + B sin(αm + βm² + γn + δn²) − A⁰ cos(α⁰m + β⁰m² + γ⁰n + δ⁰n²) − B⁰ sin(α⁰m + β⁰m² + γ⁰n + δ⁰n²) ]²,

g(θ) = (2/(MN)) Σ_{n=1}^{N} Σ_{m=1}^{M} X(m,n) [ A⁰ cos(α⁰m + β⁰m² + γ⁰n + δ⁰n²) + B⁰ sin(α⁰m + β⁰m² + γ⁰n + δ⁰n²) − A cos(αm + βm² + γn + δn²) − B sin(αm + βm² + γn + δn²) ].

Now g(θ) goes to zero a.s., uniformly in θ, because of Lemma 1. We also observe that S_c ⊆ S_c^A ∪ S_c^B ∪ S_c^α ∪ S_c^β ∪ S_c^γ ∪ S_c^δ, where S_c^A = {θ ∈ Θ : |A − A⁰| ≥ c}, and the other sets are defined similarly. It can be shown along the same lines as in Kundu (1997) that

lim inf inf_{S_c^ξ} f(θ) > 0 a.s. for ξ = A, B, α, β, γ, δ,

and hence the result is proved.

Proof of Lemma 3: Let Q′(θ) denote the 1 × 6 first derivative vector and Q″(θ) the 6 × 6 second derivative matrix of Q(θ). A multivariate Taylor series expansion of Q′(θ̂) around θ⁰ gives

Q′(θ̂) − Q′(θ⁰) = (θ̂ − θ⁰) Q″(θ̄),   (18)

where θ̄ is a point on the line joining θ̂ and θ⁰. Since Q′(θ̂) = 0, for the diagonal matrix D defined in Theorem 2 we obtain

−Q′(θ⁰) D⁻¹ = [(θ̂ − θ⁰) D] [D⁻¹ Q″(θ̄) D⁻¹],   (19)

which gives

(θ̂ − θ⁰) D = −[Q′(θ⁰) D⁻¹] [D⁻¹ Q″(θ̄) D⁻¹]⁻¹.   (20)

Dividing both sides by (MN)^{1/2}, the expression becomes

(θ̂ − θ⁰) D (MN)^{−1/2} = −[(MN)^{−1/2} Q′(θ⁰) D⁻¹] [D⁻¹ Q″(θ̄) D⁻¹]⁻¹.   (21)

Since θ̂ → θ⁰ a.s., θ̄ → θ⁰ a.s., and therefore

[D⁻¹ Q″(θ̄) D⁻¹] − [D⁻¹ Q″(θ⁰) D⁻¹] → 0 a.s.

Moreover, using Lemma 2 and Proposition 2,

(MN)^{−1/2} Q′(θ⁰) D⁻¹ → 0 a.s.   (22)

Hence

(θ̂ − θ⁰) D (MN)^{−1/2} → 0 a.s.,   (23)

which gives M(α̂ − α⁰) → 0 a.s., M²(β̂ − β⁰) → 0 a.s., N(γ̂ − γ⁰) → 0 a.s., N²(δ̂ − δ⁰) → 0 a.s.
20 Dividing by the expression becomes (ˆθ θ 0 )( D) = [ Q (θ 0 )D][DQ ( θ)d] () Since, θ θ 0 a.s., θ θ 0 a.s.. Therefore, [ DQ ( θ)d ] [ DQ (θ 0 )D ] Moreover, using Lemma and Proposition, So, Hence using (3) we get Q (θ 0 )D 0 a.s. () ( θ θ 0 )( D) 0 a.s. (3) M(ˆα α 0 ) 0 a.s., M (ˆβ β 0 ) 0 a.s., N(ˆγ γ 0 ) 0 a.s., N (ˆδ δ 0 ) 0 a.s. Proof of Theorem : Note that [ Q (θ 0 )D ] T = N M n= N M n= N N n= N N n= N M n= m= cos(q(θ0,m,n))x(m,n) N M n= m= sin(q(θ0,m,n))x(m,n) M m= m[a0 sin(q(θ 0,m,n)) B 0 cos(q(θ 0,m,n))] X(m,n) M m= m [A 0 sin(q(θ 0,m,n)) B 0 cos(q(θ 0,m,n))] X(m,n) M m= n[a0 sin(q(θ 0,m,n)) B 0 cos(q(θ 0,m,n))] X(m,n) M m= n [A 0 sin(q(θ 0,m,n)) B 0 cos(q(θ 0,m,n))] X(m,n). Now using Central Limit Theorem of linear processes, see Fuller (996,), page 39, it follows that [ Q (θ 0 )D ] T N6 (0, cσ Σ). (4) Since [ DQ (θ 0 )D ] Σ we immediately get ( θ θ 0 )D N 6 (0, cσ Σ ). 0
References

[1] Cao, F., Wang, S., Wang, F. (2006), Cross-spectral method based on 2-D cross polynomial transform for 2-D chirp signal parameter estimation, ICSP 2006 Proceedings.

[2] Fuller, W. A. (1996), Introduction to Statistical Time Series, Second Edition, John Wiley and Sons, New York.

[3] Francos, J.M., Friedlander, B. (1998), Two-dimensional polynomial phase signals: parameter estimation and bounds, Multidimensional Systems and Signal Processing, 9.

[4] Francos, J.M., Friedlander, B. (1999), Parameter estimation of 2-D random amplitude polynomial-phase signals, IEEE Transactions on Signal Processing, 47.

[5] Friedlander, B., Francos, J.M. (1996), An estimation algorithm for 2-D polynomial phase signals, IEEE Transactions on Image Processing, 5.

[6] Jennrich, R.I. (1969), Asymptotic properties of non-linear least squares estimators, The Annals of Mathematical Statistics, 40.

[7] Kundu, D. (1997), Asymptotic properties of the least squares estimators of sinusoidal signals, Statistics, 30.

[8] Kundu, D. and Nandi, S. (2008), Parameter estimation of chirp signals in presence of stationary noise, Statistica Sinica, 18.

[9] Nandi, S. and Kundu, D. (2004), Asymptotic properties of the least squares estimators of the parameters of the chirp signals, Annals of the Institute of Statistical Mathematics, 56.
[10] Peleg, S., Porat, B. (1991), Estimation and classification of polynomial phase signals, IEEE Transactions on Information Theory, 37.

[11] Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P. (1996), Numerical Recipes in Fortran 90, Second Edition, Cambridge University Press, Cambridge.

[12] Richards, F.S.G. (1961), A method of maximum likelihood estimation, Journal of the Royal Statistical Society, Ser. B.

[13] Vinogradov, I.M. (1954), The Method of Trigonometrical Sums in the Theory of Numbers, Interscience; translated from the Russian, revised and annotated by K. F. Roth and Anne Davenport; reprint of the 1954 translation, Dover Publications, Inc., Mineola, NY, 2004.

[14] Wu, C.F. (1981), Asymptotic theory of non-linear least-squares estimators, Annals of Statistics, 9.

[15] Zhang, H., Liu, Q. (2006), Estimation of instantaneous frequency rate for multicomponent polynomial phase signals, ICSP 2006 Proceedings.

[16] Zhang, K., Wang, S., Cao, F. (2008), Product cubic phase function algorithm for estimating the instantaneous frequency rate of multicomponent two-dimensional chirp signals, 2008 Congress on Image and Signal Processing.
More informationDiscriminating Between the Bivariate Generalized Exponential and Bivariate Weibull Distributions
Discriminating Between the Bivariate Generalized Exponential and Bivariate Weibull Distributions Arabin Kumar Dey & Debasis Kundu Abstract Recently Kundu and Gupta ( Bivariate generalized exponential distribution,
More informationCovariance function estimation in Gaussian process regression
Covariance function estimation in Gaussian process regression François Bachoc Department of Statistics and Operations Research, University of Vienna WU Research Seminar - May 2015 François Bachoc Gaussian
More informationBayesian Inference and Life Testing Plan for the Weibull Distribution in Presence of Progressive Censoring
Bayesian Inference and Life Testing Plan for the Weibull Distribution in Presence of Progressive Censoring Debasis KUNDU Department of Mathematics and Statistics Indian Institute of Technology Kanpur Pin
More informationQuadrature Filters for Maneuvering Target Tracking
Quadrature Filters for Maneuvering Target Tracking Abhinoy Kumar Singh Department of Electrical Engineering, Indian Institute of Technology Patna, Bihar 813, India, Email: abhinoy@iitpacin Shovan Bhaumik
More informationMMSE Dimension. snr. 1 We use the following asymptotic notation: f(x) = O (g(x)) if and only
MMSE Dimension Yihong Wu Department of Electrical Engineering Princeton University Princeton, NJ 08544, USA Email: yihongwu@princeton.edu Sergio Verdú Department of Electrical Engineering Princeton University
More informationECE531 Lecture 10b: Maximum Likelihood Estimation
ECE531 Lecture 10b: Maximum Likelihood Estimation D. Richard Brown III Worcester Polytechnic Institute 05-Apr-2011 Worcester Polytechnic Institute D. Richard Brown III 05-Apr-2011 1 / 23 Introduction So
More informationComplete Moment Convergence for Weighted Sums of Negatively Orthant Dependent Random Variables
Filomat 31:5 217, 1195 126 DOI 1.2298/FIL175195W Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Complete Moment Convergence for
More informationBahadur representations for bootstrap quantiles 1
Bahadur representations for bootstrap quantiles 1 Yijun Zuo Department of Statistics and Probability, Michigan State University East Lansing, MI 48824, USA zuo@msu.edu 1 Research partially supported by
More informationSubmitted to the Brazilian Journal of Probability and Statistics
Submitted to the Brazilian Journal of Probability and Statistics Multivariate normal approximation of the maximum likelihood estimator via the delta method Andreas Anastasiou a and Robert E. Gaunt b a
More informationESTIMATION OF NONLINEAR BERKSON-TYPE MEASUREMENT ERROR MODELS
Statistica Sinica 13(2003), 1201-1210 ESTIMATION OF NONLINEAR BERKSON-TYPE MEASUREMENT ERROR MODELS Liqun Wang University of Manitoba Abstract: This paper studies a minimum distance moment estimator for
More information10. Linear Models and Maximum Likelihood Estimation
10. Linear Models and Maximum Likelihood Estimation ECE 830, Spring 2017 Rebecca Willett 1 / 34 Primary Goal General problem statement: We observe y i iid pθ, θ Θ and the goal is to determine the θ that
More informationThe Tuning of Robust Controllers for Stable Systems in the CD-Algebra: The Case of Sinusoidal and Polynomial Signals
The Tuning of Robust Controllers for Stable Systems in the CD-Algebra: The Case of Sinusoidal and Polynomial Signals Timo Hämäläinen Seppo Pohjolainen Tampere University of Technology Department of Mathematics
More informationNumerical Analysis: Solving Systems of Linear Equations
Numerical Analysis: Solving Systems of Linear Equations Mirko Navara http://cmpfelkcvutcz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office
More informationSTA205 Probability: Week 8 R. Wolpert
INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and
More informationWittmann Type Strong Laws of Large Numbers for Blockwise m-negatively Associated Random Variables
Journal of Mathematical Research with Applications Mar., 206, Vol. 36, No. 2, pp. 239 246 DOI:0.3770/j.issn:2095-265.206.02.03 Http://jmre.dlut.edu.cn Wittmann Type Strong Laws of Large Numbers for Blockwise
More informationA PRACTICAL WAY FOR ESTIMATING TAIL DEPENDENCE FUNCTIONS
Statistica Sinica 20 2010, 365-378 A PRACTICAL WAY FOR ESTIMATING TAIL DEPENDENCE FUNCTIONS Liang Peng Georgia Institute of Technology Abstract: Estimating tail dependence functions is important for applications
More informationAsymptotic Distribution of the Largest Eigenvalue via Geometric Representations of High-Dimension, Low-Sample-Size Data
Sri Lankan Journal of Applied Statistics (Special Issue) Modern Statistical Methodologies in the Cutting Edge of Science Asymptotic Distribution of the Largest Eigenvalue via Geometric Representations
More informationGENERATION OF COLORED NOISE
International Journal of Modern Physics C, Vol. 12, No. 6 (2001) 851 855 c World Scientific Publishing Company GENERATION OF COLORED NOISE LORENZ BARTOSCH Institut für Theoretische Physik, Johann Wolfgang
More informationLecture 8: Information Theory and Statistics
Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 23, 2015 1 / 50 I-Hsiang
More informationCharacter sums with Beatty sequences on Burgess-type intervals
Character sums with Beatty sequences on Burgess-type intervals William D. Banks Department of Mathematics University of Missouri Columbia, MO 65211 USA bbanks@math.missouri.edu Igor E. Shparlinski Department
More informationUnit Roots in White Noise?!
Unit Roots in White Noise?! A.Onatski and H. Uhlig September 26, 2008 Abstract We show that the empirical distribution of the roots of the vector auto-regression of order n fitted to T observations of
More informationAsymptotic inference for a nonstationary double ar(1) model
Asymptotic inference for a nonstationary double ar() model By SHIQING LING and DONG LI Department of Mathematics, Hong Kong University of Science and Technology, Hong Kong maling@ust.hk malidong@ust.hk
More informationEstimation of parametric functions in Downton s bivariate exponential distribution
Estimation of parametric functions in Downton s bivariate exponential distribution George Iliopoulos Department of Mathematics University of the Aegean 83200 Karlovasi, Samos, Greece e-mail: geh@aegean.gr
More informationModeling and testing long memory in random fields
Modeling and testing long memory in random fields Frédéric Lavancier lavancier@math.univ-lille1.fr Université Lille 1 LS-CREST Paris 24 janvier 6 1 Introduction Long memory random fields Motivations Previous
More informationFAST AND ACCURATE DIRECTION-OF-ARRIVAL ESTIMATION FOR A SINGLE SOURCE
Progress In Electromagnetics Research C, Vol. 6, 13 20, 2009 FAST AND ACCURATE DIRECTION-OF-ARRIVAL ESTIMATION FOR A SINGLE SOURCE Y. Wu School of Computer Science and Engineering Wuhan Institute of Technology
More informationExact Inference for the Two-Parameter Exponential Distribution Under Type-II Hybrid Censoring
Exact Inference for the Two-Parameter Exponential Distribution Under Type-II Hybrid Censoring A. Ganguly, S. Mitra, D. Samanta, D. Kundu,2 Abstract Epstein [9] introduced the Type-I hybrid censoring scheme
More informationMaximal inequalities and a law of the. iterated logarithm for negatively. associated random fields
Maximal inequalities and a law of the arxiv:math/0610511v1 [math.pr] 17 Oct 2006 iterated logarithm for negatively associated random fields Li-Xin ZHANG Department of Mathematics, Zhejiang University,
More informationNotes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed
18.466 Notes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed 1. MLEs in exponential families Let f(x,θ) for x X and θ Θ be a likelihood function, that is, for present purposes,
More informationEstimation for Mean and Standard Deviation of Normal Distribution under Type II Censoring
Communications for Statistical Applications and Methods 2014, Vol. 21, No. 6, 529 538 DOI: http://dx.doi.org/10.5351/csam.2014.21.6.529 Print ISSN 2287-7843 / Online ISSN 2383-4757 Estimation for Mean
More informationAsymptotically sufficient statistics in nonparametric regression experiments with correlated noise
Vol. 0 0000 1 0 Asymptotically sufficient statistics in nonparametric regression experiments with correlated noise Andrew V Carter University of California, Santa Barbara Santa Barbara, CA 93106-3110 e-mail:
More informationFOURIER TRANSFORMS OF STATIONARY PROCESSES 1. k=1
FOURIER TRANSFORMS OF STATIONARY PROCESSES WEI BIAO WU September 8, 003 Abstract. We consider the asymptotic behavior of Fourier transforms of stationary and ergodic sequences. Under sufficiently mild
More information392D: Coding for the AWGN Channel Wednesday, January 24, 2007 Stanford, Winter 2007 Handout #6. Problem Set 2 Solutions
392D: Coding for the AWGN Channel Wednesday, January 24, 2007 Stanford, Winter 2007 Handout #6 Problem Set 2 Solutions Problem 2.1 (Cartesian-product constellations) (a) Show that if A is a K-fold Cartesian
More informationActa Mathematica Academiae Paedagogicae Nyíregyháziensis 32 (2016), ISSN
Acta Mathematica Academiae Paedagogicae Nyíregyháziensis 3 06, 3 3 www.emis.de/journals ISSN 786-009 A NOTE OF THREE PRIME REPRESENTATION PROBLEMS SHICHUN YANG AND ALAIN TOGBÉ Abstract. In this note, we
More informationFinite Population Sampling and Inference
Finite Population Sampling and Inference A Prediction Approach RICHARD VALLIANT ALAN H. DORFMAN RICHARD M. ROYALL A Wiley-Interscience Publication JOHN WILEY & SONS, INC. New York Chichester Weinheim Brisbane
More informationMinimax design criterion for fractional factorial designs
Ann Inst Stat Math 205 67:673 685 DOI 0.007/s0463-04-0470-0 Minimax design criterion for fractional factorial designs Yue Yin Julie Zhou Received: 2 November 203 / Revised: 5 March 204 / Published online:
More informationOn the convergence of uncertain random sequences
Fuzzy Optim Decis Making (217) 16:25 22 DOI 1.17/s17-16-9242-z On the convergence of uncertain random sequences H. Ahmadzade 1 Y. Sheng 2 M. Esfahani 3 Published online: 4 June 216 Springer Science+Business
More informationGaussian Estimation under Attack Uncertainty
Gaussian Estimation under Attack Uncertainty Tara Javidi Yonatan Kaspi Himanshu Tyagi Abstract We consider the estimation of a standard Gaussian random variable under an observation attack where an adversary
More informationSpatial autoregression model:strong consistency
Statistics & Probability Letters 65 (2003 71 77 Spatial autoregression model:strong consistency B.B. Bhattacharyya a, J.-J. Ren b, G.D. Richardson b;, J. Zhang b a Department of Statistics, North Carolina
More informationClosest Moment Estimation under General Conditions
Closest Moment Estimation under General Conditions Chirok Han and Robert de Jong January 28, 2002 Abstract This paper considers Closest Moment (CM) estimation with a general distance function, and avoids
More informationSome limit theorems on uncertain random sequences
Journal of Intelligent & Fuzzy Systems 34 (218) 57 515 DOI:1.3233/JIFS-17599 IOS Press 57 Some it theorems on uncertain random sequences Xiaosheng Wang a,, Dan Chen a, Hamed Ahmadzade b and Rong Gao c
More informationDiscussion of High-dimensional autocovariance matrices and optimal linear prediction,
Electronic Journal of Statistics Vol. 9 (2015) 1 10 ISSN: 1935-7524 DOI: 10.1214/15-EJS1007 Discussion of High-dimensional autocovariance matrices and optimal linear prediction, Xiaohui Chen University
More informationSimulation studies of the standard and new algorithms show that a signicant improvement in tracking
An Extended Kalman Filter for Demodulation of Polynomial Phase Signals Peter J. Kootsookos y and Joanna M. Spanjaard z Sept. 11, 1997 Abstract This letter presents a new formulation of the extended Kalman
More informationON THE BOUNDEDNESS BEHAVIOR OF THE SPECTRAL FACTORIZATION IN THE WIENER ALGEBRA FOR FIR DATA
ON THE BOUNDEDNESS BEHAVIOR OF THE SPECTRAL FACTORIZATION IN THE WIENER ALGEBRA FOR FIR DATA Holger Boche and Volker Pohl Technische Universität Berlin, Heinrich Hertz Chair for Mobile Communications Werner-von-Siemens
More informationComplete moment convergence of weighted sums for processes under asymptotically almost negatively associated assumptions
Proc. Indian Acad. Sci. Math. Sci.) Vol. 124, No. 2, May 214, pp. 267 279. c Indian Academy of Sciences Complete moment convergence of weighted sums for processes under asymptotically almost negatively
More informationA Primer on Asymptotics
A Primer on Asymptotics Eric Zivot Department of Economics University of Washington September 30, 2003 Revised: October 7, 2009 Introduction The two main concepts in asymptotic theory covered in these
More informationInference in VARs with Conditional Heteroskedasticity of Unknown Form
Inference in VARs with Conditional Heteroskedasticity of Unknown Form Ralf Brüggemann a Carsten Jentsch b Carsten Trenkler c University of Konstanz University of Mannheim University of Mannheim IAB Nuremberg
More informationis a Borel subset of S Θ for each c R (Bertsekas and Shreve, 1978, Proposition 7.36) This always holds in practical applications.
Stat 811 Lecture Notes The Wald Consistency Theorem Charles J. Geyer April 9, 01 1 Analyticity Assumptions Let { f θ : θ Θ } be a family of subprobability densities 1 with respect to a measure µ on a measurable
More informationChapter 3. Point Estimation. 3.1 Introduction
Chapter 3 Point Estimation Let (Ω, A, P θ ), P θ P = {P θ θ Θ}be probability space, X 1, X 2,..., X n : (Ω, A) (IR k, B k ) random variables (X, B X ) sample space γ : Θ IR k measurable function, i.e.
More informationConway s RATS Sequences in Base 3
3 47 6 3 Journal of Integer Sequences, Vol. 5 (0), Article.9. Conway s RATS Sequences in Base 3 Johann Thiel Department of Mathematical Sciences United States Military Academy West Point, NY 0996 USA johann.thiel@usma.edu
More informationA remark on the maximum eigenvalue for circulant matrices
IMS Collections High Dimensional Probability V: The Luminy Volume Vol 5 (009 79 84 c Institute of Mathematical Statistics, 009 DOI: 04/09-IMSCOLL5 A remark on the imum eigenvalue for circulant matrices
More informationUNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE
Surveys in Mathematics and its Applications ISSN 1842-6298 (electronic), 1843-7265 (print) Volume 5 (2010), 275 284 UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE Iuliana Carmen Bărbăcioru Abstract.
More informationLarge Sample Theory. Consider a sequence of random variables Z 1, Z 2,..., Z n. Convergence in probability: Z n
Large Sample Theory In statistics, we are interested in the properties of particular random variables (or estimators ), which are functions of our data. In ymptotic analysis, we focus on describing the
More informationHan-Ying Liang, Dong-Xia Zhang, and Jong-Il Baek
J. Korean Math. Soc. 41 (2004), No. 5, pp. 883 894 CONVERGENCE OF WEIGHTED SUMS FOR DEPENDENT RANDOM VARIABLES Han-Ying Liang, Dong-Xia Zhang, and Jong-Il Baek Abstract. We discuss in this paper the strong
More informationClosest Moment Estimation under General Conditions
Closest Moment Estimation under General Conditions Chirok Han Victoria University of Wellington New Zealand Robert de Jong Ohio State University U.S.A October, 2003 Abstract This paper considers Closest
More informationCopyright 2010 Pearson Education, Inc. Publishing as Prentice Hall.
.1 Limits of Sequences. CHAPTER.1.0. a) True. If converges, then there is an M > 0 such that M. Choose by Archimedes an N N such that N > M/ε. Then n N implies /n M/n M/N < ε. b) False. = n does not converge,
More informationarxiv: v5 [math.na] 16 Nov 2017
RANDOM PERTURBATION OF LOW RANK MATRICES: IMPROVING CLASSICAL BOUNDS arxiv:3.657v5 [math.na] 6 Nov 07 SEAN O ROURKE, VAN VU, AND KE WANG Abstract. Matrix perturbation inequalities, such as Weyl s theorem
More informationOPTIMAL DESIGNS FOR GENERALIZED LINEAR MODELS WITH MULTIPLE DESIGN VARIABLES
Statistica Sinica 21 (2011, 1415-1430 OPTIMAL DESIGNS FOR GENERALIZED LINEAR MODELS WITH MULTIPLE DESIGN VARIABLES Min Yang, Bin Zhang and Shuguang Huang University of Missouri, University of Alabama-Birmingham
More informationAn Empirical Characteristic Function Approach to Selecting a Transformation to Normality
Communications for Statistical Applications and Methods 014, Vol. 1, No. 3, 13 4 DOI: http://dx.doi.org/10.5351/csam.014.1.3.13 ISSN 87-7843 An Empirical Characteristic Function Approach to Selecting a
More informationMaximum Likelihood Drift Estimation for Gaussian Process with Stationary Increments
Austrian Journal of Statistics April 27, Volume 46, 67 78. AJS http://www.ajs.or.at/ doi:.773/ajs.v46i3-4.672 Maximum Likelihood Drift Estimation for Gaussian Process with Stationary Increments Yuliya
More informationFixed points of the derivative and k-th power of solutions of complex linear differential equations in the unit disc
Electronic Journal of Qualitative Theory of Differential Equations 2009, No. 48, -9; http://www.math.u-szeged.hu/ejqtde/ Fixed points of the derivative and -th power of solutions of complex linear differential
More informationLecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.
Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal
More informationSMOOTHED BLOCK EMPIRICAL LIKELIHOOD FOR QUANTILES OF WEAKLY DEPENDENT PROCESSES
Statistica Sinica 19 (2009), 71-81 SMOOTHED BLOCK EMPIRICAL LIKELIHOOD FOR QUANTILES OF WEAKLY DEPENDENT PROCESSES Song Xi Chen 1,2 and Chiu Min Wong 3 1 Iowa State University, 2 Peking University and
More informationPerformance Analysis of Coarray-Based MUSIC and the Cramér-Rao Bound
Performance Analysis of Coarray-Based MUSIC and the Cramér-Rao Bound Mianzhi Wang, Zhen Zhang, and Arye Nehorai Preston M. Green Department of Electrical & Systems Engineering Washington University in
More informationGaussian-Shaped Circularly-Symmetric 2D Filter Banks
Gaussian-Shaped Circularly-Symmetric D Filter Bans ADU MATEI Faculty of Electronics and Telecommunications Technical University of Iasi Bldv. Carol I no.11, Iasi 756 OMAIA Abstract: - In this paper we
More informationCHANGE DETECTION WITH UNKNOWN POST-CHANGE PARAMETER USING KIEFER-WOLFOWITZ METHOD
CHANGE DETECTION WITH UNKNOWN POST-CHANGE PARAMETER USING KIEFER-WOLFOWITZ METHOD Vijay Singamasetty, Navneeth Nair, Srikrishna Bhashyam and Arun Pachai Kannu Department of Electrical Engineering Indian
More informationOn Some Mean Value Results for the Zeta-Function and a Divisor Problem
Filomat 3:8 (26), 235 2327 DOI.2298/FIL6835I Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat On Some Mean Value Results for the
More informationD I S C U S S I O N P A P E R
I N S T I T U T D E S T A T I S T I Q U E B I O S T A T I S T I Q U E E T S C I E N C E S A C T U A R I E L L E S ( I S B A ) UNIVERSITÉ CATHOLIQUE DE LOUVAIN D I S C U S S I O N P A P E R 2014/06 Adaptive
More informationAn Akaike Criterion based on Kullback Symmetric Divergence in the Presence of Incomplete-Data
An Akaike Criterion based on Kullback Symmetric Divergence Bezza Hafidi a and Abdallah Mkhadri a a University Cadi-Ayyad, Faculty of sciences Semlalia, Department of Mathematics, PB.2390 Marrakech, Moroco
More information