EFFICIENT ALGORITHM FOR ESTIMATING THE PARAMETERS OF CHIRP SIGNAL


Ananya Lahiri^{1,2}, Debasis Kundu^{1,3,4} and Amit Mitra^{1,3}

Abstract. Chirp signals play an important role in statistical signal processing. Recently, Kundu and Nandi [8] derived the asymptotic properties of the least squares estimators of the unknown parameters of the chirp signal model in the presence of stationary noise. Unfortunately, they did not discuss any estimation procedure. In this article we propose a computationally efficient algorithm for estimating the different parameters of a chirp signal in the presence of stationary noise. Starting from proper initial guesses, the proposed algorithm produces efficient estimators in a fixed number of iterations. We also suggest how to obtain such initial guesses. The proposed estimators are consistent and asymptotically equivalent to the least squares estimators of the corresponding parameters. We perform some simulation experiments to assess the effectiveness of the proposed method, and it is observed that the proposed estimators perform very well. For illustrative purposes, we also analyze a simulated data set. Finally, some generalizations are proposed in the conclusions.

Key Words and Phrases: Chirp signals; least squares estimators; strong consistency; asymptotic distribution; linear processes; iteration.

1 Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, Pin 208016, India.
2 Part of this work has been supported by a grant from the Council of Scientific and Industrial Research (CSIR), Government of India.
3 Part of this work has been supported by a grant from the Department of Science and Technology, Government of India.
4 Corresponding author. E-mail: kundu@iitk.ac.in.

1. Introduction

In this paper we consider the following chirp signal model in the presence of additive noise:

(1)  y(n) = A^0 cos(α^0 n + β^0 n^2) + B^0 sin(α^0 n + β^0 n^2) + X(n),  n = 1, ..., N,

where A^0 and B^0 are non-zero amplitudes satisfying (A^0)^2 + (B^0)^2 ≤ M for some constant M. The frequency α^0 and the frequency rate β^0 lie strictly between 0 and π. X(n) is a stationary noise sequence of the form

(2)  X(n) = Σ_{j=-∞}^{∞} a(j) ε(n-j),   Σ_{j=-∞}^{∞} |a(j)| < ∞.

Here {ε(n)} is a sequence of independent and identically distributed (i.i.d.) random variables with mean zero, variance σ^2 and finite fourth moment. Given a sample of size N, the problem is to estimate the unknown amplitudes A^0, B^0, the frequency α^0 and the frequency rate β^0.

Chirp signals occur quite naturally in different areas of science and technology. A chirp is a signal whose frequency changes over time, and this property has been exploited quite effectively, for example to measure the distance of an object from a source. The model arises in various sonar, radar and communications problems; see for example Abatzoglou [1], Kumaresan and Verma [6], Djuric and Kay [3], Gini et al. [5], Nandi and Kundu [9], Kundu and Nandi [8] and the references cited therein. In practice the amplitudes, frequency and frequency rate are unknown, and one tries to find efficient estimators of these unknown parameters having some desired statistical properties. Recently, Kundu and Nandi [8] derived the asymptotic properties of the least squares estimators (LSEs): the LSEs are consistent and asymptotically normally distributed. It is further observed that the LSEs are efficient, with the convergence rates of the amplitude, frequency and frequency rate estimators being O_p(N^{-1/2}), O_p(N^{-3/2}) and O_p(N^{-5/2}), respectively. Here O_p(N^{-δ}) means that N^{δ} O_p(N^{-δ}) is bounded in probability.
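As a concrete illustration of the data-generating mechanism in (1)-(2), the following minimal Python sketch simulates a chirp signal contaminated by a short moving-average noise sequence. The particular parameter values, the Gaussian innovations and the two-coefficient moving average are illustrative assumptions only; they are not taken from the paper.

```python
import numpy as np

def simulate_chirp(N, A=2.0, B=1.0, alpha=1.5, beta=0.1, sigma=0.1,
                   ma_coeffs=(1.0, 0.5), seed=None):
    """Generate y(n) = A cos(alpha*n + beta*n^2) + B sin(alpha*n + beta*n^2) + X(n),
    n = 1, ..., N, where X(n) is a finite moving average of i.i.d. N(0, sigma^2)
    innovations -- a simple special case of the linear process in (2)."""
    rng = np.random.default_rng(seed)
    n = np.arange(1, N + 1)
    phase = alpha * n + beta * n ** 2
    eps = rng.normal(0.0, sigma, size=N + len(ma_coeffs) - 1)
    # X(n) = sum_j a(j) * eps(n - j), truncated to the given MA coefficients
    X = np.convolve(eps, ma_coeffs, mode="valid")
    return A * np.cos(phase) + B * np.sin(phase) + X

y = simulate_chirp(200, seed=1)
```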

Recall that a sequence of random variables {X_n} is said to be bounded in probability if for every ε > 0 there exist m and M such that P(m < X_n < M) ≥ 1 - ε for all n. Unfortunately, Kundu and Nandi [8] did not discuss how the LSEs can actually be computed. It is clear that they have to be obtained by some iterative procedure such as the Newton-Raphson method or one of its variants; see for example Tjalling [14]. The choice of initial guesses and the convergence of the iterative procedure are, however, important issues. It is well known, see Rice and Rosenblatt [11], that even in the simple sinusoidal model finding the LSEs is not a trivial issue. The problem is that the least squares surface has several local minima, and therefore, if the initial estimators are not chosen properly, an iterative procedure may converge to a local minimum rather than the global minimum. Several methods have been suggested in the literature to find efficient frequency estimators; see for example the recent article by Kundu et al. [7] in this respect.

The main aim of this paper is to find estimators of the amplitudes, frequency and frequency rate which have the same rates of convergence as the corresponding LSEs, at a reasonable computational cost. It may be observed that model (1) is a non-linear regression model with A^0, B^0 as linear parameters and α^0, β^0 as non-linear parameters. If efficient estimators of the non-linear parameters α^0 and β^0 are available, then efficient estimators of the linear parameters can be obtained by a simple linear regression technique; see for example Richards [12]. For this reason we mainly concentrate on estimating the non-linear parameters efficiently. We propose an iterative procedure for finding efficient estimators of the frequency and the frequency rate. It is shown that if the initial guesses of α^0 and β^0 have convergence rates O_p(N^{-1}) and O_p(N^{-2}) respectively, then after four iterations the algorithm produces an estimate of α^0 with convergence rate O_p(N^{-3/2}) and an estimate of β^0 with convergence rate O_p(N^{-5/2}).

Therefore, it is clear that the proposed algorithm produces estimates which have the same rates of convergence as the LSEs. Moreover, the algorithm stops after a fixed, finite number of iterations. We perform some simulation experiments to study the effectiveness of the proposed method for different sample sizes and different error variances. It is observed that the algorithm works very well: the mean squared errors (MSEs) of the proposed estimators are very close to the corresponding MSEs of the LSEs, and both are very close to the corresponding asymptotic variances of the LSEs. Therefore, the proposed method can be used very effectively in place of the LSEs. For illustrative purposes we have also analyzed a simulated data set, and the performance is very satisfactory.

The rest of the paper is organized as follows. We review the properties of the LSEs in Section 2. In Section 3 we present the proposed algorithm and provide its theoretical justification. The simulation results and the analysis of a simulated data set are presented in Section 4. Conclusions appear in Section 5. All proofs are given in the Appendix.

2. Existing Results

In this section we briefly present the properties of the LSEs for ready reference. The LSEs of the unknown parameters of model (1) can be obtained by minimizing S(Θ) with respect to Θ = (A, B, α, β), where

S(Θ) = Σ_{n=1}^{N} ( y(n) - A cos(αn + βn^2) - B sin(αn + βn^2) )^2
     = [ Y - W(α, β) (A, B)^T ]^T [ Y - W(α, β) (A, B)^T ].

Here Y = (y(1), ..., y(N))^T is the N x 1 data vector and W(θ), with θ = (α, β), is the N x 2 matrix

(3)  W(θ) = [ cos(α + β)         sin(α + β)
              cos(2α + 4β)       sin(2α + 4β)
              ...                ...
              cos(Nα + N^2 β)    sin(Nα + N^2 β) ].

Note that if α and β are known, the LSEs of A^0 and B^0 can be obtained as Â(θ) and B̂(θ) respectively, where

(4)  ( Â(θ), B̂(θ) )^T = ( W^T(θ) W(θ) )^{-1} W^T(θ) Y.

Therefore, the LSEs of α^0 and β^0 can be obtained first, by minimizing Q(α, β) with respect to α and β, where

(5)  Q(α, β) = S( Â(θ), B̂(θ), α, β ) = Y^T [ I - W(θ) ( W^T(θ) W(θ) )^{-1} W^T(θ) ] Y.

Once the LSEs of α^0 and β^0, say α̂ and β̂, are obtained, the LSEs of A^0 and B^0 follow easily as Â(α̂, β̂) and B̂(α̂, β̂) respectively; see for example Richards [12]. Kundu and Nandi [8] derived the properties of the LSEs, which can be stated as follows. The LSEs of the unknown parameters of model (1) are strongly consistent for the corresponding parameters and

( N^{1/2}(Â - A^0), N^{1/2}(B̂ - B^0), N^{3/2}(α̂ - α^0), N^{5/2}(β̂ - β^0) ) →_d N_4( 0, 4cσ^2 Σ ),

where c = Σ_{j=-∞}^{∞} a(j)^2, →_d denotes convergence in distribution, and N_4(0, 4cσ^2 Σ) denotes a 4-variate normal distribution with mean vector 0 and dispersion matrix 4cσ^2 Σ, with

(6)  Σ = 1 / ((A^0)^2 + (B^0)^2) x

         [ ((A^0)^2 + 9(B^0)^2)/2    -4 A^0 B^0                  -18 B^0    15 B^0
           -4 A^0 B^0                 (9(A^0)^2 + (B^0)^2)/2      18 A^0   -15 A^0
           -18 B^0                    18 A^0                      96       -90
            15 B^0                   -15 A^0                     -90        90    ].

In the next section we describe the algorithm, which produces consistent estimators of α^0 and β^0 with the same rates of convergence as the LSEs.
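To make the profile least squares step in (4)-(5) concrete, the following short Python sketch computes the profiled linear estimates and the criterion Q(α, β) for a given pair (α, β). The function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def design_matrix(alpha, beta, N):
    """W(theta) in (3): N x 2 matrix with rows (cos(alpha*n + beta*n^2), sin(alpha*n + beta*n^2))."""
    n = np.arange(1, N + 1)
    phase = alpha * n + beta * n ** 2
    return np.column_stack((np.cos(phase), np.sin(phase)))

def profile_Q(alpha, beta, y):
    """Return Q(alpha, beta) of (5) together with the profiled estimates (A_hat, B_hat) of (4)."""
    W = design_matrix(alpha, beta, len(y))
    AB, *_ = np.linalg.lstsq(W, y, rcond=None)   # (W'W)^{-1} W'y
    resid = y - W @ AB
    return resid @ resid, AB
```

The LSEs of (α^0, β^0) are then the minimizers of the first returned value over (0, π) x (0, π), after which the amplitude estimates are read off from the second.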

3. Proposed Algorithm

The aim of the algorithm is to find estimates of the frequency and the frequency rate with the same rates of convergence as the LSEs. We first describe a method for improving the convergence rate of general consistent estimators of α^0 and β^0; this method is the crucial tool for the main algorithm developed in this paper. Suppose α̃ is an estimator of α^0 such that α̃ - α^0 = O_p(N^{-1-δ_1}) for some δ_1 > 0, and β̃ is an estimator of β^0 such that β̃ - β^0 = O_p(N^{-2-δ_2}) for some δ_2 > 0. Then an improved estimator of α^0 can be obtained as

(7)  α̃⁺ = α̃ + (48 / N^2) Im( P_N^α / Q_N ),

with

(8)  P_N^α = Σ_{n=1}^{N} y(n) (n - N/2) e^{-i(α̃ n + β̃ n^2)}   and   Q_N = Σ_{n=1}^{N} y(n) e^{-i(α̃ n + β̃ n^2)}.

Here Im(·) denotes the imaginary part of the complex argument. Similarly, an improved estimator of β^0 can be obtained as

(9)  β̃⁺ = β̃ + (45 / N^4) Im( P_N^β / Q_N ),

with

(10)  P_N^β = Σ_{n=1}^{N} y(n) (n^2 - N^2/3) e^{-i(α̃ n + β̃ n^2)},

and Q_N as defined in (8). The following two theorems provide the justification for the improved estimators; their proofs are given in the Appendix.

Theorem 1: If α̃ - α^0 = O_p(N^{-1-δ_1}) for some δ_1 > 0, then
(a) α̃⁺ - α^0 = O_p(N^{-1-2δ_1}) if δ_1 ≤ 1/4;
(b) N^{3/2}(α̃⁺ - α^0) →_d N(0, σ_1^2) if δ_1 > 1/4, where σ_1^2 = 384 c σ^2 / ((A^0)^2 + (B^0)^2) is the asymptotic variance of the LSE of α^0.
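A minimal Python sketch of one correction step, written directly from (7)-(10) as given above; the function name is illustrative and the code is only a sketch of that update, not of any other part of the procedure.

```python
import numpy as np

def refine_once(alpha, beta, y):
    """One improvement step following (7)-(10): given preliminary (alpha, beta),
    return (alpha + 48/N^2 * Im(P_alpha/Q_N), beta + 45/N^4 * Im(P_beta/Q_N))."""
    N = len(y)
    n = np.arange(1, N + 1)
    e = np.exp(-1j * (alpha * n + beta * n ** 2))
    Q = np.sum(y * e)
    P_alpha = np.sum(y * (n - N / 2.0) * e)
    P_beta = np.sum(y * (n ** 2 - N ** 2 / 3.0) * e)
    return (alpha + (48.0 / N ** 2) * np.imag(P_alpha / Q),
            beta + (45.0 / N ** 4) * np.imag(P_beta / Q))
```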

Theorem 2: If β̃ - β^0 = O_p(N^{-2-δ_2}) for some δ_2 > 0, then
(a) β̃⁺ - β^0 = O_p(N^{-2-2δ_2}) if δ_2 ≤ 1/4;
(b) N^{5/2}(β̃⁺ - β^0) →_d N(0, σ_2^2) if δ_2 > 1/4, where σ_2^2 = 360 c σ^2 / ((A^0)^2 + (B^0)^2) is the asymptotic variance of the LSE of β^0.

Theorem 3: If α̃ - α^0 = O_p(N^{-1-δ_1}) and β̃ - β^0 = O_p(N^{-2-δ_2}), then for δ_1 > 1/4 and δ_2 > 1/4,

Cov( N^{3/2}(α̃⁺ - α^0), N^{5/2}(β̃⁺ - β^0) ) → 360 c σ^2 / ((A^0)^2 + (B^0)^2).

We now show how, starting from initial guesses α̃ and β̃ with convergence rates α̃ - α^0 = O_p(N^{-1}) and β̃ - β^0 = O_p(N^{-2}), the above procedure can be used to obtain efficient estimators. Finding initial guesses with these convergence rates is not difficult. From Rice and Rosenblatt [11] it follows that if the minimizers (α̂, β̂) of Q(α, β) satisfy N(α̂ - α^0) → 0 a.s. and N^2(β̂ - β^0) → 0 a.s., then such initial guesses exist when the search is carried out over the grid (πj/N, πk/N^2), j = 1, ..., N, k = 1, ..., N^2. Let Θ^0 = (A^0, B^0, α^0, β^0) be the true parameter value and let D be the diagonal matrix D = diag( N^{-1/2}, N^{-1/2}, N^{-3/2}, N^{-5/2} ). Note that (see Kundu and Nandi [8])

(11)  S'(Θ^0) D = -(Θ̂ - Θ^0) D^{-1} [ D S''(Θ̄) D ],

where S'(·) and S''(·) are the derivative vector and the Hessian matrix of S(·) respectively, and Θ̄ is a point on the line joining Θ̂ and Θ^0. It has been shown in Kundu and Nandi [8] that

(12)  lim_{N→∞} D S''(Θ̄) D = lim_{N→∞} D S''(Θ^0) D   a.s.,

which is a positive definite matrix. Then (11) gives

(13)  (Θ̂ - Θ^0) (√N D)^{-1} = -[ (1/√N) S'(Θ^0) D ] [ D S''(Θ̄) D ]^{-1}.

Since (1/√N) S'(Θ^0) D → 0 a.s., it follows that (Θ̂ - Θ^0)(√N D)^{-1} → 0 a.s., which means N(α̂ - α^0) → 0 a.s. and N^2(β̂ - β^0) → 0 a.s.
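A sketch of the grid search over (πj/N, πk/N^2) that supplies initial guesses of the required accuracy. To keep the inner loop cheap it maximizes the periodogram-type quantity |Σ_n y(n) e^{-i(αn + βn^2)}|^2 / N rather than evaluating Q(α, β) of (5) directly; since W^T(θ)W(θ) ≈ (N/2)I for (α, β) away from 0 and π, the two searches pick essentially the same grid point. This substitution and the chunked evaluation are implementation choices made here, not prescriptions from the paper.

```python
import numpy as np

def initial_guess(y, beta_chunk=2000):
    """Coarse search over the grid (pi*j/N, pi*k/N^2), j = 1..N, k = 1..N^2.
    Returns (alpha, beta) accurate to O_p(1/N) and O_p(1/N^2)."""
    N = len(y)
    n = np.arange(1, N + 1)
    alphas = np.pi * np.arange(1, N + 1) / N
    betas = np.pi * np.arange(1, N * N + 1) / N ** 2
    best_val, best_ab = -np.inf, (alphas[0], betas[0])
    # the full grid has about N^3 points; this brute force is practical only for moderate N
    for alpha in alphas:
        base = y * np.exp(-1j * alpha * n)
        for start in range(0, len(betas), beta_chunk):
            b = betas[start:start + beta_chunk]
            # I(alpha, beta) = |sum_n y(n) exp(-i(alpha*n + beta*n^2))|^2 / N
            vals = np.abs(np.exp(-1j * np.outer(b, n ** 2)) @ base) ** 2 / N
            k = int(np.argmax(vals))
            if vals[k] > best_val:
                best_val, best_ab = vals[k], (alpha, b[k])
    return best_ab
```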

The main idea of the proposed algorithm, first suggested by Bai et al. [2], is not to use the whole sample at the beginning: we use part of the sample in the early iterations and gradually proceed to the complete sample. Denote by α̃^{(i)} and β̃^{(i)} the estimates of α^0 and β^0 obtained at the i-th iteration, with α̃^{(0)} and β̃^{(0)} the initial guesses. The algorithm can be described as follows.

Algorithm:

Step 1: Choose N_1 = N^{8/9}. Then α̃^{(0)} - α^0 = O_p(N^{-1}) = O_p(N_1^{-9/8}) and β̃^{(0)} - β^0 = O_p(N^{-2}) = O_p(N_1^{-9/4}). Apply (7) and (9) with sample size N_1. After the first iteration, α̃^{(1)} - α^0 = O_p(N_1^{-5/4}) = O_p(N^{-10/9}) and β̃^{(1)} - β^0 = O_p(N_1^{-5/2}) = O_p(N^{-20/9}).

Step 2: Choose N_2 = N^{80/81}. Then α̃^{(1)} - α^0 = O_p(N_2^{-9/8}) and β̃^{(1)} - β^0 = O_p(N_2^{-9/4}). Apply (7) and (9) with sample size N_2. After the second iteration, α̃^{(2)} - α^0 = O_p(N_2^{-5/4}) = O_p(N^{-100/81}) and β̃^{(2)} - β^0 = O_p(N_2^{-5/2}) = O_p(N^{-200/81}).

Step 3: Choose N_3 = N. Then α̃^{(2)} - α^0 = O_p(N_3^{-1-19/81}) and β̃^{(2)} - β^0 = O_p(N_3^{-2-38/81}). Apply (7) and (9). After the third iteration, α̃^{(3)} - α^0 = O_p(N^{-1-38/81}) and β̃^{(3)} - β^0 = O_p(N^{-2-76/81}).

Step 4: Choose N_4 = N and apply (7) and (9) once more. This yields the required convergence rates, namely α̃^{(4)} - α^0 = O_p(N^{-3/2}) and β̃^{(4)} - β^0 = O_p(N^{-5/2}).
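The following Python sketch strings the four steps together: the correction of (7) and (9) applied on sub-samples of sizes N^{8/9}, N^{80/81}, N and N. The text does not state explicitly which part of the sample is used at each step, so taking the first N_k observations here (in the spirit of Bai et al. [2]) is an assumption, as are the helper names.

```python
import numpy as np

def four_step_estimates(y, alpha0, beta0):
    """Refine initial guesses (alpha0, beta0), accurate to O_p(1/N) and O_p(1/N^2),
    by applying the correction (7)/(9) on sub-samples of increasing size."""
    N = len(y)

    def correct(alpha, beta, m):
        # correction (7) and (9) computed from the first m observations
        n = np.arange(1, m + 1)
        e = np.exp(-1j * (alpha * n + beta * n ** 2))
        Q = np.sum(y[:m] * e)
        P_a = np.sum(y[:m] * (n - m / 2.0) * e)
        P_b = np.sum(y[:m] * (n ** 2 - m ** 2 / 3.0) * e)
        return (alpha + (48.0 / m ** 2) * np.imag(P_a / Q),
                beta + (45.0 / m ** 4) * np.imag(P_b / Q))

    alpha, beta = alpha0, beta0
    for m in (int(round(N ** (8 / 9))), int(round(N ** (80 / 81))), N, N):
        alpha, beta = correct(alpha, beta, m)
    return alpha, beta
```

Once α̃^{(4)} and β̃^{(4)} are available, the amplitude estimates follow from the linear regression step (4).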

9 CHIRP 9 Table. Result for model with i.i.d. error sample size σ 0.05 σ 0.5 N50 PARA ASYV ( E-04) ( E-08) ( E-03) ( E-07) MEAN (Algo) MSE (Algo) ( E-03) ( E-06) ( E-03) ( E-06) MEAN (LSE) MSE (LSE) ( E-04) ( E-08) ( 0.866E-03) ( E-07) N00 PARA ASYV ( E-05) ( E-09) ( E-04) ( E-08) MEAN (Algo) MSE (Algo) ( E-03) ( E-08) ( E-03) ( E-08) MEAN (LSE) MSE (LSE) ( E-05) ( E-09) ( E-04) ( 0.560E-08) N00 PARA ASYV ( E-06) ( E-) ( E-05) ( E-0) MEAN (Algo) MSE (Algo) ( E-05) ( E-09) ( E-04) ( E-09) MEAN (LSE) MSE (LSE) ( E-05) ( E-0) ( E-04) ( E-09) method behaves in practice. We consider the following model (4) y(n).0 cos(.75n.05n ).0 sin(.75n.05n ) X(n). We have considered (i) X(n) ε(n) (i.i.d. errors) and (ii) X(n) ε(n) 0.5ε(n ) (stationary errors), where ε(n) s are i.i.d. normal random variables with mean 0 and variance σ. We have taken N 50, 00 and 00, σ 0.05 and 0.5. In each case we have obtained the initial guesses as has been suggested in Section 3. For each N and σ, we compute the average estimates of α 0 and β 0, and the associate MSEs based on 000 replications. The results are reported in Table and Table. For comparison purposes we have also computed the LSEs and the corresponding asymptotic variances. From the results presented in Tables and, it is clear that the performances of the estimators obtained by the proposed algorithm are quite satisfactory in comparison to the corresponding performances of the LSEs. The average MSEs of the proposed estimators are very close to the asymptotic variance of the corresponding LSEs. The performances are quite good even with moderate sample sizes. It is clear that even with the four steps of iterations, we are able to achieve the same accuracy of the LSEs.

Table 2. Results for the model with stationary errors: asymptotic variances (ASYV) and the means and MSEs of the estimates of α^0 and β^0 obtained by the proposed algorithm (Algo) and by least squares (LSE), for N = 50, 100, 200 and σ = 0.05, 0.5.

4.2. Data Analysis. In this subsection we present the analysis of a simulated data set, mainly for illustrative purposes. We generated a sample of size 200 from the chirp model

(15)  y(n) = .5 cos(.0 n + .0 n^2) + .5 sin(.0 n + .0 n^2) + X(n),

where X(n) is a sequence of i.i.d. Gaussian random variables with mean zero and variance 0.5. The data are presented in Figure 1, and the least squares surface Q(α, β), as defined in (5), is presented in Figure 2. Since Q(α, β) has only one trough, the initial guesses can be obtained as suggested in Section 3. Using the algorithm we obtained the estimates of A^0, B^0, α^0 and β^0; the associated 95% confidence intervals, obtained using a bootstrap approach, are (0.037, .840), (.369, .3475), (0.9998, 1.000) and (0.9999, 1.000) respectively. The residuals are plotted in Figure 3; a run test indicates that the residuals are random, as expected.

Figure 1. Simulated data from the chirp model (15).

Figure 2. Least squares surface plot.

Figure 3. Residual plot.
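The text states only that the 95% intervals above were obtained by a bootstrap approach. The following Python sketch shows one plausible implementation, a residual bootstrap around the fitted chirp that refits each resample by a local minimization of the least squares criterion started at the original estimate. The resampling scheme, the optimizer and all names here are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def bootstrap_ci(y, theta_hat, n_boot=500, level=0.95, seed=0):
    """Percentile bootstrap confidence intervals for theta = (A, B, alpha, beta)."""
    rng = np.random.default_rng(seed)
    n = np.arange(1, len(y) + 1)

    def mean_fn(theta):
        A, B, a, b = theta
        return A * np.cos(a * n + b * n ** 2) + B * np.sin(a * n + b * n ** 2)

    def S(theta, data):                      # least squares criterion S(Theta)
        return np.sum((data - mean_fn(theta)) ** 2)

    fit = mean_fn(theta_hat)
    resid = y - fit                          # residual bootstrap: resample residuals
    draws = np.empty((n_boot, 4))
    for j in range(n_boot):
        y_star = fit + rng.choice(resid, size=len(y), replace=True)
        draws[j] = minimize(S, theta_hat, args=(y_star,), method="Nelder-Mead").x
    lo = (1.0 - level) / 2.0
    return np.quantile(draws, [lo, 1.0 - lo], axis=0)
```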

5. Conclusion

In this paper we have proposed an efficient algorithm to estimate the non-linear parameters of a one dimensional chirp signal in the presence of stationary noise. The main advantage of the proposed algorithm is that, starting from reasonable initial values for both the frequency and the frequency rate, it produces in only four steps estimates which are asymptotically equivalent to the LSEs. We have also described how to obtain such initial guesses from the least squares surface. We have performed extensive simulation experiments, and it is observed that the proposed algorithm works very well and produces estimates which are quite close to the LSEs. Although in this paper we have considered the estimation procedure for the ordinary chirp model in the presence of stationary noise, the method may be extended to the generalized chirp signal model proposed by Djuric and Kay [3]. That work is in progress and will be reported later.

Acknowledgements: The authors would like to thank the associate editor and a referee for their valuable comments.

Appendix

We need the following lemma, which can be proved using the result of Vinogradov [15], to prove the theorems.

Lemma 1. If (θ_1, θ_2) ∈ (0, π) x (0, π) and t = 0, 1, 2, then, except for a countable number of points, the following hold:

(i)
(16)  lim_{N→∞} (1/N) Σ_{n=1}^{N} cos(θ_1 n + θ_2 n^2) = lim_{N→∞} (1/N) Σ_{n=1}^{N} sin(θ_1 n + θ_2 n^2) = 0.

(ii)
(17)  lim_{N→∞} (1/N^{t+1}) Σ_{n=1}^{N} n^t cos^2(θ_1 n + θ_2 n^2) = 1 / (2(t+1)),

(18)  lim_{N→∞} (1/N^{t+1}) Σ_{n=1}^{N} n^t sin^2(θ_1 n + θ_2 n^2) = 1 / (2(t+1)),

(19)  lim_{N→∞} (1/N^{t+1}) Σ_{n=1}^{N} n^t sin(θ_1 n + θ_2 n^2) cos(θ_1 n + θ_2 n^2) = 0,

(20)  lim_{N→∞} (1/N^{t+1}) Σ_{n=1}^{N-h} n^t cos(θ_1 n + θ_2 n^2) cos(θ_1 (n+h) + θ_2 (n+h)^2)
      = 1/(2(t+1)) for h = 0, and 0 for h ≠ 0.

Proof of Theorem 1: Write

(21)  Q_N = Σ_{t=1}^{N} y(t) e^{-i(α̃ t + β̃ t^2)}
          = (1/2)(A^0 - iB^0) Σ_{t=1}^{N} e^{i[(α^0 - α̃)t + (β^0 - β̃)t^2]}
            + (1/2)(A^0 + iB^0) Σ_{t=1}^{N} e^{-i[(α^0 + α̃)t + (β^0 + β̃)t^2]}
            + Σ_{t=1}^{N} X(t) e^{-i(α̃ t + β̃ t^2)},

(22)  P_N^α = Σ_{t=1}^{N} y(t)(t - N/2) e^{-i(α̃ t + β̃ t^2)}
            = (1/2)(A^0 - iB^0) Σ_{t=1}^{N} (t - N/2) e^{i[(α^0 - α̃)t + (β^0 - β̃)t^2]}
              + (1/2)(A^0 + iB^0) Σ_{t=1}^{N} (t - N/2) e^{-i[(α^0 + α̃)t + (β^0 + β̃)t^2]}
              + Σ_{t=1}^{N} X(t)(t - N/2) e^{-i(α̃ t + β̃ t^2)}.

By the Lemma, the second term in equation (21) is o_p(N).

Using the Lemma and a bivariate Taylor series expansion, the first term in equation (21) becomes

(1/2)(A^0 - iB^0) Σ_{t=1}^{N} e^{i[(α^0 - α̃)t + (β^0 - β̃)t^2]}
 = (1/2)(A^0 - iB^0) Σ_{t=1}^{N} [ 1 + i(α^0 - α̃)t + i(β^0 - β̃)t^2
     - (1/2!){ (α^0 - α̃)^2 t^2 + (β^0 - β̃)^2 t^4 + 2(α^0 - α̃)(β^0 - β̃)t^3 } e^{i[(α^0 - α^*)t + (β^0 - β^*)t^2]} ]
 = (1/2)(A^0 - iB^0) [ O(N) + O_p(N^{-1-δ_1}) O(N^2) + O_p(N^{-2-δ_2}) O(N^3)
     + O_p(N^{-2-2δ_1}) O(N^3) + O_p(N^{-4-2δ_2}) O(N^5) + O_p(N^{-3-δ_1-δ_2}) O(N^4) ]
 = (1/2)(A^0 - iB^0) [ O(N) + O_p(N^{1 - min(δ_1, δ_2)}) ],

where (α^*, β^*) is a point on the line joining (α^0, β^0) and (α̃, β̃).

To handle the third term in equation (21) we need the following two observations:
(i) for t = 0, 1, 2, the sums N^{-t-1/2} Σ_n n^t sin(α^0 n + β^0 n^2) X(n) and N^{-t-1/2} Σ_n n^t cos(α^0 n + β^0 n^2) X(n) satisfy the conditions of the central limit theorem (CLT) for stochastic processes, see Fuller [4], when (α^0, β^0) are the true values in the model;
(ii) sup_{α, β} (1/N) | Σ_{t=1}^{N} X(t) e^{i(αt + βt^2)} | → 0 a.s., which can be proved along the same lines as in Kundu and Nandi [8].

Now choose L so large that 1 - L min(δ_1, δ_2) < 0. Expanding e^{-i(α̃ t + β̃ t^2)} around (α^0, β^0) in a bivariate Taylor series of order L, the third term in equation (21) becomes

Σ_{t=1}^{N} X(t) e^{-i(α̃ t + β̃ t^2)}
 = Σ_{t=1}^{N} X(t) e^{-i(α^0 t + β^0 t^2)}
   + Σ_{l=1}^{L-1} (1/l!) Σ_{k=0}^{l} (l choose k) Σ_t X(t) ( -i(α̃ - α^0) t )^k ( -i(β̃ - β^0) t^2 )^{l-k} e^{-i(α^0 t + β^0 t^2)}
   + a remainder term of order L.

Since (α̃ - α^0)^k (β̃ - β^0)^{l-k} = O_p(N^{-k(1+δ_1) - (l-k)(2+δ_2)}) and, by observation (i), Σ_t X(t) t^{k+2(l-k)} e^{-i(α^0 t + β^0 t^2)} = O_p(N^{k+2(l-k)+1/2}), each term with 1 ≤ l ≤ L-1 is O_p(N^{1/2 - l min(δ_1, δ_2)}), while the remainder term, bounded using observation (ii) and the choice of L, is o_p(N). Therefore

Σ_{t=1}^{N} X(t) e^{-i(α̃ t + β̃ t^2)} = Σ_{t=1}^{N} X(t) e^{-i(α^0 t + β^0 t^2)} + o_p(N) = o_p(N),

and equation (21) becomes

Q_N = (N/2)(A^0 - iB^0) + o_p(N).

Exactly the same expansion applied to the third term of equation (22), with L chosen so that 3/2 - L min(δ_1, δ_2) < 0, gives

Σ_{t=1}^{N} X(t)(t - N/2) e^{-i(α̃ t + β̃ t^2)} = Σ_{t=1}^{N} X(t)(t - N/2) e^{-i(α^0 t + β^0 t^2)} + o_p(N^{3/2}).

Again using the bivariate Taylor series expansion and the Lemma, the first term in equation (22) becomes

(1/2)(A^0 - iB^0) Σ_{t=1}^{N} (t - N/2) e^{i[(α^0 - α̃)t + (β^0 - β̃)t^2]}
 = (1/2)(A^0 - iB^0) [ O(N) + i(α^0 - α̃){ O(N^3) + O_p(N^{3-δ_1}) + O_p(N^{3-δ_2}) }
     + i(β^0 - β̃){ O(N^4) + O_p(N^{4-δ_1}) + O_p(N^{4-δ_2}) } + o_p(N^{3/2}) ],

since Σ_t (t - N/2) = O(N), Σ_t t(t - N/2) = O(N^3), Σ_t t^2(t - N/2) = O(N^4), and the higher order Taylor terms are of smaller order. For the second term in equation (22), the Lemma gives

(1/2)(A^0 + iB^0) Σ_{t=1}^{N} (t - N/2) e^{-i[(α^0 + α̃)t + (β^0 + β̃)t^2]} = O_p(N).

Combining the three terms, equation (22) becomes

P_N^α = (1/2)(A^0 - iB^0) i(α^0 - α̃) [ O(N^3) + O_p(N^{3-δ_1}) + O_p(N^{3-δ_2}) ]
        + (1/2)(A^0 - iB^0) i(β^0 - β̃) [ O(N^4) + O_p(N^{4-δ_1}) + O_p(N^{4-δ_2}) ]
        + (1/2)(A^0 - iB^0) O_p(N)
        + Σ_{t=1}^{N} X(t)(t - N/2) e^{-i(α^0 t + β^0 t^2)} + o_p(N^{3/2}).

Therefore, from (7),

α̃⁺ = α̃ + (48/N^2) Im[ P_N^α / Q_N ]
    = α̃ + (α^0 - α̃)(1 + o_p(N^{-δ_1}))
      + (48/N^2) Im[ Σ_{t=1}^{N} X(t)(t - N/2) e^{-i(α^0 t + β^0 t^2)} / ( (N/2)(A^0 - iB^0) ) ].

Define

Z_1 = (48/N^2) Im[ Σ_{t=1}^{N} X(t)(t - N/2) e^{-i(α^0 t + β^0 t^2)} / ( (N/2)(A^0 - iB^0) ) ].

Using the Lemma, the variance of Z_1 is asymptotically the same as the asymptotic variance of the LSE of α^0. Hence, if α̃ - α^0 = O_p(N^{-1-δ_1}) with δ_1 ∈ (0, 1/4], then α̃⁺ - α^0 = O_p(N^{-1-2δ_1}), whereas if δ_1 > 1/4, then N^{3/2}(α̃⁺ - α^0) →_d N(0, σ_1^2) by the CLT for stochastic processes of Fuller [4], where σ_1^2 = 384 c σ^2 / ((A^0)^2 + (B^0)^2) is the asymptotic variance of the LSE of α^0 and c = Σ_{j=-∞}^{∞} a(j)^2. This proves Theorem 1.

Proof of Theorem 2: Proceeding as before,

(23)  P_N^β = Σ_{t=1}^{N} y(t)(t^2 - N^2/3) e^{-i(α̃ t + β̃ t^2)}
            = (1/2)(A^0 - iB^0) Σ_{t=1}^{N} (t^2 - N^2/3) e^{i[(α^0 - α̃)t + (β^0 - β̃)t^2]}
              + (1/2)(A^0 + iB^0) Σ_{t=1}^{N} (t^2 - N^2/3) e^{-i[(α^0 + α̃)t + (β^0 + β̃)t^2]}
              + Σ_{t=1}^{N} X(t)(t^2 - N^2/3) e^{-i(α̃ t + β̃ t^2)}.

A Taylor series expansion of the first term in equation (23) gives

(1/2)(A^0 - iB^0) Σ_{t=1}^{N} (t^2 - N^2/3) e^{i[(α^0 - α̃)t + (β^0 - β̃)t^2]}
 = (1/2)(A^0 - iB^0) [ O(N^2) + i(β^0 - β̃){ O(N^5) + O_p(N^{5-δ_1}) + O_p(N^{5-δ_2}) }
     + i(α^0 - α̃){ O(N^4) + O_p(N^{4-δ_1}) + O_p(N^{4-δ_2}) } + o_p(N^{5/2}) ],

since Σ_t (t^2 - N^2/3) = O(N^2), Σ_t t(t^2 - N^2/3) = O(N^4) and Σ_t t^2(t^2 - N^2/3) = O(N^5).

Expanding the noise term of equation (23) by a bivariate Taylor series of order L, with L chosen so that 5/2 - L min(δ_1, δ_2) < 0, and using observations (i) and (ii) from the proof of Theorem 1, we obtain

Σ_{t=1}^{N} X(t)(t^2 - N^2/3) e^{-i(α̃ t + β̃ t^2)}
 = Σ_{t=1}^{N} X(t)(t^2 - N^2/3) e^{-i(α^0 t + β^0 t^2)}
   + Σ_{l=1}^{L-1} (1/l!) Σ_{k=0}^{l} (l choose k) Σ_t X(t)(t^2 - N^2/3) ( -i(α̃ - α^0) t )^k ( -i(β̃ - β^0) t^2 )^{l-k} e^{-i(α^0 t + β^0 t^2)}
   + a remainder term
 = Σ_{t=1}^{N} X(t)(t^2 - N^2/3) e^{-i(α^0 t + β^0 t^2)} + O_p(N^{5/2 - min(δ_1, δ_2)}) + o_p(N^{5/2})
 = Σ_{t=1}^{N} X(t)(t^2 - N^2/3) e^{-i(α^0 t + β^0 t^2)} + o_p(N^{5/2}).

For the second term in equation (23), the Lemma gives Σ_{t=1}^{N} (t^2 - N^2/3) e^{-i[(α^0 + α̃)t + (β^0 + β̃)t^2]} = O_p(N^2). Then equation (23) becomes

P_N^β = (1/2)(A^0 - iB^0) i(β^0 - β̃) [ O(N^5) + O_p(N^{5-δ_1}) + O_p(N^{5-δ_2}) ]
        + (1/2)(A^0 - iB^0) i(α^0 - α̃) [ O(N^4) + O_p(N^{4-δ_1}) + O_p(N^{4-δ_2}) ]
        + (1/2)(A^0 - iB^0) O_p(N^2)
        + Σ_{t=1}^{N} X(t)(t^2 - N^2/3) e^{-i(α^0 t + β^0 t^2)} + o_p(N^{5/2}),

and therefore, from (9),

β̃⁺ = β̃ + (45/N^4) Im[ P_N^β / Q_N ]
    = β̃ + (β^0 - β̃)(1 + o_p(N^{-δ_2}))
      + (45/N^4) Im[ Σ_{t=1}^{N} X(t)(t^2 - N^2/3) e^{-i(α^0 t + β^0 t^2)} / ( (N/2)(A^0 - iB^0) ) ].

Define

Z_2 = (45/N^4) Im[ Σ_{t=1}^{N} X(t)(t^2 - N^2/3) e^{-i(α^0 t + β^0 t^2)} / ( (N/2)(A^0 - iB^0) ) ].

Using the Lemma, the variance of Z_2 is asymptotically the same as the asymptotic variance of the LSE of β^0. Hence, if β̃ - β^0 = O_p(N^{-2-δ_2}) with δ_2 ∈ (0, 1/4], then β̃⁺ - β^0 = O_p(N^{-2-2δ_2}), whereas if δ_2 > 1/4, then N^{5/2}(β̃⁺ - β^0) →_d N(0, σ_2^2) by the CLT for stochastic processes of Fuller [4], where σ_2^2 = 360 c σ^2 / ((A^0)^2 + (B^0)^2) is the asymptotic variance of the LSE of β^0. This proves Theorem 2.
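Before turning to the covariance computation, it may help to record where the constants 384 and 360 in Theorems 1 and 2 come from. The LaTeX fragment below is only a quick consistency check (it is not the route taken in the proofs): it inverts the limiting normalized design matrix of model (1), combines it with the noise factor implicit in the dispersion matrix of Section 2, and reads off the (α, β) block.

```latex
% mu(n; A, B, alpha, beta) = A cos(alpha n + beta n^2) + B sin(alpha n + beta n^2),
% rho = (A^0)^2 + (B^0)^2, D = diag(N^{-1/2}, N^{-1/2}, N^{-3/2}, N^{-5/2}).
\lim_{N\to\infty} D \Big[\sum_{n=1}^{N} \nabla\mu(n)\,\nabla\mu(n)^{T}\Big] D
  = \frac{1}{2}
    \begin{pmatrix}
      1 & 0 & B^0/2 & B^0/3 \\
      0 & 1 & -A^0/2 & -A^0/3 \\
      B^0/2 & -A^0/2 & \rho/3 & \rho/4 \\
      B^0/3 & -A^0/3 & \rho/4 & \rho/5
    \end{pmatrix}
  =: \tfrac{1}{2} H .
% The (alpha, beta) block of H^{-1} is the inverse of the Schur complement
% S = \rho \begin{pmatrix} 1/12 & 1/12 \\ 1/12 & 4/45 \end{pmatrix}, namely
S^{-1} = \frac{1}{\rho}\begin{pmatrix} 192 & -180 \\ -180 & 180 \end{pmatrix},
\qquad
2 c \sigma^2 S^{-1}
  = \frac{c\sigma^2}{\rho}\begin{pmatrix} 384 & -360 \\ -360 & 360 \end{pmatrix},
% which reproduces sigma_1^2 = 384 c sigma^2 / rho and sigma_2^2 = 360 c sigma^2 / rho.
```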

Proof of Theorem 3: For the covariance of Z_1 and Z_2 it is enough to evaluate, for k = 0, 1, 2, ..., the limits

g_{1k} = lim_{N→∞} (1/N^4) Σ_{n=1}^{N-k} (n - N/2)((n+k)^2 - N^2/3)
          ( B^0 cos(α^0 n + β^0 n^2) - A^0 sin(α^0 n + β^0 n^2) )
          ( B^0 cos(α^0 (n+k) + β^0 (n+k)^2) - A^0 sin(α^0 (n+k) + β^0 (n+k)^2) ),

g_{2k} = lim_{N→∞} (1/N^4) Σ_{n=1}^{N-k} (n^2 - N^2/3)((n+k) - N/2)
          ( B^0 cos(α^0 n + β^0 n^2) - A^0 sin(α^0 n + β^0 n^2) )
          ( B^0 cos(α^0 (n+k) + β^0 (n+k)^2) - A^0 sin(α^0 (n+k) + β^0 (n+k)^2) ).

Using (20), both limits equal ((A^0)^2 + (B^0)^2)/24 for k = 0 and 0 for k ≠ 0. Consequently,

Cov( N^{3/2} Z_1, N^{5/2} Z_2 ) → 360 c σ^2 / ((A^0)^2 + (B^0)^2),

which has the same order as the asymptotic covariance of the corresponding LSEs; see Kundu and Nandi [8]. This proves Theorem 3.

References

[1] Abatzoglou, T. (1986), Fast maximum likelihood joint estimation of frequency and frequency rate, IEEE Transactions on Aerospace and Electronic Systems.
[2] Bai, Z.D., Rao, C.R., Chow, M. and Kundu, D. (2003), Efficient algorithm for estimating the parameters of superimposed complex exponential model, Journal of Statistical Planning and Inference, vol. 110.
[3] Djuric, P.M. and Kay, S.M. (1990), Parameter estimation of chirp signals, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 38.
[4] Fuller, W. (1996), Introduction to Statistical Time Series, 2nd edition, John Wiley and Sons, New York.
[5] Gini, F., Montanari, M. and Verrazzani, L. (2000), Estimation of chirp signals in compound Gaussian clutter: a cyclostationary approach, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 48.

[6] Kumaresan, R. and Verma, S. (1987), On estimating the parameters of chirp signals using rank reduction techniques, Proceedings of the 21st Asilomar Conference, Pacific Grove, California.
[7] Kundu, D., Bai, Z.D., Nandi, S. and Bai, L. (2011), Super efficient frequency estimation, Journal of Statistical Planning and Inference, vol. 141.
[8] Kundu, D. and Nandi, S. (2008), Parameter estimation of chirp signals in presence of stationary noise, Statistica Sinica, vol. 18.
[9] Nandi, S. and Kundu, D. (2004), Asymptotic properties of the least squares estimators of the parameters of the chirp signals, Annals of the Institute of Statistical Mathematics, vol. 56, 529-544.
[10] Prasad, A., Nandi, S. and Kundu, D., An efficient and fast algorithm for estimating the parameters of sinusoidal signals, Sankhya, vol. 55.
[11] Rice, J.A. and Rosenblatt, M. (1988), On frequency estimation, Biometrika, vol. 75.
[12] Richards, F.S.G. (1961), A method of maximum likelihood estimation, Journal of the Royal Statistical Society, Series B, vol. 23.
[13] Saha, S. and Kay, S. (2002), Maximum likelihood parameter estimation of superimposed chirps using Monte Carlo importance sampling, IEEE Transactions on Signal Processing, vol. 50.
[14] Tjalling, J.Y. (1995), Historical development of the Newton-Raphson method, SIAM Review, vol. 37.
[15] Vinogradov, I.M. (1954), The method of trigonometrical sums in the theory of numbers, Interscience; translated from the Russian, revised and annotated by K.F. Roth and Anne Davenport; reprint of the 1954 translation, Dover Publications, Mineola, NY, 2004.
[16] Weyl, H. (1916), Ueber die Gleichverteilung von Zahlen mod. Eins, Math. Ann., vol. 77.


More information

Bahadur representations for bootstrap quantiles 1

Bahadur representations for bootstrap quantiles 1 Bahadur representations for bootstrap quantiles 1 Yijun Zuo Department of Statistics and Probability, Michigan State University East Lansing, MI 48824, USA zuo@msu.edu 1 Research partially supported by

More information

SMOOTHED BLOCK EMPIRICAL LIKELIHOOD FOR QUANTILES OF WEAKLY DEPENDENT PROCESSES

SMOOTHED BLOCK EMPIRICAL LIKELIHOOD FOR QUANTILES OF WEAKLY DEPENDENT PROCESSES Statistica Sinica 19 (2009), 71-81 SMOOTHED BLOCK EMPIRICAL LIKELIHOOD FOR QUANTILES OF WEAKLY DEPENDENT PROCESSES Song Xi Chen 1,2 and Chiu Min Wong 3 1 Iowa State University, 2 Peking University and

More information

INSTRUMENTAL-VARIABLE CALIBRATION ESTIMATION IN SURVEY SAMPLING

INSTRUMENTAL-VARIABLE CALIBRATION ESTIMATION IN SURVEY SAMPLING Statistica Sinica 24 (2014), 1001-1015 doi:http://dx.doi.org/10.5705/ss.2013.038 INSTRUMENTAL-VARIABLE CALIBRATION ESTIMATION IN SURVEY SAMPLING Seunghwan Park and Jae Kwang Kim Seoul National Univeristy

More information

An Introduction to Multivariate Statistical Analysis

An Introduction to Multivariate Statistical Analysis An Introduction to Multivariate Statistical Analysis Third Edition T. W. ANDERSON Stanford University Department of Statistics Stanford, CA WILEY- INTERSCIENCE A JOHN WILEY & SONS, INC., PUBLICATION Contents

More information

Econ 583 Final Exam Fall 2008

Econ 583 Final Exam Fall 2008 Econ 583 Final Exam Fall 2008 Eric Zivot December 11, 2008 Exam is due at 9:00 am in my office on Friday, December 12. 1 Maximum Likelihood Estimation and Asymptotic Theory Let X 1,...,X n be iid random

More information

COPYRIGHTED MATERIAL CONTENTS. Preface Preface to the First Edition

COPYRIGHTED MATERIAL CONTENTS. Preface Preface to the First Edition Preface Preface to the First Edition xi xiii 1 Basic Probability Theory 1 1.1 Introduction 1 1.2 Sample Spaces and Events 3 1.3 The Axioms of Probability 7 1.4 Finite Sample Spaces and Combinatorics 15

More information

Chapter 6. Order Statistics and Quantiles. 6.1 Extreme Order Statistics

Chapter 6. Order Statistics and Quantiles. 6.1 Extreme Order Statistics Chapter 6 Order Statistics and Quantiles 61 Extreme Order Statistics Suppose we have a finite sample X 1,, X n Conditional on this sample, we define the values X 1),, X n) to be a permutation of X 1,,

More information

Progress In Electromagnetics Research, PIER 102, 31 48, 2010

Progress In Electromagnetics Research, PIER 102, 31 48, 2010 Progress In Electromagnetics Research, PIER 102, 31 48, 2010 ACCURATE PARAMETER ESTIMATION FOR WAVE EQUATION F. K. W. Chan, H. C. So, S.-C. Chan, W. H. Lau and C. F. Chan Department of Electronic Engineering

More information

NEW ESTIMATORS FOR PARALLEL STEADY-STATE SIMULATIONS

NEW ESTIMATORS FOR PARALLEL STEADY-STATE SIMULATIONS roceedings of the 2009 Winter Simulation Conference M. D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin, and R. G. Ingalls, eds. NEW ESTIMATORS FOR ARALLEL STEADY-STATE SIMULATIONS Ming-hua Hsieh Department

More information

PARAMETER CONVERGENCE FOR EM AND MM ALGORITHMS

PARAMETER CONVERGENCE FOR EM AND MM ALGORITHMS Statistica Sinica 15(2005), 831-840 PARAMETER CONVERGENCE FOR EM AND MM ALGORITHMS Florin Vaida University of California at San Diego Abstract: It is well known that the likelihood sequence of the EM algorithm

More information

High-dimensional asymptotic expansions for the distributions of canonical correlations

High-dimensional asymptotic expansions for the distributions of canonical correlations Journal of Multivariate Analysis 100 2009) 231 242 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva High-dimensional asymptotic

More information

Beyond stochastic gradient descent for large-scale machine learning

Beyond stochastic gradient descent for large-scale machine learning Beyond stochastic gradient descent for large-scale machine learning Francis Bach INRIA - Ecole Normale Supérieure, Paris, France Joint work with Eric Moulines - October 2014 Big data revolution? A new

More information

Nonlinear time series

Nonlinear time series Based on the book by Fan/Yao: Nonlinear Time Series Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna October 27, 2009 Outline Characteristics of

More information

Minimax design criterion for fractional factorial designs

Minimax design criterion for fractional factorial designs Ann Inst Stat Math 205 67:673 685 DOI 0.007/s0463-04-0470-0 Minimax design criterion for fractional factorial designs Yue Yin Julie Zhou Received: 2 November 203 / Revised: 5 March 204 / Published online:

More information

Parametric Inference Maximum Likelihood Inference Exponential Families Expectation Maximization (EM) Bayesian Inference Statistical Decison Theory

Parametric Inference Maximum Likelihood Inference Exponential Families Expectation Maximization (EM) Bayesian Inference Statistical Decison Theory Statistical Inference Parametric Inference Maximum Likelihood Inference Exponential Families Expectation Maximization (EM) Bayesian Inference Statistical Decison Theory IP, José Bioucas Dias, IST, 2007

More information