On Least Absolute Deviation Estimators For One Dimensional Chirp Model
Ananya Lahiri, Debasis Kundu & Amit Mitra

Abstract

It is well known that least absolute deviation (LAD) estimators are more robust than least squares estimators, particularly in the presence of heavy tailed errors. We consider the LAD estimators of the unknown parameters of the one dimensional chirp signal model under an independent and identically distributed error structure. The proposed estimators are strongly consistent, and it is observed that the LAD estimators are asymptotically normally distributed. We perform some simulation studies to verify the asymptotic theory for small sample sizes, and the performances are quite satisfactory.

Key Words and Phrases: Chirp signals; least absolute deviation estimators; strong consistency; asymptotic distribution.

Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, Pin 208016, India. Corresponding author.
1 Introduction

Let us consider the following chirp signal model:

y(n) = A^0 cos(α^0 n + β^0 n^2) + B^0 sin(α^0 n + β^0 n^2) + X(n);   n = 1, ..., N.   (1)

Here y(n) is the real valued signal observed at n = 1, ..., N; A^0 and B^0 are amplitudes, and α^0 and β^0 are the frequency and the frequency rate, respectively. The additive error {X(n)} is a sequence of independent and identically distributed (i.i.d.) random variables with mean zero and finite second moment; the explicit assumptions on the X(n)s will be provided later.

In the signal processing literature, chirp signal models are used to detect an object with respect to a fixed receiver. Such models are typically one dimensional chirp models as described in (1), where the dimension is usually time. In this model the frequency varies with time in a non-linear fashion, like a quadratic function, and it is this property that has been exploited for measuring the distance of an object from a fixed receiver. Such models are used in various areas of science and engineering, for example in sonar, radar and communications systems; oceanography and geology are some other areas where this model has been used quite extensively. Extensive work has been done on model (1) or its variations by several authors; see for example Abatzoglou (1986), Kumaresan and Verma (1987), Djuric and Kay (1990), Gini, Montanari and Verrazzani (2000), Saha and Kay (2002), Nandi and Kundu (2004), Kundu and Nandi (2008) and the references cited therein.

Nandi and Kundu (2004) first established the consistency and asymptotic normality of the least squares estimators (LSEs) of the one dimensional (1-D) chirp signal model for i.i.d. errors. The same authors, see Kundu and Nandi (2008), extended the results to the case when the X(n)s are obtained from a linear stationary process. But there has been no discussion of a method like least absolute deviation (LAD) estimation, which is well known to be more robust than least squares, particularly in the presence of outliers.
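Model (1) is easy to generate synthetically, which is useful both for intuition and for the simulation experiments reported later. The following is a minimal sketch; the function name and the parameter values used at the bottom are illustrative assumptions, not the paper's design points.

```python
import numpy as np

def simulate_chirp(N, A, B, alpha, beta, sigma, rng):
    """Draw y(1), ..., y(N) from model (1) with i.i.d. N(0, sigma^2) errors."""
    n = np.arange(1, N + 1)
    phase = alpha * n + beta * n ** 2      # phase grows quadratically in n
    signal = A * np.cos(phase) + B * np.sin(phase)
    return signal + sigma * rng.standard_normal(N)

# illustrative parameter values only
y = simulate_chirp(100, 1.0, 1.0, 1.75, 0.05, 1.0, np.random.default_rng(0))
```

Setting sigma = 0 recovers the noiseless chirp, which is convenient for checking estimation code.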
Unfortunately, the model does not satisfy assumption B5 of Oberhofer (1982), and therefore the strong consistency of the LAD estimators in this case is not immediate. It may be mentioned that even the ordinary sinusoidal model does not satisfy assumption B5 of Oberhofer (1982); in that case Kim et al. (2000) provided the consistency and asymptotic normality results for the LAD estimators.

The main aim of this paper is to provide the consistency and asymptotic normality properties of the LAD estimators of the unknown parameters of model (1). It is known that the LSE of α^0 has the convergence rate O_p(N^{-3/2}), whereas the LSE of β^0 has the convergence rate O_p(N^{-5/2}); see Nandi and Kundu (2004). Here z = O_p(N^{-δ}) means that z N^δ is bounded in probability. In this paper it is observed that the LAD estimators of α^0 and β^0 have the same rates of convergence as the corresponding LSEs, and that the asymptotic efficiency of the LAD estimators relative to the LSEs is 4 f(0)^2 σ^2, where f(·) is the probability density function (PDF) of the error random variable X(n). Therefore it is clear that the LAD estimators are more efficient than the LSEs for heavy tailed error distributions. We perform some extensive simulation experiments to study the effectiveness of the LAD estimators for finite samples, and it is observed that their performances are quite satisfactory.

The rest of the paper is organized as follows. In Section 2 we provide the model assumptions and the methodology. In Section 3 the strong consistency and asymptotic normality of the LAD estimators are established. Numerical results are presented in Section 4, and finally we conclude the paper in Section 5.

2 Model Assumptions and Preliminary Results

2.1 Model Assumptions

We make the following assumptions on the error random variables.
Assumption 1: {X(n)} is a sequence of i.i.d. absolutely continuous random variables with mean zero, variance σ^2 and PDF f(·). It is further assumed that f(·) is symmetric, is differentiable in (0, ε) and (−ε, 0) for some ε > 0, and satisfies f(0) > 0.

We use the following notation: F(·) is the cumulative distribution function corresponding to f(·); the parameter vector is θ = (A, B, α, β), the true parameter vector is θ^0 = (A^0, B^0, α^0, β^0), and the parameter space is Θ = [−M, M] × [−M, M] × [0, π] × [0, π].

Assumption 2: θ^0 is an interior point of Θ.

2.2 Least Absolute Deviation Estimation Procedure

In this section we describe the LAD estimation procedure for the unknown parameters of model (1). The LAD estimators are obtained by minimizing Q(θ) with respect to θ, where

Q(θ) = Σ_{n=1}^N | y(n) − A cos(αn + βn^2) − B sin(αn + βn^2) |.   (2)

We note that

Q(A, B, α, β) ≥ Q(Â(α, β), B̂(α, β), α, β) ≥ Q(Â(α̂, β̂), B̂(α̂, β̂), α̂, β̂),

where, for fixed (α, β), Â(α, β) and B̂(α, β) are the minimizers of Q(A, B, α, β), and (α̂, β̂) = arg min Q(Â(α, β), B̂(α, β), α, β). So the LAD estimator of θ^0 is θ̂ = (Â(α̂, β̂), B̂(α̂, β̂), α̂, β̂) = (Â, B̂, α̂, β̂).
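Since Q(θ) is not differentiable, a derivative-free search is a natural way to carry out this minimization in practice. The sketch below evaluates the criterion (2) and minimizes it over all four parameters; `scipy.optimize.minimize` with `method="Nelder-Mead"` is used here as a stand-in for the downhill simplex algorithm mentioned in Section 4, and the function names are this sketch's own.

```python
import numpy as np
from scipy.optimize import minimize

def lad_criterion(theta, y):
    """Q(theta) of equation (2): the sum of absolute deviations."""
    A, B, alpha, beta = theta
    n = np.arange(1, len(y) + 1)
    phase = alpha * n + beta * n ** 2
    return np.sum(np.abs(y - A * np.cos(phase) - B * np.sin(phase)))

def lad_fit(y, theta_init):
    """Minimize Q by a derivative-free simplex search (Q is non-smooth)."""
    res = minimize(lad_criterion, theta_init, args=(y,), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-10,
                            "maxiter": 50000, "maxfev": 50000})
    return res.x
```

Note that Q is highly multimodal in (α, β), so a good starting value (for example from a coarse grid search over the frequency and frequency rate) is needed before the simplex search is run.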
3 Asymptotic Properties of the Least Absolute Deviation Estimators

3.1 Strong Consistency

We first provide the consistency result for the proposed estimators.

Theorem 1. If Assumptions 1-2 are satisfied, then (Â, B̂, α̂, β̂) is a strongly consistent estimator of (A^0, B^0, α^0, β^0).

We need the following results to prove Theorem 1.

Lemma 1. If (θ1, θ2) ∈ (0, π) × (0, π) and t = 0, 1, 2, then except for a countable number of points the following hold:

lim_{N→∞} (1/N^{t+1}) Σ_{n=1}^N n^t cos(θ1 n + θ2 n^2) = lim_{N→∞} (1/N^{t+1}) Σ_{n=1}^N n^t sin(θ1 n + θ2 n^2) = 0,   (3)

lim_{N→∞} (1/N^{t+1}) Σ_{n=1}^N n^t cos^2(θ1 n + θ2 n^2) = 1/(2(t+1)),   (4)

lim_{N→∞} (1/N^{t+1}) Σ_{n=1}^N n^t sin^2(θ1 n + θ2 n^2) = 1/(2(t+1)),   (5)

lim_{N→∞} (1/N^{t+1}) Σ_{n=1}^N n^t sin(θ1 n + θ2 n^2) cos(θ1 n + θ2 n^2) = 0.   (6)

Proof: Lemma 1 can be easily established using the results of Vinogradov (1954).

Lemma 2. If D_N(θ) = (1/N) [Q(θ) − Q(θ^0)], then

D_N(θ) − lim_{N→∞} E[D_N(θ)] → 0 a.s., uniformly in θ ∈ Θ.

Proof: Let us denote W_n(θ) = |h_n(θ) + X(n)| − |X(n)|, where h_n(θ) = A^0 cos(α^0 n + β^0 n^2) + B^0 sin(α^0 n + β^0 n^2) − A cos(αn + βn^2) − B sin(αn + βn^2). Then D_N(θ) = (1/N) Σ_{n=1}^N W_n(θ). We note that |W_n(θ)| = | |h_n(θ) + X(n)| − |X(n)| | ≤ |h_n(θ)| ≤ 4M, as the parameter
space is compact. Also, the W_n(θ)s are independent but non identically distributed random variables with E[W_n(θ)^2] < ∞ and V[W_n(θ)] < ∞; as in Oberhofer (1982), it is easily seen that these bounds do not depend on n. Since Θ is a compact set, there exist Θ_1, ..., Θ_K such that Θ = ∪_{i=1}^K Θ_i and on each Θ_i

(1/N) Σ_{n=1}^N [ sup_{θ∈Θ_i} W_n(θ) − inf_{θ∈Θ_i} W_n(θ) ] < ε/4 a.s.

Now for θ ∈ Θ_i,

D_N(θ) − lim_{N→∞} E[D_N(θ)] ≤ (1/N) Σ_{n=1}^N sup W_n(θ) − lim_{N→∞} E[D_N(θ)] = A_N(θ) + B_N(θ),

where, with the suprema taken over Θ_i,

A_N(θ) = (1/N) Σ_{n=1}^N sup W_n(θ) − (1/N) Σ_{n=1}^N E[sup W_n(θ)],

B_N(θ) = (1/N) Σ_{n=1}^N E[sup W_n(θ)] − lim_{N→∞} E[D_N(θ)].

Note that the sup W_n(θ)s are independent, non identically distributed random variables with finite means and variances, the variances being bounded by a quantity not depending on n. Applying Kolmogorov's strong law of large numbers, and choosing N_{0i} large enough, we have for N ≥ N_{0i}

A_N(θ) < ε/3 a.s., uniformly in θ ∈ Θ_i.

Next write B_N(θ) = C_N(θ) + D̃_N(θ), where

C_N(θ) = (1/N) Σ_{n=1}^N E[sup W_n(θ)] − E[ lim_{N→∞} (1/N) Σ_{n=1}^N sup W_n(θ) ],

and where the dominated convergence theorem (DCT) is used to pass the limit inside the expectation; we can then choose N_{1i}
such that C_N(θ) < ε/3 for N ≥ N_{1i}. Further, note that

D̃_N(θ) = E[ lim (1/N) Σ_{n=1}^N sup W_n(θ) ] − lim E[D_N(θ)]
        ≤ E[ lim (1/N) Σ_{n=1}^N ( sup W_n(θ) − inf W_n(θ) ) ] ≤ ε/4.

Combining the three bounds, D_N(θ) − lim_{N→∞} E[D_N(θ)] ≤ ε a.s.; the reverse inequality follows along the same lines, and hence

D_N(θ) − lim_{N→∞} E[D_N(θ)] → 0 a.s., uniformly in θ ∈ Θ.

Lemma 3. The global minimum of lim_{N→∞} E[D_N(θ)] is attained at θ^0.

Proof: At θ^0 the value of lim_{N→∞} E[D_N(θ)] is zero, so it is enough to show that lim_{N→∞} E[D_N(θ)] > 0 for θ ≠ θ^0. To achieve this we verify the relevant assumptions of Oberhofer (1982), which for convenience we reproduce as A1, A2 and A3 below.

A1: For every closed set Θ_0 not containing θ^0 there exist numbers c > 0, d > 0 and N_0 such that for all θ ∈ Θ_0 and all N ≥ N_0,

#{n : n ≤ N, |h_n(θ)| ≥ c} / N ≥ d > 0.

A2: For every c > 0 there exists a real number d > 0 such that for all n,

min[ F_n(c) − 1/2, 1/2 − F_n(−c) ] ≥ d > 0.

A3: There exist e > 0 and N_0 such that for all N ≥ N_0,

Q_N = inf_{Θ_0} (1/N) Σ_{n=1}^N |h_n(θ)| min[ F_n(c) − 1/2, 1/2 − F_n(−c) ] ≥ e > 0.

Lemma 4 of Oberhofer (1982) states that A3 is fulfilled if A1 and A2 hold. Note that Lemma 1 of Oberhofer (1982) gives E[D_N(θ)] ≥ Q_N, so it is enough to show lim E[D_N(θ)] ≥ lim Q_N > 0, and the condition lim Q_N > 0 is exactly A3. Using Lemma 4 of Oberhofer (1982), instead of A3 we verify A1 and A2. If f(0) > 0
then A2 is automatically satisfied. It remains to show that A1 is satisfied in our case. If there exists c > 0 such that lim inf_{N→∞} inf_{Θ_0} (1/N) Σ_{n=1}^N |h_n(θ)| ≥ c > 0, then A1 will be satisfied, since |h_n(θ)| ≤ 4M. Let us consider

Θ_0 = S_c = {θ : |θ − θ^0| ≥ 3c > 0} ⊆ S_c^A ∪ S_c^B ∪ S_c^{(α,β)},   (7)

where

S_c^A = {θ : |A − A^0| ≥ c} = {θ : |A − A^0| ≥ c, (α, β) = (α^0, β^0)} ∪ {θ : |A − A^0| ≥ c, (α, β) ≠ (α^0, β^0)},

S_c^B = {θ : |B − B^0| ≥ c} = {θ : |B − B^0| ≥ c, (α, β) = (α^0, β^0)} ∪ {θ : |B − B^0| ≥ c, (α, β) ≠ (α^0, β^0)},

S_c^{(α,β)} = {θ : |(α, β) − (α^0, β^0)| ≥ c}.

First consider the set {θ : |A − A^0| ≥ c, (α, β) = (α^0, β^0)}.

Case 1: If B − B^0 = 0, then h_n(θ) = (A^0 − A) cos(α^0 n + β^0 n^2), and using |cos x| ≥ cos^2 x together with Lemma 1,

lim inf (1/N) Σ_{n=1}^N |h_n(θ)| ≥ |A − A^0| lim (1/N) Σ_{n=1}^N cos^2(α^0 n + β^0 n^2) = |A − A^0|/2 ≥ c/2 > 0.

Case 2: If B − B^0 ≠ 0, then

h_n(θ) = (A^0 − A) cos(α^0 n + β^0 n^2) + (B^0 − B) sin(α^0 n + β^0 n^2)
       = r cos(ω) cos(α^0 n + β^0 n^2) + r sin(ω) sin(α^0 n + β^0 n^2)
       = r cos(α^0 n + β^0 n^2 − ω)

for some r > 0 and phase ω.
So,

lim inf (1/N) Σ_{n=1}^N |h_n(θ)| ≥ r lim (1/N) Σ_{n=1}^N cos^2(α^0 n + β^0 n^2 − ω) = r/2 > 0.

Next, on the set {θ : |A − A^0| ≥ c, (α, β) ≠ (α^0, β^0)},

h_n(θ) = A^0 cos(α^0 n + β^0 n^2) + B^0 sin(α^0 n + β^0 n^2) − A cos(αn + βn^2) − B sin(αn + βn^2)
       = r_0 cos(α^0 n + β^0 n^2 − ω_0) − r cos(αn + βn^2 − ω)

for some r, r_0 ≥ 0 (not both zero) and phases ω, ω_0. We recall that |h_n(θ)| ≤ 4M, so that |h_n(θ)| ≥ h_n^2(θ)/(4M). Then, since the cross term vanishes in the limit by Lemma 1 when (α, β) ≠ (α^0, β^0),

lim inf (1/N) Σ_{n=1}^N |h_n(θ)| ≥ (1/(4M)) lim (1/N) Σ_{n=1}^N [ r_0 cos(α^0 n + β^0 n^2 − ω_0) − r cos(αn + βn^2 − ω) ]^2 = (r_0^2 + r^2)/(8M) > 0.

Similarly, on the other sets, lim inf (1/N) Σ_{n=1}^N |h_n(θ)| > 0. So we get lim E[D_N(θ)] > 0 for θ ≠ θ^0.

Proof of Theorem 1: To prove the strong consistency of the LAD estimators, first observe that the minimizer of Q(θ) is the same as the minimizer of D_N(θ) = (1/N)[Q(θ) − Q(θ^0)], so we develop the result based on the minimizer of D_N(θ) instead of Q(θ). Note that

Q(θ) = Σ_{n=1}^N | y(n) − A cos(αn + βn^2) − B sin(αn + βn^2) | = Σ_{n=1}^N | h_n(θ) + X(n) |,   (8)
where

h_n(θ) = A^0 cos(α^0 n + β^0 n^2) + B^0 sin(α^0 n + β^0 n^2) − A cos(αn + βn^2) − B sin(αn + βn^2),

and note that |h_n(θ)| ≤ 4M for θ ∈ Θ and Q(θ^0) = Σ_{n=1}^N |X(n)|. In Lemma 2 we have shown that

D_N(θ) − lim E[D_N(θ)] → 0 a.s., uniformly in θ ∈ Θ,

and in Lemma 3 we have shown that θ^0 is the global minimizer of lim E[D_N(θ)]. Therefore, by a lemma of Jennrich (1969) or of White (1980), we can conclude that the minimizer of D_N(θ) is a strongly consistent estimator of θ^0.

3.2 Asymptotic Normality

We now show that the estimators obtained above have the following asymptotic normality property. Let

D = diag( 1/N^{1/2}, 1/N^{1/2}, 1/N^{3/2}, 1/N^{5/2} ).

Theorem 2. If Assumptions 1-2 are satisfied, then

(θ̂ − θ^0) D^{-1} →_d N_4( 0, (1/(4 f(0)^2)) Σ^{-1} ),   (9)

where →_d means convergence in distribution,

Σ = [ 1/2      0       B^0/4    B^0/6
      0        1/2    −A^0/4   −A^0/6
      B^0/4   −A^0/4   c^0/6    c^0/8
      B^0/6   −A^0/6   c^0/8    c^0/10 ],   with c^0 = (A^0)^2 + (B^0)^2,

and explicitly

Σ^{-1} = (2/c^0) [ (A^0)^2 + 9(B^0)^2   −8A^0B^0               −36B^0    30B^0
                   −8A^0B^0              9(A^0)^2 + (B^0)^2      36A^0   −30A^0
                   −36B^0                36A^0                   192     −180
                    30B^0               −30A^0                  −180      180 ].   (10)

Proof: Recall that Q(θ) is not a differentiable function. To find the asymptotic distribution of θ̂, we approximate Q(θ) by a function Q̄(θ) with the nice property of differentiability. For that purpose we approximate |x| near zero by a smooth function ρ_N(x) such that lim_{N→∞} ρ_N(x) = |x|. Let us consider the interval near
zero, namely (−1/γ_N, 1/γ_N), where γ_N is increasing in N. We approximate |x| separately on (−1/γ_N, 0) and (0, 1/γ_N) by polynomials. On each of these intervals the degree of the polynomial has to be at least 3 to make the approximating function twice continuously differentiable. Indeed, if the degree were less than 3, say P(x) = Ax^2 + Bx + C, then P''(1/γ_N) = 2A would have to match the second derivative of |x| at the boundary point 1/γ_N, which is zero; A = 0 would reduce the degree further, and otherwise P''(x) would have a jump discontinuity at 1/γ_N. So let the approximating polynomial on (0, 1/γ_N) be P(x) = Ax^3 + Bx^2 + Cx + D. As |x| is symmetric about zero, the approximating polynomial on (−1/γ_N, 0) is P(−x). Writing γ = γ_N for brevity, we match the function value and its derivatives at the joining points:

P(1/γ) = 1/γ gives A/γ^3 + B/γ^2 + C/γ + D = 1/γ,   (11)

P'(1/γ) = 1 gives 3A/γ^2 + 2B/γ + C = 1,   (12)

P''(1/γ) = 0 gives 6A/γ + 2B = 0,   (13)

and matching P'(0) from both parts of the polynomial gives

C = 0.   (14)

Solving the previous four equations we get the suitable cubic spline

ρ_N(x) = [ −(γ^2/3) x^3 + γ x^2 + 1/(3γ) ] I(0 < x ≤ 1/γ) + x I(x > 1/γ),   ρ_N(−x) = ρ_N(x),

which is symmetric and twice continuously differentiable. Here γ_N is an increasing function of N satisfying some extra conditions which we will need later, namely N^{1/2} = o(γ_N), γ_N = o(N) and Σ_{N=1}^∞ 1/γ_N^2 < ∞; for example γ_N = N^{3/4} satisfies all three.
Having obtained the nice function ρ_N(x), we now define

Q̄(θ) = Σ_{n=1}^N ρ_N( h_n(θ) + X(n) ),   (15)

and note that Q̄(θ^0) = Σ_{n=1}^N ρ_N(X(n)). Let θ̃ denote the minimizer of Q̄(θ). We now state the following two results (Lemma 4 and Lemma 5), which, when combined, give the required asymptotic normality result.

Lemma 4. If Assumptions 1-2 are satisfied, then (θ̂ − θ̃) D^{-1} →_P 0, where →_P means convergence in probability.

Lemma 5. If Assumptions 1-2 are satisfied, then θ̃, the minimizer of Q̄(θ), has the asymptotic distribution

(θ̃ − θ^0) D^{-1} →_d N_4( 0, (1/(4 f(0)^2)) Σ^{-1} ).

To prove Lemma 4 and Lemma 5 we need some more lemmas.

Lemma 6. sup_{θ∈Θ} |Q̄(θ) − Q(θ)| = o_P(1) and sup_{θ∈Θ} (1/N) |Q̄(θ) − Q(θ)| → 0 a.s., where o_P(1) means convergence to zero in probability.

Proof: To bound |Q̄(θ) − Q(θ)| we write the function ρ_N(x) − |x| explicitly:

ρ_N(x) − |x| = [ −(γ^2/3) x^3 + γ x^2 − x + 1/(3γ) ] I(0 < x ≤ 1/γ) + [ (γ^2/3) x^3 + γ x^2 + x + 1/(3γ) ] I(−1/γ ≤ x ≤ 0),
and we note that |ρ_N(x) − |x|| ≤ C/γ for some constant C > 0, the difference being nonzero only for |x| ≤ 1/γ. Now, by Markov's inequality,

P( |Q̄(θ) − Q(θ)| > ε ) ≤ (1/ε) E|Q̄(θ) − Q(θ)|
  ≤ (C/(εγ)) Σ_{n=1}^N E[ I( |h_n(θ) + X(n)| ≤ 1/γ ) ]
  = (C/(εγ)) Σ_{n=1}^N P( −h_n(θ) − 1/γ ≤ X(n) ≤ −h_n(θ) + 1/γ )
  = (C/(εγ)) Σ_{n=1}^N [ F(−h_n(θ) + 1/γ) − F(−h_n(θ) − 1/γ) ]
  ≤ (C_1/(εγ^2)) Σ_{n=1}^N f(ξ_n)   (using the mean value theorem)
  ≤ C_2 N / (ε γ^2) → 0 as N → ∞,

since N^{1/2} = o(γ_N) implies N/γ_N^2 → 0. So Q̄(θ) − Q(θ) = o_P(1), and hence sup_{θ∈Θ} |Q̄(θ) − Q(θ)| = o_P(1), as Θ is compact. Also,

P( (1/N)|Q̄(θ) − Q(θ)| > ε ) ≤ C_2/(ε γ_N^2)   and   Σ_{N=1}^∞ 1/γ_N^2 < ∞

imply, by the Borel-Cantelli lemma, that (1/N)|Q̄(θ) − Q(θ)| → 0 a.s., and hence sup_{θ∈Θ} (1/N)|Q̄(θ) − Q(θ)| → 0 a.s.

Lemma 7. θ̃, the minimizer of Q̄(θ), is a strongly consistent estimator of θ^0.

Proof: Let W̄_n(θ) = ρ_N(h_n(θ) + X(n)) − ρ_N(X(n)) and D̄_N(θ) = (1/N)[Q̄(θ) − Q̄(θ^0)] = (1/N) Σ_{n=1}^N W̄_n(θ). As before, θ̃ is also the minimizer of D̄_N(θ). Proceeding with exactly the same technique as that used for proving the strong consistency of θ̂, we finally get

D̄_N(θ) − lim E[D̄_N(θ)] → 0 a.s., uniformly in θ ∈ Θ.

Now at θ^0 the value
of lim E[D̄_N(θ)] is zero, and for θ ≠ θ^0,

lim E[D̄_N(θ)] = lim E[D̄_N(θ) − D_N(θ)] + lim E[D_N(θ)]
             = lim E[(1/N)(Q̄(θ) − Q(θ))] − lim E[(1/N)(Q̄(θ^0) − Q(θ^0))] + lim E[D_N(θ)].

The first two terms converge to zero by Lemma 6, so using Lemma 3 we get lim E[D̄_N(θ)] > 0 for θ ≠ θ^0. Hence θ̃ is a strongly consistent estimator of θ^0.

Let us denote by Q̄'(θ) the 1 × 4 first derivative vector and by Q̄''(θ) the 4 × 4 second derivative matrix of Q̄(θ). To obtain explicit expressions for Q̄'(θ) and Q̄''(θ), we write the functions ρ'_N(x) and ρ''_N(x) explicitly:

ρ'_N(x) = [ −γ^2 x^2 + 2γ x ] I(0 < x ≤ 1/γ) + I(x > 1/γ) + [ γ^2 x^2 + 2γ x ] I(−1/γ ≤ x ≤ 0) − I(x < −1/γ),

ρ''_N(x) = [ −2γ^2 x + 2γ ] I(0 < x ≤ 1/γ) + [ 2γ^2 x + 2γ ] I(−1/γ ≤ x ≤ 0).

Lemma 8. D Q̄''(θ^0) D converges in probability to 2 f(0) Σ, which is a positive definite matrix.

Proof: First note that Q̄''(θ^0) depends on N.

Step 1: Let us calculate the quantity E[ρ''_N(X(n))]. We want to show that

E[ ρ''_N(X(n)) ] = 2 f(0) + o(1).

Recall that the X(n)s are i.i.d. with symmetric density function f and f(0) < ∞. Now f is differentiable in (0, 1/γ_N) and (−1/γ_N, 0) for sufficiently large N.
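The spline ρ_N and its second derivative ρ''_N are easy to code, and the claim of Step 1 can be checked by Monte Carlo. The sketch below assumes standard normal errors, so that f(0) = 1/√(2π) and E[ρ''_N(X)] should be close to 2/√(2π); the function names are this sketch's own.

```python
import numpy as np

def rho(x, g):
    """Cubic spline rho_N with knot at 1/g (g = gamma_N); equals |x| outside."""
    a = np.abs(np.asarray(x, dtype=float))
    inner = -(g ** 2 / 3) * a ** 3 + g * a ** 2 + 1 / (3 * g)
    return np.where(a > 1 / g, a, inner)

def rho2(x, g):
    """Second derivative of rho_N: 2g - 2 g^2 |x| on [-1/g, 1/g], zero outside."""
    a = np.abs(np.asarray(x, dtype=float))
    return np.where(a > 1 / g, 0.0, 2 * g - 2 * g ** 2 * a)

# Monte Carlo check of Step 1 with X ~ N(0, 1), where 2 f(0) = 2 / sqrt(2 pi)
rng = np.random.default_rng(2)
mc = rho2(rng.standard_normal(400000), 20.0).mean()
```

A useful by-product of the construction is the uniform bound 0 ≤ ρ_N(x) − |x| ≤ 1/(3γ_N), with the maximum attained at x = 0, which is exactly the bound used in the proof of Lemma 6.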
Note that in that case f' is bounded in (0, 1/γ), say by M'. By the symmetry of f,

E[ρ''_N(X(n))] = ∫_{−1/γ}^{0} [2γ^2 x + 2γ] f(x) dx + ∫_{0}^{1/γ} [−2γ^2 x + 2γ] f(x) dx
             = 2 ∫_{0}^{1/γ} [−2γ^2 x + 2γ] f(x) dx.

Integration by parts, with v(x) = 2γx − γ^2 x^2 so that v(0) = 0 and v(1/γ) = 1, gives that this is equal to

2 f(1/γ) − 2 ∫_{0}^{1/γ} f'(x) [2γ x − γ^2 x^2] dx = 2 f(0) + 2[ f(1/γ) − f(0) ] − 2R,

where

|R| ≤ M' ∫_{0}^{1/γ} [2γ x − γ^2 x^2] dx = M' [ 1/γ − 1/(3γ) ] = 2M'/(3γ) → 0,

and f(1/γ) − f(0) → 0 as N → ∞. This proves Step 1.

Step 2: Next we want to show that V[ (1/N) Σ_{n=1}^N ρ''_N(X(n)) ] = o(1); using Step 1, this is equivalent to

E[ ( (1/N) Σ_{n=1}^N ρ''_N(X(n)) )^2 ] = 4 f(0)^2 + o(1).

For the variance calculation let us write the expression for [ρ''_N(x)]^2:

[ρ''_N(x)]^2 = [ 4γ^4 x^2 − 8γ^3 x + 4γ^2 ] I(0 < x ≤ 1/γ) + [ 4γ^4 x^2 + 8γ^3 x + 4γ^2 ] I(−1/γ ≤ x ≤ 0),
which is an even function. Then

E[ (ρ''_N(X(n)))^2 ] = 2 ∫_{0}^{1/γ} [ 4γ^4 x^2 − 8γ^3 x + 4γ^2 ] f(x) dx ≤ C_0 γ

for some constant C_0 > 0. Therefore, using the above and the independence of the X(n)s,

V[ (1/N) Σ_{n=1}^N ρ''_N(X(n)) ] ≤ C_0 γ_N / N → 0,

since γ_N = o(N); equivalently, E[ ( (1/N) Σ ρ''_N(X(n)) )^2 ] = 4 f(0)^2 + o(1).

Step 3: Let us consider the (1,1)-th element of D Q̄''(θ^0) D, which is

(1/N) Σ_{n=1}^N ρ''_N(X(n)) cos^2(α^0 n + β^0 n^2).

Using Step 1 and Step 2, Lemma 1 and Chebyshev's inequality, it converges in probability to 2 f(0) × (1/2) = f(0). Treating the other elements similarly, we get

D Q̄''(θ^0) D = 2 f(0) Σ + o_P(1),

with Σ as defined in Theorem 2. Then D Q̄''(θ^0) D converges in probability to 2 f(0) Σ, which is a positive definite matrix.

Lemma 9. If θ̄ is a function of X(1), ..., X(N) such that θ̄ → θ^0 a.s. as N → ∞, then

D [ Q̄''(θ̄) − Q̄''(θ^0) ] D → 0 a.s.

Proof: To analyze D [Q̄''(θ̄) − Q̄''(θ^0)] D, consider terms of the form ρ''_N( h_n(θ̄) + X(n) ) − ρ''_N( X(n) ). Now as N → ∞, θ̄ → θ^0 a.s., which implies h_n(θ̄) → 0 a.s. for each n, as h_n(θ) is a continuous function of θ with h_n(θ^0) = 0; in particular, for fixed n,

lim_{k→∞} P( ∩_{N=k}^∞ { |h_n(θ̄)| < ε_1 } ) = 1.
Now h_n(θ̄) + X(n) → X(n) a.s. for each n, and ρ''_N(·) is a continuous function, so given ε > 0 there exists ε_1 > 0 such that, for fixed n,

|h_n(θ̄)| < ε_1 implies | ρ''_N( h_n(θ̄) + X(n) ) − ρ''_N( X(n) ) | < ε,

which implies ρ''_N( h_n(θ̄) + X(n) ) − ρ''_N( X(n) ) → 0 a.s. Using this fact we get D [Q̄''(θ̄) − Q̄''(θ^0)] D → 0 a.s.

Proof of Lemma 4:

Step 1: By the definition of θ̃, Q̄(θ̂) − Q̄(θ̃) ≥ 0. Adding to both sides Q(θ̃) − Q(θ̂), which is again ≥ 0 by the definition of θ̂, we get

[ Q̄(θ̂) − Q(θ̂) ] + [ Q(θ̃) − Q̄(θ̃) ] ≥ Q̄(θ̂) − Q̄(θ̃) ≥ 0.   (16)

By Lemma 6 the left hand side of (16) is o_P(1), hence so is the right hand side, i.e.

Q̄(θ̂) − Q̄(θ̃) = o_P(1).

Step 2: Now by a Taylor series expansion of Q̄ around θ̃,

Q̄(θ̂) − Q̄(θ̃) = (θ̂ − θ̃) Q̄'(θ̃)^T + (1/2) (θ̂ − θ̃) Q̄''(θ*) (θ̂ − θ̃)^T.

By the definition of θ̃, Q̄'(θ̃) = 0, so

Q̄(θ̂) − Q̄(θ̃) = (1/2) (θ̂ − θ̃) D^{-1} [ D Q̄''(θ*) D ] D^{-1} (θ̂ − θ̃)^T,   (17)

where θ* is a point on the line joining θ̂ and θ̃. We note that θ̂ → θ^0 a.s. and θ̃ → θ^0 a.s., so θ* → θ^0 a.s. Then, using Lemma 9 and Lemma 8, D Q̄''(θ*) D converges in probability to a positive definite matrix, whose minimum eigenvalue, say λ_1, is strictly positive. Using Step 1, the left hand side of (17) is o_P(1), so

|| (θ̂ − θ̃) D^{-1} ||^2 ≤ 2 o_P(1) / λ_1,

which implies (θ̂ − θ̃) D^{-1} →_P 0.

Lemma 10. Q̄'(θ^0) D →_d N_4(0, Σ).
Proof of Lemma 10: Writing φ_n = α^0 n + β^0 n^2, the components of Q̄'(θ^0) D are

− (1/N^{1/2}) Σ_{n=1}^N cos(φ_n) ρ'_N(X(n)),
− (1/N^{1/2}) Σ_{n=1}^N sin(φ_n) ρ'_N(X(n)),
(1/N^{3/2}) Σ_{n=1}^N n [ A^0 sin(φ_n) − B^0 cos(φ_n) ] ρ'_N(X(n)),
(1/N^{5/2}) Σ_{n=1}^N n^2 [ A^0 sin(φ_n) − B^0 cos(φ_n) ] ρ'_N(X(n)).

To investigate Q̄'(θ^0) D, let us concentrate on ρ'_N(X(n)). We note that E[ρ'_N(X(n))] = 0, as ρ'_N(x) is an odd function and X(n) has a symmetric density f around zero. This gives E[ Q̄'(θ^0) D ] = 0. To calculate V[ρ'_N(X(n))], consider the function [ρ'_N(x)]^2:

[ρ'_N(x)]^2 = [ γ^4 x^4 − 4γ^3 x^3 + 4γ^2 x^2 ] I(0 < x ≤ 1/γ) + I(x > 1/γ)
            + [ γ^4 x^4 + 4γ^3 x^3 + 4γ^2 x^2 ] I(−1/γ ≤ x ≤ 0) + I(x < −1/γ),

so that

V[ρ'_N(X(n))] = E[ (ρ'_N(X(n)))^2 ] = 1 + R_0,   where |R_0| ≤ C/γ → 0 as N → ∞,

for some constant C. Using the above variance, the elements of Q̄'(θ^0) D satisfy the conditions of the central limit theorem of Fuller (1996). To find the asymptotic variance of (1/N^{1/2}) Σ_{n=1}^N cos(φ_n) ρ'_N(X(n)) we need to calculate, for h = 0, ±1, ±2, ...,

lim_{N→∞} (1/N) Σ_{n=1}^N cos(α^0 n + β^0 n^2) cos(α^0 (n+h) + β^0 (n+h)^2).

Using Lemma 1, and after some calculation, it can be shown that this limit equals 1/2 for h = 0
and it is 0 otherwise. Therefore, using the central limit theorem for linear processes, see Fuller (1996), the asymptotic variance of the first component turns out to be 1/2. To find the variance of (1/N^{1/2}) Σ_{n=1}^N sin(φ_n) ρ'_N(X(n)) we need the above limits with both cosine terms replaced by sine terms, and we obtain a similar result using Lemma 1. Also, for all h and for t = 0, 1, 2, we get

lim_{N→∞} (1/N^{t+1}) Σ_{n=1}^N n^t cos(α^0 n + β^0 n^2) sin(α^0 (n+h) + β^0 (n+h)^2) = 0,

and the variance-covariances of the other terms can be obtained along the same lines using these limits. Finally we get

Q̄'(θ^0) D →_d N_4(0, Σ).

Proof of Lemma 5: Using a multivariate Taylor series expansion we have

Q̄'(θ̃) − Q̄'(θ^0) = (θ̃ − θ^0) Q̄''(θ̄),   (18)

where θ̄ is a point on the line joining θ̃ and θ^0. Since Q̄'(θ̃) = 0, (18) can be written as

− Q̄'(θ^0) D = (θ̃ − θ^0) D^{-1} [ D Q̄''(θ̄) D ].   (19)

Note that

D Q̄''(θ̄) D = D [ Q̄''(θ̄) − Q̄''(θ^0) ] D + D Q̄''(θ^0) D.

Since θ̃ → θ^0 a.s., so does θ̄, and using Lemma 9 and Lemma 8 we get, in probability,

D Q̄''(θ̄) D → lim D Q̄''(θ^0) D = 2 f(0) Σ.

From (19),

(θ̃ − θ^0) D^{-1} = − Q̄'(θ^0) D [ D Q̄''(θ̄) D ]^{-1},   (20)

and by Lemma 10, Q̄'(θ^0) D →_d N_4(0, Σ). Combining these two observations in (20), we get

(θ̃ − θ^0) D^{-1} →_d N_4( 0, (1/(2f(0))) Σ^{-1} Σ Σ^{-1} (1/(2f(0))) ) = N_4( 0, (1/(4 f(0)^2)) Σ^{-1} ),

which proves Lemma 5. Dividing (20) by N^{1/2} also gives

(θ̃ − θ^0) (N^{1/2} D)^{-1} → 0 in probability.   (21)
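As a cross-check of the block-inversion algebra behind Theorem 2, the matrix Σ and the stated closed form of Σ^{-1} can be tabulated and multiplied numerically; the sketch below does exactly that (function names are this sketch's own).

```python
import numpy as np

def sigma_matrix(A, B):
    """The limit matrix Sigma of Theorem 2 / Lemma 8."""
    c = A ** 2 + B ** 2
    return np.array([
        [1 / 2,  0.0,    B / 4,  B / 6],
        [0.0,    1 / 2, -A / 4, -A / 6],
        [B / 4, -A / 4,  c / 6,  c / 8],
        [B / 6, -A / 6,  c / 8,  c / 10],
    ])

def sigma_inverse(A, B):
    """Closed form of Sigma^{-1} as displayed in (10)."""
    c = A ** 2 + B ** 2
    return (2 / c) * np.array([
        [A ** 2 + 9 * B ** 2, -8 * A * B,           -36 * B,  30 * B],
        [-8 * A * B,           9 * A ** 2 + B ** 2,  36 * A, -30 * A],
        [-36 * B,              36 * A,               192.0,  -180.0],
        [ 30 * B,             -30 * A,              -180.0,   180.0],
    ])
```

For any (A, B) with A^2 + B^2 > 0 the product of the two matrices is the identity, and Σ is positive definite, consistent with Lemma 8.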
Theorem 3. If Assumptions 1-2 are satisfied, then (θ̂ − θ^0) (N^{1/2} D)^{-1} → 0 in probability.

Proof: Combining Lemma 4 and Lemma 5 we get

(θ̂ − θ^0) D^{-1} = (θ̂ − θ̃) D^{-1} + (θ̃ − θ^0) D^{-1} →_d N_4( 0, (1/(4 f(0)^2)) Σ^{-1} ),

which proves Theorem 2. Moreover, by Lemma 4, (θ̂ − θ̃)(N^{1/2} D)^{-1} → 0 in probability, and this along with (21) gives (θ̂ − θ^0)(N^{1/2} D)^{-1} → 0 in probability.

4 Numerical Results and Data Analysis

4.1 Numerical Results

In this section we perform some simulation experiments to see how the LAD estimators behave for different sample sizes. We consider the following model parameters: A = .0, B = .0, α = .75, β = .05. The X(n)s are assumed to be i.i.d. Gaussian random variables with mean 0 and variance σ^2. We have taken different sample sizes, namely n = 25, 50, 75, 100, for our simulation experiments. We compute the average estimates (MEAN), mean squared errors (MSE) and variances (VAR) over 1000 replications, and we also provide the asymptotic variances (ASYV) for comparison purposes. We further calculate the asymptotic confidence lengths (ACON) and coverage probabilities (CP). To compute the LAD estimators we use the methodology described above; numerically, the minimum has been obtained using the downhill simplex algorithm, see for example Press et al. (1996).

Some points are quite clear from the simulation experiments. It is observed that as the sample size increases the MSEs, variances and biases decrease; this verifies the consistency properties of the LAD estimators. The asymptotic variances of the LAD estimators and the MSEs of the different estimators obtained over 1000
Table 1: The results for the LAD estimators are reported, when n = 25.

PARA: MEAN
MSE: (.0433) ( ) ( ) ( )
VAR: ( ) (.35586) ( ) ( 0.040)
ASYV: ( ) ( ) ( ) ( )
ACON: ( ) ( ) ( ) ( )
CP: ( ) ( ) ( ) ( )

Table 2: The results for the LAD estimators are reported, when n = 50.

PARA: MEAN
MSE: ( ) ( ) ( ) ( )
VAR: ( ) ( ) ( ) ( )
ASYV: ( ) ( ) ( ) ( )
ACON: (.7344) (.4938) ( ) ( )
CP: ( ) ( ) ( ) ( )

replications are quite close to each other, particularly for large sample sizes. So the performances of the LAD estimators are quite satisfactory.

5 Conclusion

In this paper we consider the least absolute deviation estimators of the parameters of a one dimensional chirp signal model. It is observed that the LAD estimators are strongly consistent, and we have obtained the joint asymptotic normal distribution of the estimators. It is also observed that the LAD estimators are more efficient than the LSEs in the presence of additive heavy tailed errors.

Acknowledgement

Part of this work of the first author has been financially supported by the Council of Scientific and Industrial Research (CSIR), and part of this work of the second and
Table 3: The results for the LAD estimators are reported, when n = 75.

PARA: MEAN
MSE: ( ) ( 0.05) ( ) ( )
VAR: ( 0.649) ( 0.05) ( ) ( )
ASYV: ( ) ( ) ( ) ( )
ACON: (.77607) ( ) ( ) ( )
CP: ( ) ( ) ( ) ( )

Table 4: The results for the LAD estimators are reported, when n = 100.

PARA: MEAN
MSE: ( ) ( ) ( ) ( )
VAR: ( ) ( ) ( ) ( )
ASYV: ( ) ( ) ( ) ( )
ACON: (.55703) (.5475) ( ) ( )
CP: ( ) ( ) ( ) ( )

third authors has been supported by a grant from the Department of Science and Technology, Government of India. The authors would like to thank the referees for their constructive comments, which helped to improve an earlier version of the manuscript.

References

[1] Abatzoglou, T. (1986), "Fast maximum likelihood joint estimation of frequency and frequency rate", IEEE Transactions on Aerospace and Electronic Systems.

[2] Djuric, P.M. and Kay, S.M. (1990), "Parameter estimation of chirp signals", IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 38.

[3] Fuller, W.A. (1996), Introduction to Statistical Time Series, second edition, John Wiley and Sons, New York.
[4] Gini, F., Montanari, M. and Verrazzani, L. (2000), "Estimation of chirp signals in compound Gaussian clutter: a cyclostationary approach", IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 48.

[5] Jennrich, R.I. (1969), "Asymptotic properties of non-linear least squares estimators", The Annals of Mathematical Statistics, vol. 40.

[6] Kim, T.S., Kim, H.K. and Choi, H.C. (2000), "Asymptotic properties of LAD estimators of a nonlinear time series regression model", Journal of the Korean Statistical Society, vol. 29.

[7] Kumaresan, R. and Verma, S. (1987), "On estimating the parameters of chirp signals using rank reduction techniques", Proceedings of the 21st Asilomar Conference, Pacific Grove, California.

[8] Kundu, D. and Nandi, S. (2008), "Parameter estimation of chirp signals in presence of stationary noise", Statistica Sinica, vol. 18.

[9] Nandi, S. and Kundu, D. (2004), "Asymptotic properties of the least squares estimators of the parameters of the chirp signals", Annals of the Institute of Statistical Mathematics, vol. 56.

[10] Oberhofer, W. (1982), "The consistency of nonlinear regression minimizing the L1 norm", Annals of Statistics, vol. 10.

[11] Saha, S. and Kay, S. (2002), "Maximum likelihood parameter estimation of superimposed chirps using Monte Carlo importance sampling", IEEE Transactions on Signal Processing, vol. 50.

[12] Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (1996), Numerical Recipes in Fortran 90, second edition, Cambridge University Press.

[13] Vinogradov, I.M. (1954), The Method of Trigonometrical Sums in the Theory of Numbers, Interscience; translated from the Russian, revised and annotated by K.F. Roth and Anne Davenport; reprint of the 1954 translation, Dover Publications, Inc., Mineola, NY, 2004.

[14] White, H. (1980), "Nonlinear regression on cross-section data", Econometrica, vol. 48.

[15] Weyl, H. (1916), "Über die Gleichverteilung von Zahlen mod. Eins", Mathematische Annalen, vol. 77.

[16] Wu, C.F. (1981), "Asymptotic theory of non-linear least-squares estimation", Annals of Statistics, vol. 9.
More informationMATH 31B: MIDTERM 2 REVIEW. sin 2 x = 1 cos(2x) dx = x 2 sin(2x) 4. + C = x 2. dx = x sin(2x) + C = x sin x cos x
MATH 3B: MIDTERM REVIEW JOE HUGHES. Evaluate sin x and cos x. Solution: Recall the identities cos x = + cos(x) Using these formulas gives cos(x) sin x =. Trigonometric Integrals = x sin(x) sin x = cos(x)
More informationUnbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.
Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it
More informationStatistics. Lecture 2 August 7, 2000 Frank Porter Caltech. The Fundamentals; Point Estimation. Maximum Likelihood, Least Squares and All That
Statistics Lecture 2 August 7, 2000 Frank Porter Caltech The plan for these lectures: The Fundamentals; Point Estimation Maximum Likelihood, Least Squares and All That What is a Confidence Interval? Interval
More informationECE 275B Homework # 1 Solutions Version Winter 2015
ECE 275B Homework # 1 Solutions Version Winter 2015 1. (a) Because x i are assumed to be independent realizations of a continuous random variable, it is almost surely (a.s.) 1 the case that x 1 < x 2
More informationStatistical Properties of Numerical Derivatives
Statistical Properties of Numerical Derivatives Han Hong, Aprajit Mahajan, and Denis Nekipelov Stanford University and UC Berkeley November 2010 1 / 63 Motivation Introduction Many models have objective
More informationAsymptotics of minimax stochastic programs
Asymptotics of minimax stochastic programs Alexander Shapiro Abstract. We discuss in this paper asymptotics of the sample average approximation (SAA) of the optimal value of a minimax stochastic programming
More informationLECTURE 16 GAUSS QUADRATURE In general for Newton-Cotes (equispaced interpolation points/ data points/ integration points/ nodes).
CE 025 - Lecture 6 LECTURE 6 GAUSS QUADRATURE In general for ewton-cotes (equispaced interpolation points/ data points/ integration points/ nodes). x E x S fx dx hw' o f o + w' f + + w' f + E 84 f 0 f
More informationClosest Moment Estimation under General Conditions
Closest Moment Estimation under General Conditions Chirok Han and Robert de Jong January 28, 2002 Abstract This paper considers Closest Moment (CM) estimation with a general distance function, and avoids
More informationFall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.
1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n
More informationSubmitted to the Brazilian Journal of Probability and Statistics
Submitted to the Brazilian Journal of Probability and Statistics Multivariate normal approximation of the maximum likelihood estimator via the delta method Andreas Anastasiou a and Robert E. Gaunt b a
More informationScientific Computing
2301678 Scientific Computing Chapter 2 Interpolation and Approximation Paisan Nakmahachalasint Paisan.N@chula.ac.th Chapter 2 Interpolation and Approximation p. 1/66 Contents 1. Polynomial interpolation
More information460 HOLGER DETTE AND WILLIAM J STUDDEN order to examine how a given design behaves in the model g` with respect to the D-optimality criterion one uses
Statistica Sinica 5(1995), 459-473 OPTIMAL DESIGNS FOR POLYNOMIAL REGRESSION WHEN THE DEGREE IS NOT KNOWN Holger Dette and William J Studden Technische Universitat Dresden and Purdue University Abstract:
More informationBrief Review on Estimation Theory
Brief Review on Estimation Theory K. Abed-Meraim ENST PARIS, Signal and Image Processing Dept. abed@tsi.enst.fr This presentation is essentially based on the course BASTA by E. Moulines Brief review on
More informationBayes Estimation and Prediction of the Two-Parameter Gamma Distribution
Bayes Estimation and Prediction of the Two-Parameter Gamma Distribution Biswabrata Pradhan & Debasis Kundu Abstract In this article the Bayes estimates of two-parameter gamma distribution is considered.
More informationECE 275B Homework # 1 Solutions Winter 2018
ECE 275B Homework # 1 Solutions Winter 2018 1. (a) Because x i are assumed to be independent realizations of a continuous random variable, it is almost surely (a.s.) 1 the case that x 1 < x 2 < < x n Thus,
More information5 Operations on Multiple Random Variables
EE360 Random Signal analysis Chapter 5: Operations on Multiple Random Variables 5 Operations on Multiple Random Variables Expected value of a function of r.v. s Two r.v. s: ḡ = E[g(X, Y )] = g(x, y)f X,Y
More informationA New Two Sample Type-II Progressive Censoring Scheme
A New Two Sample Type-II Progressive Censoring Scheme arxiv:609.05805v [stat.me] 9 Sep 206 Shuvashree Mondal, Debasis Kundu Abstract Progressive censoring scheme has received considerable attention in
More informationMeasure and Integration: Solutions of CW2
Measure and Integration: s of CW2 Fall 206 [G. Holzegel] December 9, 206 Problem of Sheet 5 a) Left (f n ) and (g n ) be sequences of integrable functions with f n (x) f (x) and g n (x) g (x) for almost
More informationNonparametric Modal Regression
Nonparametric Modal Regression Summary In this article, we propose a new nonparametric modal regression model, which aims to estimate the mode of the conditional density of Y given predictors X. The nonparametric
More informationBickel Rosenblatt test
University of Latvia 28.05.2011. A classical Let X 1,..., X n be i.i.d. random variables with a continuous probability density function f. Consider a simple hypothesis H 0 : f = f 0 with a significance
More informationOn the Comparison of Fisher Information of the Weibull and GE Distributions
On the Comparison of Fisher Information of the Weibull and GE Distributions Rameshwar D. Gupta Debasis Kundu Abstract In this paper we consider the Fisher information matrices of the generalized exponential
More informationGaussian Estimation under Attack Uncertainty
Gaussian Estimation under Attack Uncertainty Tara Javidi Yonatan Kaspi Himanshu Tyagi Abstract We consider the estimation of a standard Gaussian random variable under an observation attack where an adversary
More informationEstimation for generalized half logistic distribution based on records
Journal of the Korean Data & Information Science Society 202, 236, 249 257 http://dx.doi.org/0.7465/jkdi.202.23.6.249 한국데이터정보과학회지 Estimation for generalized half logistic distribution based on records
More informationAsymptotic inference for a nonstationary double ar(1) model
Asymptotic inference for a nonstationary double ar() model By SHIQING LING and DONG LI Department of Mathematics, Hong Kong University of Science and Technology, Hong Kong maling@ust.hk malidong@ust.hk
More informationAGEC 661 Note Eleven Ximing Wu. Exponential regression model: m (x, θ) = exp (xθ) for y 0
AGEC 661 ote Eleven Ximing Wu M-estimator So far we ve focused on linear models, where the estimators have a closed form solution. If the population model is nonlinear, the estimators often do not have
More informationEstimation theory. Parametric estimation. Properties of estimators. Minimum variance estimator. Cramer-Rao bound. Maximum likelihood estimators
Estimation theory Parametric estimation Properties of estimators Minimum variance estimator Cramer-Rao bound Maximum likelihood estimators Confidence intervals Bayesian estimation 1 Random Variables Let
More informationEstimation of parametric functions in Downton s bivariate exponential distribution
Estimation of parametric functions in Downton s bivariate exponential distribution George Iliopoulos Department of Mathematics University of the Aegean 83200 Karlovasi, Samos, Greece e-mail: geh@aegean.gr
More informationLIST OF PUBLICATIONS
LIST OF PUBLICATIONS Papers in referred journals [1] Estimating the ratio of smaller and larger of two uniform scale parameters, Amit Mitra, Debasis Kundu, I.D. Dhariyal and N.Misra, Journal of Statistical
More informationWorking Paper No Maximum score type estimators
Warsaw School of Economics Institute of Econometrics Department of Applied Econometrics Department of Applied Econometrics Working Papers Warsaw School of Economics Al. iepodleglosci 64 02-554 Warszawa,
More information2.4 Lecture 7: Exponential and trigonometric
154 CHAPTER. CHAPTER II.0 1 - - 1 1 -.0 - -.0 - - - - - - - - - - 1 - - - - - -.0 - Figure.9: generalized elliptical domains; figures are shown for ǫ = 1, ǫ = 0.8, eps = 0.6, ǫ = 0.4, and ǫ = 0 the case
More informationNonconcave Penalized Likelihood with A Diverging Number of Parameters
Nonconcave Penalized Likelihood with A Diverging Number of Parameters Jianqing Fan and Heng Peng Presenter: Jiale Xu March 12, 2010 Jianqing Fan and Heng Peng Presenter: JialeNonconcave Xu () Penalized
More informationA PRACTICAL WAY FOR ESTIMATING TAIL DEPENDENCE FUNCTIONS
Statistica Sinica 20 2010, 365-378 A PRACTICAL WAY FOR ESTIMATING TAIL DEPENDENCE FUNCTIONS Liang Peng Georgia Institute of Technology Abstract: Estimating tail dependence functions is important for applications
More informationASYMPTOTIC PROPERTIES OF THE LEAST SQUARES ESTIMATORS OF THE PARAMETERS OF THE CHIRP SIGNALS
Ann. Inst. Statist. Math. Vol. 56, o. 3, 529 544 (2004) (~)2004 The Institute of Statistical Mathematics ASYMPTOTIC PROPERTIES OF THE LEAST SQUARES ESTIMATORS OF THE PARAMETERS OF THE CHIRP SIGALS SWAGATA
More informationIntroduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak
Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak 1 Introduction. Random variables During the course we are interested in reasoning about considered phenomenon. In other words,
More informationE X A M. Probability Theory and Stochastic Processes Date: December 13, 2016 Duration: 4 hours. Number of pages incl.
E X A M Course code: Course name: Number of pages incl. front page: 6 MA430-G Probability Theory and Stochastic Processes Date: December 13, 2016 Duration: 4 hours Resources allowed: Notes: Pocket calculator,
More informationParametric Techniques Lecture 3
Parametric Techniques Lecture 3 Jason Corso SUNY at Buffalo 22 January 2009 J. Corso (SUNY at Buffalo) Parametric Techniques Lecture 3 22 January 2009 1 / 39 Introduction In Lecture 2, we learned how to
More informationPreliminary Examination in Numerical Analysis
Department of Applied Mathematics Preliminary Examination in Numerical Analysis August 7, 06, 0 am pm. Submit solutions to four (and no more) of the following six problems. Show all your work, and justify
More informationAn Empirical Characteristic Function Approach to Selecting a Transformation to Normality
Communications for Statistical Applications and Methods 014, Vol. 1, No. 3, 13 4 DOI: http://dx.doi.org/10.5351/csam.014.1.3.13 ISSN 87-7843 An Empirical Characteristic Function Approach to Selecting a
More informationA comparison of inverse transform and composition methods of data simulation from the Lindley distribution
Communications for Statistical Applications and Methods 2016, Vol. 23, No. 6, 517 529 http://dx.doi.org/10.5351/csam.2016.23.6.517 Print ISSN 2287-7843 / Online ISSN 2383-4757 A comparison of inverse transform
More informationSensitivity and Asymptotic Error Theory
Sensitivity and Asymptotic Error Theory H.T. Banks and Marie Davidian MA-ST 810 Fall, 2009 North Carolina State University Raleigh, NC 27695 Center for Quantitative Sciences in Biomedicine North Carolina
More informationA Minimal Uncertainty Product for One Dimensional Semiclassical Wave Packets
A Minimal Uncertainty Product for One Dimensional Semiclassical Wave Packets George A. Hagedorn Happy 60 th birthday, Mr. Fritz! Abstract. Although real, normalized Gaussian wave packets minimize the product
More informationSpring 2012 Math 541A Exam 1. X i, S 2 = 1 n. n 1. X i I(X i < c), T n =
Spring 2012 Math 541A Exam 1 1. (a) Let Z i be independent N(0, 1), i = 1, 2,, n. Are Z = 1 n n Z i and S 2 Z = 1 n 1 n (Z i Z) 2 independent? Prove your claim. (b) Let X 1, X 2,, X n be independent identically
More informationAnalysis of Gamma and Weibull Lifetime Data under a General Censoring Scheme and in the presence of Covariates
Communications in Statistics - Theory and Methods ISSN: 0361-0926 (Print) 1532-415X (Online) Journal homepage: http://www.tandfonline.com/loi/lsta20 Analysis of Gamma and Weibull Lifetime Data under a
More informationUnbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.
Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it
More informationEXACT MAXIMUM LIKELIHOOD ESTIMATION FOR NON-GAUSSIAN MOVING AVERAGES
Statistica Sinica 19 (2009), 545-560 EXACT MAXIMUM LIKELIHOOD ESTIMATION FOR NON-GAUSSIAN MOVING AVERAGES Nan-Jung Hsu and F. Jay Breidt National Tsing-Hua University and Colorado State University Abstract:
More informationStochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions
International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.
More informationExact Inference for the Two-Parameter Exponential Distribution Under Type-II Hybrid Censoring
Exact Inference for the Two-Parameter Exponential Distribution Under Type-II Hybrid Censoring A. Ganguly, S. Mitra, D. Samanta, D. Kundu,2 Abstract Epstein [9] introduced the Type-I hybrid censoring scheme
More information6.1 Variational representation of f-divergences
ECE598: Information-theoretic methods in high-dimensional statistics Spring 2016 Lecture 6: Variational representation, HCR and CR lower bounds Lecturer: Yihong Wu Scribe: Georgios Rovatsos, Feb 11, 2016
More informationSome Theoretical Properties and Parameter Estimation for the Two-Sided Length Biased Inverse Gaussian Distribution
Journal of Probability and Statistical Science 14(), 11-4, Aug 016 Some Theoretical Properties and Parameter Estimation for the Two-Sided Length Biased Inverse Gaussian Distribution Teerawat Simmachan
More informationMMSE Dimension. snr. 1 We use the following asymptotic notation: f(x) = O (g(x)) if and only
MMSE Dimension Yihong Wu Department of Electrical Engineering Princeton University Princeton, NJ 08544, USA Email: yihongwu@princeton.edu Sergio Verdú Department of Electrical Engineering Princeton University
More informationLinear Regression and Its Applications
Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start
More informationSingle Index Quantile Regression for Heteroscedastic Data
Single Index Quantile Regression for Heteroscedastic Data E. Christou M. G. Akritas Department of Statistics The Pennsylvania State University SMAC, November 6, 2015 E. Christou, M. G. Akritas (PSU) SIQR
More informationDensity estimators for the convolution of discrete and continuous random variables
Density estimators for the convolution of discrete and continuous random variables Ursula U Müller Texas A&M University Anton Schick Binghamton University Wolfgang Wefelmeyer Universität zu Köln Abstract
More informationMore Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order Restriction
Sankhyā : The Indian Journal of Statistics 2007, Volume 69, Part 4, pp. 700-716 c 2007, Indian Statistical Institute More Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order
More informationProbability and Measure
Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability
More informationAUTOMATIC CONTROL COMMUNICATION SYSTEMS LINKÖPINGS UNIVERSITET. Questions AUTOMATIC CONTROL COMMUNICATION SYSTEMS LINKÖPINGS UNIVERSITET
The Problem Identification of Linear and onlinear Dynamical Systems Theme : Curve Fitting Division of Automatic Control Linköping University Sweden Data from Gripen Questions How do the control surface
More informationLocal Polynomial Regression
VI Local Polynomial Regression (1) Global polynomial regression We observe random pairs (X 1, Y 1 ),, (X n, Y n ) where (X 1, Y 1 ),, (X n, Y n ) iid (X, Y ). We want to estimate m(x) = E(Y X = x) based
More informationNon-parametric Inference and Resampling
Non-parametric Inference and Resampling Exercises by David Wozabal (Last update. Juni 010) 1 Basic Facts about Rank and Order Statistics 1.1 10 students were asked about the amount of time they spend surfing
More informationEconomics 204 Summer/Fall 2011 Lecture 5 Friday July 29, 2011
Economics 204 Summer/Fall 2011 Lecture 5 Friday July 29, 2011 Section 2.6 (cont.) Properties of Real Functions Here we first study properties of functions from R to R, making use of the additional structure
More informationChapter 1: A Brief Review of Maximum Likelihood, GMM, and Numerical Tools. Joan Llull. Microeconometrics IDEA PhD Program
Chapter 1: A Brief Review of Maximum Likelihood, GMM, and Numerical Tools Joan Llull Microeconometrics IDEA PhD Program Maximum Likelihood Chapter 1. A Brief Review of Maximum Likelihood, GMM, and Numerical
More informationFree Probability, Sample Covariance Matrices and Stochastic Eigen-Inference
Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference Alan Edelman Department of Mathematics, Computer Science and AI Laboratories. E-mail: edelman@math.mit.edu N. Raj Rao Deparment
More informationMultivariate Distributions
IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate
More informationMATH 205C: STATIONARY PHASE LEMMA
MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)
More informationBahadur representations for bootstrap quantiles 1
Bahadur representations for bootstrap quantiles 1 Yijun Zuo Department of Statistics and Probability, Michigan State University East Lansing, MI 48824, USA zuo@msu.edu 1 Research partially supported by
More information10. Linear Models and Maximum Likelihood Estimation
10. Linear Models and Maximum Likelihood Estimation ECE 830, Spring 2017 Rebecca Willett 1 / 34 Primary Goal General problem statement: We observe y i iid pθ, θ Θ and the goal is to determine the θ that
More informationinferences on stress-strength reliability from lindley distributions
inferences on stress-strength reliability from lindley distributions D.K. Al-Mutairi, M.E. Ghitany & Debasis Kundu Abstract This paper deals with the estimation of the stress-strength parameter R = P (Y
More informationECE531 Lecture 10b: Maximum Likelihood Estimation
ECE531 Lecture 10b: Maximum Likelihood Estimation D. Richard Brown III Worcester Polytechnic Institute 05-Apr-2011 Worcester Polytechnic Institute D. Richard Brown III 05-Apr-2011 1 / 23 Introduction So
More informationEconomics 241B Review of Limit Theorems for Sequences of Random Variables
Economics 241B Review of Limit Theorems for Sequences of Random Variables Convergence in Distribution The previous de nitions of convergence focus on the outcome sequences of a random variable. Convergence
More informationCS 195-5: Machine Learning Problem Set 1
CS 95-5: Machine Learning Problem Set Douglas Lanman dlanman@brown.edu 7 September Regression Problem Show that the prediction errors y f(x; ŵ) are necessarily uncorrelated with any linear function of
More informationELEG 5633 Detection and Estimation Minimum Variance Unbiased Estimators (MVUE)
1 ELEG 5633 Detection and Estimation Minimum Variance Unbiased Estimators (MVUE) Jingxian Wu Department of Electrical Engineering University of Arkansas Outline Minimum Variance Unbiased Estimators (MVUE)
More informationSTAT 7032 Probability Spring Wlodek Bryc
STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,
More informationChapter 3. Point Estimation. 3.1 Introduction
Chapter 3 Point Estimation Let (Ω, A, P θ ), P θ P = {P θ θ Θ}be probability space, X 1, X 2,..., X n : (Ω, A) (IR k, B k ) random variables (X, B X ) sample space γ : Θ IR k measurable function, i.e.
More informationPart II Probability and Measure
Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)
More informationLecture 2: From Linear Regression to Kalman Filter and Beyond
Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation
More informationROBUST MINIMUM DISTANCE NEYMAN-PEARSON DETECTION OF A WEAK SIGNAL IN NON-GAUSSIAN NOISE
17th European Signal Processing Conference EUSIPCO 2009) Glasgow, Scotland, August 24-28, 2009 ROBUST MIIMUM DISTACE EYMA-PEARSO DETECTIO OF A WEAK SIGAL I O-GAUSSIA OISE Georgy Shevlyakov, Kyungmin Lee,
More informationParametric Techniques
Parametric Techniques Jason J. Corso SUNY at Buffalo J. Corso (SUNY at Buffalo) Parametric Techniques 1 / 39 Introduction When covering Bayesian Decision Theory, we assumed the full probabilistic structure
More informationSTA205 Probability: Week 8 R. Wolpert
INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and
More information