ASYMPTOTIC PROPERTIES OF THE LEAST SQUARES ESTIMATORS OF MULTIDIMENSIONAL EXPONENTIAL SIGNALS


Debasis Kundu
Department of Mathematics
Indian Institute of Technology Kanpur
Kanpur, India

Abstract: In this paper we investigate the theoretical properties of the least squares estimators of multidimensional exponential signals under the assumption of additive errors. The strong consistency and asymptotic normality of the least squares estimators of the different parameters are obtained. We discuss two particular cases in detail. It is observed that several one and two dimensional results follow from our results. Finally, we discuss some open problems.

AMS Subject Classifications: 62F12, 62M10

Keywords and Phrases: Multidimensional exponential signals, almost sure convergence, Fisher information matrix

Short Running Title: Multidimensional Exponential Signals

1. INTRODUCTION:

There have been intensive studies recently on the estimation of the parameters of multidimensional exponential signals. In many applications, such as synthetic aperture radar imaging, frequency and wave number estimation in array processing, and nuclear magnetic resonance imaging, it is often desired to estimate multidimensional frequencies from multidimensional data. For recent literature on multidimensional exponential signal parameter estimation, reference may be made to the works of Bose (1985), McClellan (1982), Barbieri and Barone (1992), Cabrera and Bose (1993), Chun and Bose (1995), Dudgeon and Mersereau (1984), Hua (1992), Kay (1980), Kay and Nekovei (1990) and Lang and McClellan (1982).

The multidimensional (M-D) superimposed exponential signal model in additive white noise, in its most general form, can be written as follows:

    y(n_1, ..., n_M) = Σ_{j_1=1}^{P_1} ... Σ_{j_M=1}^{P_M} A_{j_1...j_M} e^{i(n_1 ω_{1,j_1} + ... + n_M ω_{M,j_M})} + e(n_1, ..., n_M)    (1)

for n_1 = 1, ..., N_1, ..., n_M = 1, ..., N_M. Here the y(n_1, ..., n_M)'s are the observed noise corrupted signals, i = √(-1), ω_{1,1}, ..., ω_{M,P_M} ∈ (0, 2π) are the unknown frequencies, and A_{1...1}, ..., A_{P_1...P_M} are the unknown amplitudes, which may be complex. It is assumed that the noise random variables e(n_1, ..., n_M) are independent and identically distributed (i.i.d.) with zero means and finite variances. The problem is to estimate the unknown parameters, assuming P_1, ..., P_M are known a priori.

It may be mentioned that different particular cases of this model are well studied in the signal processing and time series literature. For example, when M = 1, the model can be written as

    y(n) = Σ_{j=1}^{P} A_j e^{i ω_j n} + e(n).    (2)

This is a very well discussed model in statistical signal processing; see for example the review articles of Rao (1988) and Prasad, Chakraborty and Parthasarathy (1995). The interested

readers may consult Stoica (1993) for an extensive list of references up to that time and Kundu and Mitra (1999) for some recent works. The corresponding real version of the model (2) takes the form

    y(t) = Σ_{j=1}^{P} (A_j cos(ω_j t) + B_j sin(ω_j t)) + e(t).    (3)

The model (3) is a very well discussed model in time series analysis, particularly when the e(t)'s come from a stationary sequence; see for example Hannan (1971, 1973), Walker (1971), Rife and Boorstyn (1974), Kundu (1993a, 1997), Kundu and Mitra (1996, 1997), Quinn (1999) and the references cited there. It plays an important role in analyzing many non stationary time series data. When M = 2, the model (1) can be written as

    y(n_1, n_2) = Σ_{j_1=1}^{P_1} Σ_{j_2=1}^{P_2} A_{j_1 j_2} e^{i(n_1 ω_{1j_1} + n_2 ω_{2j_2})} + e(n_1, n_2).    (4)

One particular case of the model (4) is

    y(n_1, n_2) = Σ_{j=1}^{P} A_j e^{i(n_1 ω_{1j} + n_2 ω_{2j})} + e(n_1, n_2).    (5)

Both (4) and (5) have a wide variety of applications in statistical signal processing; see Rao et al. (1994, 1996). Different estimation procedures and the properties of these estimators have been studied quite extensively by Cabrera and Bose (1993), Chun and Bose (1995), Kay and Nekovei (1990), Rao et al. (1994, 1996) and Kundu and Mitra (1996). Recently it has been observed that the following two dimensional model,

    y(m, n) = Σ_{k=1}^{P} A_k cos(m λ_k + n μ_k) + e(m, n),    (6)

can be used quite effectively in texture classification; see for example the works of Mandrekar and Zhang (1995) and Francos et al. (1990). Different estimation procedures and their properties have been studied by Mandrekar and Zhang (1995), Kundu and Gupta (1998) and Nandi and Kundu (1999).
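In practice the frequencies of models such as (2) are usually located through the periodogram; the concluding section of this paper notes that approximate least squares estimators obtained by maximizing the periodogram are asymptotically equivalent to the LSEs. The following minimal sketch, for the one dimensional complex model (2) with P = 1, illustrates the idea. The sample size, amplitude, noise level and grid resolution are illustrative choices only, and the grid search stands in for a proper numerical optimizer.

```python
import cmath
import math
import random

def periodogram(y, w):
    """I_N(w) = |sum_t y(t) exp(-i*w*t)|^2 / N, the standard periodogram."""
    N = len(y)
    s = sum(y[t] * cmath.exp(-1j * w * (t + 1)) for t in range(N))
    return abs(s) ** 2 / N

def estimate_frequency(y, grid_size=2048):
    """Approximate LSE of a single frequency in (0, 2*pi):
    maximize the periodogram over a fine grid."""
    best_w, best_val = 0.0, -1.0
    for k in range(1, grid_size):
        w = 2 * math.pi * k / grid_size
        val = periodogram(y, w)
        if val > best_val:
            best_w, best_val = w, val
    return best_w

random.seed(7)
N, A, w0 = 200, 2.0, 1.1   # illustrative values, P = 1
# complex observations: A * exp(i*w0*n) plus complex Gaussian noise
y = [A * cmath.exp(1j * w0 * n) +
     complex(random.gauss(0, 0.5), random.gauss(0, 0.5))
     for n in range(1, N + 1)]
w_hat = estimate_frequency(y)
```

With a high signal to noise ratio the periodogram peak sits very close to the true frequency; the well known O(N^{-3/2}) rate of the frequency estimator is what Theorem 2 below formalizes in M dimensions.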

In this paper we consider the most general model (1); all the other models mentioned in (2), (3), (4), (5) and (6) are therefore particular cases of this general model. We consider the least squares estimators of the different parameters and study their asymptotic properties, mainly the consistency and the asymptotic normality of the estimators of the unknown parameters. We obtain an explicit expression for the asymptotic distribution of the least squares estimators. It is observed that the asymptotic variance covariance matrix coincides with the Fisher information matrix even in the general situation, as was observed by Rao and Zhao (1993) and Rao et al. (1994) for the one and two dimensional cases respectively, under the assumption of normality of the error random variables. We obtain the consistency of the least squares estimators of the general model. Although it seems that the asymptotic normality results should follow along the same lines as Rao et al. (1994) or Kundu and Mitra (1996), obtaining the exact asymptotic variance covariance matrix is not a trivial task. Looking at the variance covariance matrix of the 2-D case (Rao et al., 1994, or Kundu and Mitra, 1996), it becomes immediate that the exact variance covariance matrix may not take a simple form in the general case. It is observed that if we arrange the parameters in a different way, then for some special cases the asymptotic variance covariance matrix has a more convenient form. We treat some special cases separately because they have practical applications and their results may not be obtained easily from the general case.

The rest of the paper is organized as follows. In section 2, we provide two special cases, and in section 3 we provide the general form. We consider the estimation of the error variance in section 4. Finally, we draw conclusions from our results in section 5.

From now on we will use the following notations: N = N_1 ... N_M, n = (n_1, ..., n_M), N_(1) = min{N_1, ..., N_M}, N_(M) = max{N_1, ..., N_M}, and almost sure convergence will be

denoted by a.s. The constant C may indicate a different constant at different places. The parameter vector θ and the matrix D may also be different at different places, but they will be defined properly at each place. In theorems 2, 4, 6 and 8, the dispersion matrices are evaluated at the true parameter values; we have not made this explicit, for brevity, but it should be clear from the context.

2. TWO SPECIAL CASES:

In this section we consider two special cases. One is the real valued multidimensional sinusoidal model (Model 1) and the other is the multidimensional complex valued sum of exponentials model (Model 2).

Model 1:

    y(n) = Σ_{k=1}^{P} A_k cos(n_1 ω_{1k} + ... + n_M ω_{Mk} + φ_k) + e(n).    (2.1)

Here the A_k's are arbitrary real numbers, ω_{1k} ∈ (-π, π) and ω_{2k}, ..., ω_{Mk}, φ_k ∈ (0, π). Here e(n) is an M-dimensional sequence of i.i.d. random variables with zero means and finite variances, and P is assumed to be known. The problem is to estimate the unknown parameters A_k and ω_{jk} for j = 1, ..., M and k = 1, ..., P. The least squares estimators can be obtained by minimizing

    Q(θ) = Σ_{n_1=1}^{N_1} ... Σ_{n_M=1}^{N_M} ( y(n) - Σ_{k=1}^{P} A_k cos(n_1 ω_{1k} + ... + n_M ω_{Mk} + φ_k) )^2    (2.2)

with respect to the unknown parameters. We use the following notations after rearranging the parameters:

    θ_1 = (A_1, φ_1, ω_{11}, ..., ω_{M1}), ..., θ_P = (A_P, φ_P, ω_{1P}, ..., ω_{MP}), θ = (θ_1, ..., θ_P).
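As an illustration of the criterion (2.2), the following sketch evaluates Q(θ) for Model 1 with M = 2 and P = 1 on noise-free data; all numerical values are hypothetical. At the true parameter vector Q vanishes, while perturbing one frequency makes it strictly positive, which is the separation property that drives the consistency argument of Lemma 2 below.

```python
import itertools
import math

def Q(y, theta, N1, N2):
    """Least squares criterion (2.2) for Model 1 with M = 2, P = 1;
    theta = (A, phi, w1, w2)."""
    A, phi, w1, w2 = theta
    total = 0.0
    for n1, n2 in itertools.product(range(1, N1 + 1), range(1, N2 + 1)):
        f = A * math.cos(n1 * w1 + n2 * w2 + phi)
        total += (y[(n1, n2)] - f) ** 2
    return total

# noise-free data generated from a hypothetical true theta0
N1 = N2 = 20
theta0 = (1.5, 0.3, 0.8, 1.2)
A0, phi0, w10, w20 = theta0
y = {(n1, n2): A0 * math.cos(n1 * w10 + n2 * w20 + phi0)
     for n1 in range(1, N1 + 1) for n2 in range(1, N2 + 1)}

q_true = Q(y, theta0, N1, N2)                 # zero without noise
q_off = Q(y, (1.5, 0.3, 0.9, 1.2), N1, N2)    # one frequency perturbed
```

The gap between q_off and q_true is what remains bounded away from zero, after division by N = N_1 N_2, uniformly over the set S_{δ,T} defined below.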

Let us consider the following assumptions.

Assumption 1: Let A_1, ..., A_P be arbitrary real numbers, none of them identically equal to zero; ω_{1k} ∈ (-π, π) and φ_k, ω_{jk} ∈ (0, π) for k = 1, ..., P and j = 2, ..., M, and they are all distinct.

Assumption 2: Let {e(n)} be an i.i.d. sequence of M-array real valued random variables with E(e(n)) = 0 and Var(e(n)) = σ^2.

Theorem 1: Under assumptions 1 and 2, as N_(M) → ∞, the least squares estimators of the parameters of the model (2.1), obtained by minimizing (2.2), are strongly consistent estimators of the corresponding parameters.

We need the following lemmas to prove the above result.

Lemma 1: Let {e(n)} be an i.i.d. sequence of M-dimensional random variables with mean zero and finite variance; then

    lim_{N_(M)→∞} sup_{α_1,...,α_M} (1/N) | Σ_{n_1=1}^{N_1} ... Σ_{n_M=1}^{N_M} e(n) cos(n_1 α_1) ... cos(n_M α_M) | = 0 a.s.    (2.3)

Proof of Lemma 1: See Appendix A.

Consider the following set:

    S_{δ,T} = { θ = (θ_1, ..., θ_P) : ||θ|| < T and |A_k - A_k^0| > δ for some k = 1, ..., P, or ||θ|| < T and |ω_{jk} - ω_{jk}^0| > δ for some j = 1, ..., M, k = 1, ..., P }.

Here A_1^0, ..., A_P^0, ω_{11}^0, ..., ω_{MP}^0 are the true values of the parameters.
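Lemma 1 can be illustrated numerically in the one dimensional case: the supremum over α of the weighted average (1/N) |Σ_n e(n) cos(nα)| is small for large N. In the sketch below the supremum is approximated on a grid, and the sample size, grid size and seed are illustrative choices.

```python
import math
import random

def sup_weighted_average(e, grid=300):
    """Approximate sup over alpha in (0, pi) of |(1/N) sum_n e(n) cos(n*alpha)|,
    the one dimensional quantity appearing in Lemma 1."""
    N = len(e)
    best = 0.0
    for k in range(grid):
        a = math.pi * (k + 0.5) / grid
        s = sum(e[n] * math.cos((n + 1) * a) for n in range(N))
        best = max(best, abs(s) / N)
    return best

random.seed(1)
e = [random.gauss(0, 1) for _ in range(3000)]
v = sup_weighted_average(e)   # small for large N, consistent with (2.3)
```

Heuristically the supremum behaves like sqrt(2 log N / N) for i.i.d. noise, so with N = 3000 the value is well below 0.2; the proof in Appendix A makes this uniform-in-frequency decay rigorous via truncation and an exponential inequality.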

Lemma 2: If

    lim inf_{N_(M)→∞} inf_{θ ∈ S_{δ,T}} (1/N) (Q(θ) - Q(θ^0)) > 0 a.s.

for all δ > 0, then θ̂, the least squares estimator of θ^0, is a strongly consistent estimator of θ^0.

Proof of Lemma 2: The proof is simple; see Appendix C.

Proof of Theorem 1: Expanding the square term of Q(θ) and using lemma 1, it can be shown that the condition of lemma 2 is satisfied; therefore the result follows. See Appendix C.

The more important question is: what is the asymptotic distribution of the least squares estimator θ̂ of θ as N_(1) → ∞? We use D for the P(M+2) × P(M+2) diagonal matrix with diagonal elements

    D = diag{ N^{1/2}, N^{1/2}, N_1 N^{1/2}, ..., N_M N^{1/2}; ...; N^{1/2}, N^{1/2}, N_1 N^{1/2}, ..., N_M N^{1/2} }

(the pattern repeated P times). The following theorem provides the asymptotic distribution of θ̂.

Theorem 2: Under assumptions 1 and 2, as N_(1) → ∞, the limiting distribution of (θ̂ - θ)D is a P(M+2)-variate normal with mean zero and dispersion matrix 2σ^2 Σ^{-1}, where

    Σ = diag{ Σ_1, ..., Σ_P }.

Here Σ_i and Σ_i^{-1} are both (M+2) × (M+2) matrices and

    Σ_i = [ 1, 0, 0, ..., 0 ;
            0, A_i^2, A_i^2/2, ..., A_i^2/2 ;
            0, A_i^2/2, A_i^2/3, A_i^2/4, ..., A_i^2/4 ;
            ... ;
            0, A_i^2/2, A_i^2/4, ..., A_i^2/4, A_i^2/3 ],

    Σ_i^{-1} = [ 1, 0, 0, ..., 0 ;
                 0, a_i, b_i, ..., b_i ;
                 0, b_i, c_i, 0, ..., 0 ;
                 ... ;
                 0, b_i, 0, ..., 0, c_i ],

where

    a_i = (3(M-1)+4)/A_i^2,  b_i = -6/A_i^2,  c_i = 12/A_i^2.

Proof of Theorem 2: The main idea is to expand the first derivative vector of Q(θ) by a Taylor series and use proper normalizing constants to obtain the limiting normal distribution. For each j = 1, ..., P(M+2),

    0 = ∂Q(θ̂)/∂θ_j = ∂Q(θ^0)/∂θ_j + (θ̂ - θ^0) ∂^2 Q(θ̄_j)/(∂θ_j ∂θ^T),    (2.4)

where θ̄_j lies on the line segment between θ^0 and θ̂. Now define the P(M+2) vector Q'(θ^0) whose j-th element is ∂Q(θ^0)/∂θ_j, and the P(M+2) × P(M+2) matrix Q''(θ̄_1, ..., θ̄_{P(M+2)}) whose j-th column is ∂^2 Q(θ̄_j)/(∂θ_j ∂θ^T). Therefore, from (2.4) we obtain

    (θ̂ - θ^0) = -Q'(θ^0) [Q''(θ̄_1, ..., θ̄_{P(M+2)})]^{-1}.

For notational simplicity write Q''(θ̄) = Q''(θ̄_1, ..., θ̄_{P(M+2)}). Therefore,

    (θ̂ - θ^0)D = -Q'(θ^0)D^{-1} [D^{-1} Q''(θ̄) D^{-1}]^{-1}.

It can be shown (see Appendix C) that Q'(θ^0)D^{-1} converges in distribution to a P(M+2)-variate normal distribution with mean zero and dispersion matrix 2σ^2 Σ, using the central limit theorem and results of the following type:

    lim_{n→∞} (1/n) Σ_{t=1}^{n} sin^2(βt) = 1/2  and  lim_{n→∞} (1/n^2) Σ_{t=1}^{n} t sin^2(βt) = 1/4  for β ≠ 0

(see Mangulis, 1965). Similarly it can be shown that D^{-1} Q''(θ̄) D^{-1} converges almost surely to Σ. Therefore the result follows.
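The form of Σ_i^{-1} in Theorem 2 can be sanity-checked numerically: inverting the (φ_i, ω_{1i}, ..., ω_{Mi}) block of Σ_i, with φ-φ entry A_i^2, φ-ω entries A_i^2/2, ω-diagonal entries A_i^2/3 and ω off-diagonal entries A_i^2/4, should yield a_i = (3(M-1)+4)/A_i^2, b_i = -6/A_i^2, c_i = 12/A_i^2 and zero covariance terms between distinct frequencies. A small sketch with illustrative values M = 3 and A_i = 1, in pure Python:

```python
def mat_inverse(B):
    """Gauss-Jordan inverse of a small square matrix given as a list of lists."""
    n = len(B)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(B)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivot
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

M_dim, A2 = 3, 1.0   # M = 3 dimensions, A_i^2 = 1 (illustrative)
n = M_dim + 1        # (phi, omega_1, ..., omega_M) block
S = [[0.0] * n for _ in range(n)]
S[0][0] = A2
for j in range(1, n):
    S[0][j] = S[j][0] = A2 / 2
    for k in range(1, n):
        S[j][k] = A2 / 3 if j == k else A2 / 4
Sinv = mat_inverse(S)
a = Sinv[0][0]    # expected (3*(M-1)+4)/A_i^2 = 10
b = Sinv[0][1]    # expected -6/A_i^2
c = Sinv[1][1]    # expected 12/A_i^2
off = Sinv[1][2]  # expected 0: distinct frequencies asymptotically uncorrelated
```

For M = 1 this reproduces the classical single-sinusoid results: variance 2σ^2·4/(A^2 N) for the phase and 2σ^2·12/(A^2 N^3), i.e. 24σ^2/(A^2 N^3), for the frequency.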

Comments: Because of the rearrangement of the parameters, the M-dimensional result becomes more convenient than the two dimensional result. Also, when M = 2, the result matches that of Kundu and Gupta (1998). It is also interesting to note that the vectors θ̂_i and θ̂_j are asymptotically independent. The asymptotic variances of ω̂_{1i}, ..., ω̂_{Mi} are all the same and they are inversely proportional to A_i^2. The asymptotic covariance between ω̂_{ji} and ω̂_{ki} is zero, and the variances of both ω̂_{ji} and ω̂_{ki} are 12/A_i^2 (up to the common factor 2σ^2) for every M.

Now we describe another particular model.

Model 2:

    y(n) = Σ_{k=1}^{P} A_k e^{i(n_1 ω_{1k} + ... + n_M ω_{Mk})} + e(n).    (2.5)

This model has special importance in statistical signal processing. For M = 2, it coincides with the model (5). Here the A_k's are arbitrary complex numbers and the ω_{jk} ∈ (-π, π) are all distinct. The error e(n) is an M-dimensional complex valued random variable with mean zero and finite variance. The least squares estimators are obtained by minimizing

    Q(θ) = Σ_{n_1=1}^{N_1} ... Σ_{n_M=1}^{N_M} | y(n) - Σ_{k=1}^{P} A_k e^{i(n_1 ω_{1k} + ... + n_M ω_{Mk})} |^2.

Analogously to Assumptions 1 and 2, we use Assumptions 3 and 4 for the complex case.

Assumption 3: Let A_1, ..., A_P be arbitrary complex numbers, none of them identically equal to zero; ω_{jk} ∈ (-π, π) for j = 1, ..., M, k = 1, ..., P, and they are all distinct.

Assumption 4: Let {e(n_1, ..., n_M)} be an i.i.d. sequence of complex valued random variables with E(e(n)) = 0 and Var(Re{e(n)}) = Var(Im{e(n)}) = σ^2/2, where Re{e(n)} and Im{e(n)} are independently distributed. We now state the following

consistency result.

Theorem 3: Under assumptions 3 and 4, as N_(M) → ∞, the least squares estimators of the parameters of the model (2.5) are strongly consistent estimators of the unknown parameters.

Proof of Theorem 3: The proof can be obtained along the same lines as that of theorem 1 (see also the proof of theorem 1 of Kundu and Mitra, 1996) and is therefore omitted.

To obtain the asymptotic distribution of the least squares estimators, we rearrange the parameters and use the following notations:

    θ_1 = (A_{1R}, A_{1I}, ω_{11}, ..., ω_{M1}), ..., θ_P = (A_{PR}, A_{PI}, ω_{1P}, ..., ω_{MP}), θ = (θ_1, ..., θ_P).

A_{kR} and A_{kI} denote the real and imaginary parts of A_k for k = 1, ..., P. We use θ̂ for the least squares estimator of θ. Now we use D for the P(M+2) × P(M+2) diagonal matrix with diagonal elements

    D = diag{ N^{1/2}, N^{1/2}, N_1 N^{1/2}, ..., N_M N^{1/2}; ...; N^{1/2}, N^{1/2}, N_1 N^{1/2}, ..., N_M N^{1/2} }.

As in theorem 2, we have the following result for the asymptotic distribution of the least squares estimators.

Theorem 4: Under assumptions 3 and 4, as N_(1) → ∞, the limiting distribution of (θ̂ - θ)D is a P(M+2)-variate normal distribution with mean zero and dispersion matrix σ^2 Σ^{-1}, where

    Σ = diag{ Σ_1, ..., Σ_P }.

Here Σ_j and Σ_j^{-1} are (M+2) × (M+2) matrices, as follows:

    Σ_j = [ 2, 0, -A_{jI}, ..., -A_{jI} ;
            0, 2, A_{jR}, ..., A_{jR} ;
            -A_{jI}, A_{jR}, 2|A_j|^2/3, |A_j|^2/2, ..., |A_j|^2/2 ;
            ... ;
            -A_{jI}, A_{jR}, |A_j|^2/2, ..., |A_j|^2/2, 2|A_j|^2/3 ]

and

    Σ_j^{-1} = [ a, b, d, ..., d ;
                 b, c, e, ..., e ;
                 d, e, f, 0, ..., 0 ;
                 ... ;
                 d, e, 0, ..., 0, f ].

Here

    a = 1/2 + 3M A_{jI}^2/(2|A_j|^2),  b = -3M A_{jI} A_{jR}/(2|A_j|^2),  c = 1/2 + 3M A_{jR}^2/(2|A_j|^2),
    d = 3A_{jI}/|A_j|^2,  e = -3A_{jR}/|A_j|^2,  f = 6/|A_j|^2.

Proof of Theorem 4: The proof follows quite similarly to Kundu and Mitra (1996) and is therefore omitted. The important point is to rearrange the parameters in the specified manner; then the asymptotic dispersion matrix can be expressed in a convenient form.

Comments: Note that the results of Rao and Zhao (1993) and Kundu and Mitra (1999) follow from theorems 3 and 4. It is interesting to see that all the frequency estimators are asymptotically independent and that the asymptotic variances of the frequency estimators are inversely proportional to the squared moduli of the corresponding amplitudes. The asymptotic variances of the frequency estimators are independent of M, whereas the asymptotic variances of the amplitude estimators are directly proportional to M; therefore, as M increases, the asymptotic variances of the amplitude estimators increase. Interestingly, if the real part or the imaginary part of an amplitude is zero, then the asymptotic variance of the corresponding imaginary part or real part estimator is independent of M.

3. M-DIMENSIONAL CASE:

In this section we present the asymptotic distribution of the least squares estimators of the general M-dimensional model (1), under the following assumptions.

Assumption 5: {ω_{1,1}, ..., ω_{1,P_1}, ..., ω_{M,1}, ..., ω_{M,P_M}} are all distinct and they lie in (0, 2π).

Assumption 6: Define

    |A_{j_1}|^2 = Σ_{j_2=1}^{P_2} ... Σ_{j_M=1}^{P_M} |A_{j_1 j_2 ... j_M}|^2,

and similarly |A_{j_2}|^2, ..., |A_{j_M}|^2. Assume |A_{j_1}|^2 > 0, ..., |A_{j_M}|^2 > 0 for j_1 = 1, ..., P_1, ..., j_M = 1, ..., P_M.

We use the following notations:

    ω_1 = (ω_{1,1}, ..., ω_{1,P_1}), ..., ω_M = (ω_{M,1}, ..., ω_{M,P_M}),
    A = (A_{1...1}, ..., A_{P_1...P_M}), θ = (ω_1, ..., ω_M, Re(A), Im(A)).

Note that the LSEs of θ for the model (1) can be obtained by minimizing

    Q(θ) = Σ_{n_1=1}^{N_1} ... Σ_{n_M=1}^{N_M} | y(n) - Σ_{j_1=1}^{P_1} ... Σ_{j_M=1}^{P_M} A_{j_1...j_M} e^{i(n_1 ω_{1,j_1} + ... + n_M ω_{M,j_M})} |^2.    (3.1)

We define the matrices C^T_{j,M+1} = C_{M+1,j} and C^T_{j,M+2} = C_{M+2,j} of order P_1 ... P_M × P_j as follows. Label the P_1 ... P_M rows of C^T_{j,M+1} by the multi-indices (1, ..., 1, 1), (1, ..., 1, 2), ..., (P_1, ..., P_M) in lexicographic order, and the P_j columns by 1, ..., P_j. Then the k-th (k = 1, ..., P_j) column of C^T_{j,M+1} has non zero entries exactly at the rows whose j-th index equals k, namely (1, ..., 1, k, 1, ..., 1), (1, ..., 1, k, 1, ..., 2), ..., (P_1, ..., P_{j-1}, k, P_{j+1}, ..., P_M), and these entries are A_{1,...,1,k,1,...,1}, A_{1,...,1,k,1,...,2}, ..., A_{P_1,...,P_{j-1},k,P_{j+1},...,P_M} respectively. For better understanding we provide C^T_{1,M+1}, C^T_{2,M+1} and C^T_{M,M+1} in Appendix B.

The matrices Σ and Σ^{-1} are of order (P_1 + ... + P_M + 2P_1...P_M) × (P_1 + ... + P_M + 2P_1...P_M) and are defined as follows:

    Σ = ((Σ^{ij})),  Σ^{-1} = ((Σ̄^{ij})),  i, j = 1, ..., M+2,

where Σ^{ii} and Σ̄^{ii} are both P_i × P_i matrices for i = 1, ..., M and P_1...P_M × P_1...P_M matrices for i = M+1, M+2. The elements of Σ^{ij} and Σ̄^{ij} are defined as follows:

    Σ^{11} = (2/3) diag{|A_1|^2, ..., |A_{P_1}|^2}, ..., Σ^{MM} = (2/3) diag{|A_1|^2, ..., |A_{P_M}|^2},

    Σ^{12} = ((σ^{12}_{ij})) = (Σ^{21})^T,  σ^{12}_{ij} = (1/2) Σ_{j_3=1}^{P_3} ... Σ_{j_M=1}^{P_M} |A_{i j j_3 ... j_M}|^2, ...,

    Σ^{M-1,M} = ((σ^{M-1,M}_{ij})) = (Σ^{M,M-1})^T,  σ^{M-1,M}_{ij} = (1/2) Σ_{j_1=1}^{P_1} ... Σ_{j_{M-2}=1}^{P_{M-2}} |A_{j_1 ... j_{M-2} i j}|^2,

    (Σ^{j,M+1})^T = Σ^{M+1,j} = Im(C^T_{j,M+1}),  (Σ^{j,M+2})^T = Σ^{M+2,j} = Re(C^T_{j,M+2}),

    Σ^{M+1,M+1} = Σ^{M+2,M+2} = 2 I_{P_1...P_M},  Σ^{M+1,M+2} = Σ^{M+2,M+1} = 0;

    Σ̄^{11} = 6 [diag{|A_1|^2, ..., |A_{P_1}|^2}]^{-1}, ..., Σ̄^{MM} = 6 [diag{|A_1|^2, ..., |A_{P_M}|^2}]^{-1},

    Σ̄^{i,M+1} = (Σ̄^{M+1,i})^T = 2 Im((Σ^{ii})^{-1} C_{i,M+1})^T,  Σ̄^{i,M+2} = (Σ̄^{M+2,i})^T = 2 Re((Σ^{ii})^{-1} C_{i,M+2})^T,  for i = 1, ..., M,

    Σ̄^{M+1,M+1} = 4 [ Im(C_{M+1,1})(Σ^{11})^{-1} Im(C_{1,M+1}) + ... + Im(C_{M+1,M})(Σ^{MM})^{-1} Im(C_{M,M+1}) ]

    + (1/2) I,

    Σ̄^{M+2,M+2} = 4 [ Re(C_{M+2,1})(Σ^{11})^{-1} Re(C_{1,M+2}) + ... + Re(C_{M+2,M})(Σ^{MM})^{-1} Re(C_{M,M+2}) ] + (1/2) I,

    Σ̄^{M+1,M+2} = 4 [ Im(C_{M+1,1})(Σ^{11})^{-1} Re(C_{1,M+2}) + ... + Im(C_{M+1,M})(Σ^{MM})^{-1} Re(C_{M,M+2}) ] = (Σ̄^{M+2,M+1})^T,

and the remaining Σ̄^{ij} matrices are zero matrices. Now we can state the general result as follows.

Theorem 5: Under assumptions 4, 5 and 6, as N_(M) → ∞, the least squares estimator of θ for the model (1) is strongly consistent.

Proof of Theorem 5: The detailed proof for M = 2, as N_(1) → ∞, is provided in Kundu and Mitra (1996, Theorem 1). Note that in Kundu and Mitra (1996) it is assumed that the fourth order moments are finite and N_(1) → ∞, whereas here Lemma 1 is proved using only the finiteness of the second order moments and N_(M) → ∞. For general M the idea of the proof is exactly the same, and it follows using Lemma 1 and Lemma 2.

Denote the diagonal matrix D by

    D = diag{ N_1 N^{1/2}, ..., N_1 N^{1/2} (P_1 times), ..., N_M N^{1/2}, ..., N_M N^{1/2} (P_M times), N^{1/2}, ..., N^{1/2} (2P_1...P_M times) }

with N = N_1 ... N_M. Then we have the following result.

Theorem 6: Under assumptions 4, 5 and 6, as N_(1) → ∞, the distribution of (θ̂ - θ)D is asymptotically a (P_1 + ... + P_M + 2P_1...P_M)-variate normal distribution with mean zero and dispersion matrix σ^2 Σ^{-1}, where Σ^{-1} is as defined earlier in this section.

Proof of Theorem 6: The main idea of the proof is quite simple. It involves the routine

computation of the first and second derivatives of Q(θ), where Q(θ) is as in (3.1). One then expands the first derivative of Q(θ) in a Taylor series and uses the proper normalizing constants to obtain the limiting normal distribution. The crucial point is to obtain the matrix Σ and then the matrix Σ^{-1}.

Comments: Note that the results of Bai et al. (1987), Rao et al. (1994), Rao and Zhao (1993) and Kundu and Mitra (1996, 1999) follow from theorems 5 and 6. Theorems 5 and 6 generalize all the existing one and two dimensional results to their most general form.

4. CONSISTENCY AND ASYMPTOTIC NORMALITY OF σ̂^2:

In this section we state the consistency and asymptotic normality results for σ̂^2, an estimator of σ^2, obtained as follows:

    σ̂^2 = (1/N) Q(θ̂).    (4.1)

Here Q(θ) is the same as defined in (3.1). We have the following results.

Theorem 7: If θ̂ is the LSE of θ^0 for the model (1) and the error random variables e(n) satisfy assumption 4, then, as N_(M) → ∞, σ̂^2 is a strongly consistent estimator of σ^2.

Proof of Theorem 7: Using lemma 1, the proof can be obtained similarly to the one dimensional case (see Kundu and Mitra, 1999), so it is omitted.

Theorem 8: Suppose σ̂^2 is the estimator of σ^2 defined in (4.1) and the error random variables e(n) satisfy assumption 4. Furthermore, suppose E[(Re{e(n)})^4] < ∞ and E[(Im{e(n)})^4] < ∞;

then √N (σ̂^2 - σ^2) → N(0, σ_1) as N_(1) → ∞, where

    σ_1 = E[(Re{e(n)})^4] + E[(Im{e(n)})^4] - σ^4/2.

Proof of Theorem 8: This proof is also quite similar to the proof for the one dimensional case and is therefore not provided here. The detailed proof can be obtained on request from the author.

5. CONCLUSIONS:

In this paper we discuss the asymptotic properties of the least squares estimators of the multidimensional exponential model. It is observed that several particular cases, mainly one and two dimensional results, can be obtained from our results. We consider two special cases of the general multidimensional model and obtain the asymptotic distributions. It may be mentioned that, other than the least squares estimators, it is possible to consider the approximate least squares estimators as defined by Hannan (1971) or Walker (1971). The approximate least squares estimators of the frequencies can be obtained by maximizing the periodogram function. It can be proved, along the same lines as in the one dimensional case, that the least squares estimators and the approximate least squares estimators are asymptotically equivalent. Therefore the asymptotic distribution of the ALSEs will be the same as that of the LSEs in all the cases considered.

Note that in this paper we assume that the errors are independent and identically distributed. Suppose instead that we have the following error structure, X(n) in place of e(n) in (1):

    X(n) = Σ_{i_1=1}^{Q_1} ... Σ_{i_M=1}^{Q_M} a_{i_1...i_M} e(n_1 - i_1, ..., n_M - i_M).    (5.1)

Here the a_{i_1...i_M} are unknown constants and Q_1, ..., Q_M are known integers. The error structure (5.1) indicates that the errors are of the moving average type in M dimensions. The results

can be extended even when the errors are of the type (5.1); see for example Nandi and Kundu (1999) for the two dimensional result in this context.

Now we would like to mention some open problems. First of all, the estimation of P_1, ..., P_M is a very important but difficult problem. Even in the case of two dimensions, the problem is not yet resolved satisfactorily (see Rao et al., 1996). Perhaps some model selection criteria like AIC or BIC, or their modifications, can be used to resolve this problem. Numerically obtaining the LSEs in the one or two dimensional problems is well known to be very difficult. It will be important to find good estimation schemes for obtaining the LSEs of the unknown parameters of the higher dimensional model. Another important problem is to study the properties of the LSEs when the errors come from a stationary process, not necessarily of the finite order moving average type. If the errors come from a stationary random field, it will be important to derive the properties of the LSEs. More work is needed in these directions.

Acknowledgments: Most of the work was done when the author was visiting the Pennsylvania State University as a visiting Associate Professor. The author would like to thank Professor C.R. Rao for his valuable comments.

REFERENCES

Bai, Z.D., Chen, X.R., Krishnaiah, P.R., Zhao, L.C. and Wu, L.C. (1987), Strong consistency of maximum likelihood parameter estimation of superimposed exponential signals in noise, Technical Report 87-7, Center for Multivariate Analysis, University of Pittsburgh.

Barbieri, M.M. and Barone, P. (1992), A two dimensional Prony's method for spectral estimation, IEEE Trans. Signal Processing, vol. 40.

Bose, N.K. (1985), Multidimensional Systems: Progress, Directions and Open Problems.

Cabrera, S.D. and Bose, N.K. (1993), Prony methods for two dimensional complex exponential signals modeling, in Applied Control (Ed. Tzafestas, S.G.), Chap. 5, Decker, New York.

Chun, J. and Bose, N.K. (1995), Parameter estimation via signal selectivity of signal subspaces (PESS) and its applications to 2-D wavenumber estimation, Digital Signal Processing, vol. 5.

Dudgeon, D.E. and Mersereau, R.M. (1984), Multidimensional Digital Signal Processing, Prentice Hall, Englewood Cliffs, NJ.

Francos, J.M., Meiri, A.Z. and Porat, B. (1990), On a Wold type decomposition of 2-D discrete random fields, Proc. International Conference on Acoustics, Speech and Signal Processing, Albuquerque.

Hannan, E.J. (1971), Nonlinear time series regression, Journal of Applied Probability, vol. 8.

Hannan, E.J. (1973), The estimation of frequencies, Journal of Applied Probability, vol. 10.

Hua, Y. (1992), Estimating two dimensional frequencies by matrix enhancement and matrix pencil, IEEE Trans. Signal Processing, vol. 40, no. 9.

Jennrich, R.I. (1969), Asymptotic properties of the nonlinear least squares estimators, Annals of Mathematical Statistics, vol. 40.

Kay, S. (1980), Modern Spectral Estimation: Theory and Applications, Prentice Hall, Englewood Cliffs, NJ.

Kay, S. and Nekovei, R. (1990), An efficient two dimensional frequency estimator, IEEE Trans. Acoustics, Speech and Signal Processing, vol. 38, no. 10.

Kundu, D. (1993a), Asymptotic theory of the least squares estimator of a particular nonlinear regression model, Statistics and Probability Letters, vol. 18, 13-17.

Kundu, D. (1993b), Estimating the parameters of undamped exponential signals, Technometrics, vol. 35.

Kundu, D. (1997), Asymptotic properties of the least squares estimators of sinusoidal signals, Statistics, vol. 30.

Kundu, D. and Mitra, A. (1996), Asymptotic properties of the least squares estimates of 2-D exponential signals, Multidimensional Systems and Signal Processing, vol. 7, no. 2.

Kundu, D. and Mitra, A. (1997), Consistent method of estimating sinusoidal frequencies: a non iterative approach, Journal of Statistical Computation and Simulation, vol. 58, no. 4.

Kundu, D. and Mitra, A. (1999), On asymptotic behavior of the least squares estimators and the confidence intervals of the superimposed exponential signals, Signal Processing, vol.

72.

Kundu, D. and Gupta, R.D. (1998), Asymptotic properties of the least squares estimators of a two dimensional model, Metrika, vol. 48.

Lang, S.W. and McClellan, J.H. (1982), The extension of Pisarenko's method to multiple dimensions, Proc. ICASSP-82, Paris, France.

Mandrekar, V. and Zhang, H. (1995), Determination of the discrete spectrum of random fields with application to texture classification, Technical Report, Michigan State University, USA.

Mangulis, V. (1965), Handbook of Series for Scientists and Engineers, Academic Press, New York and London.

McClellan, J. (1982), Multidimensional spectral estimation, Proc. IEEE, vol. 70, no. 9.

Nandi, S. and Kundu, D. (1999), Least squares estimators in a stationary random field, Journal of the Indian Institute of Science, vol. 79, Mar-Apr.

Prasad, S., Chakraborty, M. and Parthasarathy, H. (1995), The role of statistics in signal processing: a brief review and some emerging trends, Indian Journal of Pure and Applied Mathematics, vol. 26, no. 6.

Quinn, B.G. (1999), A fast efficient technique for the estimation of frequency: interpretation and generalization, Biometrika, vol. 86, no. 1.

Rao, C.R. (1988), Some recent results in signal processing, in Statistical Decision Theory

and Related Topics IV (Gupta, S.S. and Berger, J.O., Eds.), vol. 2, Springer, New York.

Rao, C.R. and Zhao, L.C. (1993), Asymptotic behavior of maximum likelihood estimates of superimposed exponential signals, IEEE Trans. Signal Processing, vol. 41.

Rao, C.R., Zhao, L.C. and Zhou, B. (1994), Maximum likelihood estimation of 2-D superimposed exponential signals, IEEE Trans. Signal Processing, vol. 42, no. 7.

Rao, C.R., Zhao, L.C. and Zhou, B. (1996), A novel algorithm for 2-dimensional frequency estimation, Technical Report, Center for Multivariate Analysis, The Pennsylvania State University.

Stoica, P. (1993), List of references on spectral line analysis, Signal Processing, vol. 31, no. 3.

Walker, A.M. (1971), On the estimation of harmonic components in a time series with stationary independent residuals, Biometrika, vol. 58, 21-36.

Wu, C.F.J. (1981), Asymptotic theory of nonlinear least squares estimation, Annals of Statistics, vol. 9.

Appendix A

First we prove the following lemma.

Lemma 0: Let {e(n)} be an i.i.d. sequence of M-dimensional random variables with mean

zero and finite variance; then

    lim_{N_(1)→∞} sup_{α_1,...,α_M} (1/N) | Σ_{n_1=1}^{N_1} ... Σ_{n_M=1}^{N_M} e(n) cos(n_1 α_1) ... cos(n_M α_M) | = 0 a.s.    (A.1)

Proof of Lemma 0: Consider the following truncated random variables:

    Z(n) = e(n) if |e(n)| ≤ (n_1 ... n_M)^{3/4},  and Z(n) = 0 otherwise.

First we show that {Z(n)} and {e(n)} are equivalent sequences. Consider

    Σ_{n_1=1}^{∞} ... Σ_{n_M=1}^{∞} P{e(n) ≠ Z(n)} = Σ_{n_1} ... Σ_{n_M} P{ |e(n)| > (n_1 ... n_M)^{3/4} }.

Now observe that there are at most 2^k k^{M-1} combinations of (n_1, ..., n_M) such that n_1 ... n_M < 2^k. Therefore, writing r = n_1 ... n_M, we have

    Σ_{n_1} ... Σ_{n_M} P{ |e(n)| > (n_1 ... n_M)^{3/4} } ≤ Σ_{k=1}^{∞} Σ_{2^{k-1} ≤ r < 2^k} P{ |e(1, ..., 1)| ≥ r^{3/4} }
        ≤ Σ_{k=1}^{∞} 2^k k^{M-1} P{ |e(1, ..., 1)| ≥ 2^{(k-1)3/4} }
        ≤ C Σ_{k=1}^{∞} 2^k k^{M-1} 2^{-(k-1)3/2} ≤ C Σ_{k=1}^{∞} k^{M-1} 2^{-k/2} < ∞.

Here C is a constant which may take different values at different places. Therefore {e(n)} and {Z(n)} are equivalent sequences, and so

    P{ e(n) ≠ Z(n) i.o. } = 0.

Here i.o. means infinitely often. Let U(n) = Z(n) - E(Z(n)). Then

    sup_{α_1,...,α_M} (1/N) | Σ_{n_1=1}^{N_1} ... Σ_{n_M=1}^{N_M} E(Z(n)) cos(n_1 α_1) ... cos(n_M α_M) | ≤ (1/N) Σ_{n_1} ... Σ_{n_M} |E(Z(n))|.

Since E(Z(n)) → 0 as N_(1) → ∞, we have (1/N) Σ_n |E(Z(n))| → 0 as N_(1) → ∞. So it is enough to prove that

    sup_{α_1,...,α_M} (1/N) | Σ_n U(n) cos(n_1 α_1) ... cos(n_M α_M) | → 0 a.s. as N_(1) → ∞.

Now for any fixed ε > 0, -π < α_1, ..., α_M < π and 0 < h ≤ 1/(2N^{3/4}), we have

    P{ (1/N) Σ_n U(n) cos(n_1 α_1) ... cos(n_M α_M) ≥ ε } ≤ 2 e^{-hNε} Π_n E e^{h U(n) cos(n_1 α_1) ... cos(n_M α_M)}.

Since h |U(n) cos(n_1 α_1) ... cos(n_M α_M)| ≤ 1, using e^x ≤ 1 + x + x^2 for |x| ≤ 1, we have

    2 e^{-hNε} Π_n E e^{h U(n) cos(n_1 α_1) ... cos(n_M α_M)} ≤ 2 e^{-hNε} (1 + h^2 σ^2)^N.

Choosing h = 1/(2N^{3/4}), it follows that for large N_1, ..., N_M,

    P{ (1/N) | Σ_n U(n) cos(n_1 α_1) ... cos(n_M α_M) | ≥ ε } ≤ C e^{-(1/2) N^{1/4} ε}.

Let K = N^{2M}; choose K points β_1 = (α_{11}, ..., α_{M1}), ..., β_K = (α_{1K}, ..., α_{MK}) in (-π, π) × ... × (-π, π) such that for each β = (β_1, ..., β_M) ∈ (-π, π) × ... × (-π, π) there exists a point β_j satisfying

    |α_{1j} - β_1| + ... + |α_{Mj} - β_M| ≤ 2π/N^2.

Note that

    (1/N) | Σ_{n_1=1}^{N_1} ... Σ_{n_M=1}^{N_M} U(n) { cos(n_1 β_1) ... cos(n_M β_M) - cos(n_1 α_{1j}) ... cos(n_M α_{Mj}) } |
        ≤ (C/N) Σ_n N^{3/4} (2π/N^2)(n_1 + ... + n_M) ≤ C N^{3/4} (N_1 + ... + N_M)/N^2 → 0

as N_(1) → ∞. Therefore, for large N_1, ..., N_M, we have

    P{ sup_{α_1,...,α_M} (1/N) | Σ_n U(n) cos(n_1 α_1) ... cos(n_M α_M) | ≥ 2ε }
        ≤ P{ max_{j ≤ N^{2M}} (1/N) | Σ_n U(n) cos(n_1 α_{1j}) ... cos(n_M α_{Mj}) | ≥ ε } ≤ C N^{2M} e^{-(1/2) N^{1/4} ε}.

Since Σ_{t=1}^{∞} t^{2M} e^{-(1/2) t^{1/4} ε} < ∞, it follows that, as N_(1) → ∞,

    sup_{α_1,...,α_M} (1/N) | Σ_{n_1=1}^{N_1} ... Σ_{n_M=1}^{N_M} U(n) cos(n_1 α_1) ... cos(n_M α_M) | → 0 a.s.

Proof of Lemma 1: Note that N_(M) → ∞ if and only if k of the N_i go to infinity and M - k of the N_i remain bounded, for some k = 1, ..., M. For k = M, lemma 1 reduces to lemma 0. We prove the result for k = 1, ..., M - 1. Consider any fixed k; without any loss of generality we can assume that N_1, ..., N_k go to infinity and N_{k+1}, ..., N_M are bounded. Therefore lemma 1 will be proved if we prove the following, for any bounded N_{k+1}, ..., N_M:

    lim_{N_1,...,N_k→∞} sup_{α_1,...,α_M} (1/N) | Σ_{n_1=1}^{N_1} ... Σ_{n_M=1}^{N_M} e(n) cos(n_1 α_1) ... cos(n_M α_M) | = 0 a.s.    (A.2)

Since

    lim_{N_1,...,N_k→∞} sup_{α_1,...,α_k} (1/(N_1 ... N_k)) | Σ_{n_1=1}^{N_1} ... Σ_{n_k=1}^{N_k} e(n) cos(n_1 α_1) ... cos(n_k α_k) | = 0 a.s.    (A.3)

(which follows from lemma 0 by putting M = k), (A.2) follows simply by taking the limit inside the bounded summation over n_{k+1}, ..., n_M and repeatedly using (A.3).

Appendix B

C^T_{1,M+1} is block diagonal; its k-th column contains the amplitudes A_{k j_2 ... j_M}, with (j_2, ..., j_M) running lexicographically, and zeros elsewhere:

    C^T_{1,M+1} = [ a_1, 0, ..., 0 ;
                    0, a_2, ..., 0 ;
                    ... ;
                    0, 0, ..., a_{P_1} ],  where a_k = (A_{k1...1}, ..., A_{kP_2...P_M})^T.

In C^T_{2,M+1}, the k-th column contains the amplitudes A_{j_1 k j_3 ... j_M} in the rows labeled (j_1, k, j_3, ..., j_M) and zeros elsewhere; the nonzero entries of each column therefore appear in P_1 separate blocks, one for each value of j_1:

    C^T_{2,M+1} = [ B_1 ; B_2 ; ... ; B_{P_1} ],

where each B_{j_1} is the P_2 ... P_M × P_2 block diagonal matrix whose k-th column holds (A_{j_1 k 1...1}, ..., A_{j_1 k P_3...P_M})^T in rows (k-1) P_3 ... P_M + 1 through k P_3 ... P_M.

C^T_{M,M+1} consists of P_1 ... P_{M-1} diagonal blocks stacked one on top of another:

    C^T_{M,M+1} = [ D_1 ; D_2 ; ... ; D_{P_1...P_{M-1}} ],

where D_r = diag{ A_{j_1...j_{M-1},1}, ..., A_{j_1...j_{M-1},P_M} } and r runs lexicographically over the multi-indices (j_1, ..., j_{M-1}).

Appendix C

Proof of Lemma 2: In this proof only, we write θ̂_{N_1...N_M} for the least squares estimator of θ^0 and Q_{N_1...N_M}(θ) for Q(θ) as defined in (2.2). Suppose θ̂_{N_1...N_M} is not consistent. Then either

Case I: for all subsequences (N_{1k}, ..., N_{Mk}) of (N_1, ..., N_M), ||θ̂_{N_{1k}...N_{Mk}}|| → ∞ with positive probability, or

Case II: there exist a δ > 0, a T < ∞ and a subsequence (N_{1k}, ..., N_{Mk}) of (N_1, ..., N_M) such that θ̂_{N_{1k}...N_{Mk}} ∈ S_{δ,T} for all k = 1, 2, ...

Now note that

    (1/(N_{1k} ... N_{Mk})) Q_{N_{1k}...N_{Mk}}(θ̂_{N_{1k}...N_{Mk}}) ≤ (1/(N_{1k} ... N_{Mk})) Q_{N_{1k}...N_{Mk}}(θ^0) a.s.,    (A.3.1)

as θ̂_{N_{1k}...N_{Mk}} is the least squares estimator of θ^0 when N_1 = N_{1k}, ..., N_M = N_{Mk}. From the strong law of large numbers, and using a similar argument to the proof of lemma 1, it easily follows that

    lim_{N_(M)→∞} (1/N) Q_{N_1...N_M}(θ^0) = σ^2 a.s.,

where σ^2 = Var(e(n)). In both cases, from the definition of Q_{N_{1k}...N_{Mk}}(θ) (see (2.2)) and because of the assumption of Lemma 2, there exists an M_0 > 0 such that for all k ≥ M_0,

    (1/(N_{1k} ... N_{Mk})) Q_{N_{1k}...N_{Mk}}(θ̂_{N_{1k}...N_{Mk}}) > (1/(N_{1k} ... N_{Mk})) Q_{N_{1k}...N_{Mk}}(θ^0)

with positive probability. This contradicts (A.3.1) and proves the lemma.

Proof of Theorem 1: Note that

29

\[
+\,\frac{1}{N_1\cdots N_M}\sum_{n_1=1}^{N_1}\cdots\sum_{n_M=1}^{N_M}\Big(\sum_{k=1}^{P}A_k^0\cos(n_1\omega^0_{1k}+\cdots+n_M\omega^0_{Mk})-\sum_{k=1}^{P}A_k\cos(n_1\omega_{1k}+\cdots+n_M\omega_{Mk})\Big)^2
\]
\[
+\,\frac{2}{N_1\cdots N_M}\sum_{n_1=1}^{N_1}\cdots\sum_{n_M=1}^{N_M}e(n)\Big(\sum_{k=1}^{P}A_k^0\cos(n_1\omega^0_{1k}+\cdots+n_M\omega^0_{Mk})-\sum_{k=1}^{P}A_k\cos(n_1\omega_{1k}+\cdots+n_M\omega_{Mk})\Big)\tag{A32}
\]

Now observe that as \(N\to\infty\,(M)\), the first term on the right hand side of (A32) converges to \(\sigma^2\) a.s. because of the strong law of large numbers, and the third term converges to zero a.s. because of Lemma 1. Therefore, due to Lemma 2, to prove the consistency of the least squares estimator it is enough to prove

\[
\liminf_{N\to\infty\,(M)}\ \inf_{\theta\in S_{\delta,T}} g(\theta,\theta^0)>0\quad\text{a.s.},\tag{A33}
\]

where

\[
g(\theta,\theta^0)=\frac{1}{N_1\cdots N_M}\sum_{n_1=1}^{N_1}\cdots\sum_{n_M=1}^{N_M}\Big(\sum_{k=1}^{P}A_k^0\cos(n_1\omega^0_{1k}+\cdots+n_M\omega^0_{Mk})-\sum_{k=1}^{P}A_k\cos(n_1\omega_{1k}+\cdots+n_M\omega_{Mk})\Big)^2.
\]

Consider the following sets:

\[
\begin{aligned}
A_{1\delta}&=\{\theta:\ |A_1-A_1^0|\ge\delta,\ |\theta|\le T\}\\
&\ \ \vdots\\
A_{P\delta}&=\{\theta:\ |A_P-A_P^0|\ge\delta,\ |\theta|\le T\}\\
\omega_{11\delta}&=\{\theta:\ |\omega_{11}-\omega_{11}^0|\ge\delta,\ |\theta|\le T\}\\
&\ \ \vdots\\
\omega_{M1\delta}&=\{\theta:\ |\omega_{M1}-\omega_{M1}^0|\ge\delta,\ |\theta|\le T\}\\
&\ \ \vdots\\
\omega_{1P\delta}&=\{\theta:\ |\omega_{1P}-\omega_{1P}^0|\ge\delta,\ |\theta|\le T\}\\
&\ \ \vdots\\
\omega_{MP\delta}&=\{\theta:\ |\omega_{MP}-\omega_{MP}^0|\ge\delta,\ |\theta|\le T\}
\end{aligned}
\]

30

(A34)

Since \(S_{\delta,T}\) is the union of all the sets defined above in (A34), to prove (A33) it is enough to prove

\[
\liminf_{N\to\infty\,(M)}\ \inf_{\theta\in V} g(\theta,\theta^0)>0\quad\text{a.s.},\tag{A35}
\]

where \(V\) is any one of the sets defined in (A34). We show (A35) for \(V=A_{1\delta}\):

\[
\liminf_{N\to\infty\,(M)}\ \inf_{\theta\in A_{1\delta}} g(\theta,\theta^0)\ge (A_1-A_1^0)^2\lim_{N\to\infty\,(M)}\frac{1}{N_1\cdots N_M}\sum_{n_1=1}^{N_1}\cdots\sum_{n_M=1}^{N_M}\cos^2(n_1\omega^0_{11}+\cdots+n_M\omega^0_{M1})
\ge \delta^2\lim_{N\to\infty\,(M)}\frac{1}{N_1\cdots N_M}\sum_{n_1=1}^{N_1}\cdots\sum_{n_M=1}^{N_M}\cos^2(n_1\omega^0_{11}+\cdots+n_M\omega^0_{M1})>0.
\]

For the other \(V\)'s it follows along the same lines. This proves Theorem 1.

The proof that \(Q'(\theta^0)D\) converges in distribution to a \(P(M+2)\)-variate normal distribution with mean zero and dispersion matrix \(2\sigma^2\Sigma\) (page 16): The different elements of \(Q'(\theta^0)\) can be written as follows:

\[
\frac{\partial Q(\theta^0)}{\partial A_k}=-2\sum_{n_1=1}^{N_1}\cdots\sum_{n_M=1}^{N_M}e(n)\cos(n_1\omega^0_{1k}+\cdots+n_M\omega^0_{Mk})
\]
\[
\frac{\partial Q(\theta^0)}{\partial \omega_{1k}}=2\sum_{n_1=1}^{N_1}\cdots\sum_{n_M=1}^{N_M}e(n)A_k^0\, n_1\sin(n_1\omega^0_{1k}+\cdots+n_M\omega^0_{Mk})
\]
\[
\vdots
\]
\[
\frac{\partial Q(\theta^0)}{\partial \omega_{Mk}}=2\sum_{n_1=1}^{N_1}\cdots\sum_{n_M=1}^{N_M}e(n)A_k^0\, n_M\sin(n_1\omega^0_{1k}+\cdots+n_M\omega^0_{Mk})
\]

All the elements of \(Q'(\theta^0)\) satisfy the Lindeberg–Feller condition (Chung, K. L., 1978, A Course in Probability Theory). Therefore, \(Q'(\theta^0)\), with proper normalization, will converge to a multivariate normal distribution. Now let us look at the asymptotic covariance matrix of \(Q'(\theta^0)\). Using the results

\[
\lim_{n\to\infty}\frac{1}{n}\sum_{t=1}^{n}\sin^2(\beta t)=\frac12,\qquad
\lim_{n\to\infty}\frac{1}{n^2}\sum_{t=1}^{n}t\sin^2(\beta t)=\frac14,
\]

31

\[
\lim_{n\to\infty}\frac{1}{n}\sum_{t=1}^{n}\cos^2(\beta t)=\frac12,\qquad
\lim_{n\to\infty}\frac{1}{n^2}\sum_{t=1}^{n}t\cos^2(\beta t)=\frac14,
\]

we have, for \(i,j=1,\dots,M\) and for \(k=1,\dots,P\),

\[
\lim_{N\to\infty\,(M)}\frac{1}{N_1\cdots N_M}\,\mathrm{cov}\Big(\frac{\partial Q(\theta^0)}{\partial A_i},\frac{\partial Q(\theta^0)}{\partial A_j}\Big)=
\begin{cases}
2\sigma^2 & \text{if } i=j\\
0 & \text{if } i\ne j
\end{cases}
\]

\[
\lim_{N\to\infty\,(M)}\frac{1}{N_i N_j\, N_1\cdots N_M}\,\mathrm{cov}\Big(\frac{\partial Q(\theta^0)}{\partial \omega_{ik}},\frac{\partial Q(\theta^0)}{\partial \omega_{jk}}\Big)=
\begin{cases}
\frac{2}{3}\sigma^2 A_k^{0\,2} & \text{if } i=j\\[2pt]
\frac{1}{2}\sigma^2 A_k^{0\,2} & \text{if } i\ne j
\end{cases}
\]
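The trigonometric limits and the normalized covariances above are easy to check numerically. The sketch below takes \(M=1\), \(P=1\), so the frequency normalization \(N_iN_j\,N_1\cdots N_M\) is \(N^3\); the values of \(\sigma\), \(A^0\), \(\omega^0\) and \(N\) are arbitrary illustrative choices. With \(e(n)\) iid \((0,\sigma^2)\), the finite-\(N\) score variances are exactly \(4\sigma^2\sum\cos^2(n\omega^0)\) and \(4\sigma^2 A^{0\,2}\sum n^2\sin^2(n\omega^0)\), which the code compares with the limits \(2\sigma^2 N\) and \(\frac{2}{3}\sigma^2 A^{0\,2}N^3\).

```python
import numpy as np

# Numerical check (M = 1, P = 1) of the limits and covariance entries above.
# sigma, A0, w0, N are arbitrary illustrative values.
sigma, A0, w0, N = 1.5, 2.0, 0.7, 100000
t = np.arange(1, N + 1)

# The four trigonometric limits: 1/2, 1/4 (sin), and 1/2, 1/4 (cos).
print(np.sum(np.sin(w0 * t) ** 2) / N, np.sum(t * np.sin(w0 * t) ** 2) / N**2)
print(np.sum(np.cos(w0 * t) ** 2) / N, np.sum(t * np.cos(w0 * t) ** 2) / N**2)

# Exact finite-N variances of the score elements for iid (0, sigma^2) errors:
#   Var(dQ/dA)   = 4 sigma^2      sum cos^2(n w0)       ~ 2 sigma^2 N
#   Var(dQ/dw)   = 4 sigma^2 A0^2 sum n^2 sin^2(n w0)   ~ (2/3) sigma^2 A0^2 N^3
varA = 4 * sigma**2 * np.sum(np.cos(w0 * t) ** 2)
varw = 4 * sigma**2 * A0**2 * np.sum(t.astype(float) ** 2 * np.sin(w0 * t) ** 2)
print(varA / N, 2 * sigma**2)                   # both approx 4.5
print(varw / N**3, (2 / 3) * sigma**2 * A0**2)  # both approx 6.0
```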


More information

A NOTE ON THE COMPLETE MOMENT CONVERGENCE FOR ARRAYS OF B-VALUED RANDOM VARIABLES

A NOTE ON THE COMPLETE MOMENT CONVERGENCE FOR ARRAYS OF B-VALUED RANDOM VARIABLES Bull. Korean Math. Soc. 52 (205), No. 3, pp. 825 836 http://dx.doi.org/0.434/bkms.205.52.3.825 A NOTE ON THE COMPLETE MOMENT CONVERGENCE FOR ARRAYS OF B-VALUED RANDOM VARIABLES Yongfeng Wu and Mingzhu

More information

ORTHOGONAL ARRAYS OF STRENGTH 3 AND SMALL RUN SIZES

ORTHOGONAL ARRAYS OF STRENGTH 3 AND SMALL RUN SIZES ORTHOGONAL ARRAYS OF STRENGTH 3 AND SMALL RUN SIZES ANDRIES E. BROUWER, ARJEH M. COHEN, MAN V.M. NGUYEN Abstract. All mixed (or asymmetric) orthogonal arrays of strength 3 with run size at most 64 are

More information

Model selection using penalty function criteria

Model selection using penalty function criteria Model selection using penalty function criteria Laimonis Kavalieris University of Otago Dunedin, New Zealand Econometrics, Time Series Analysis, and Systems Theory Wien, June 18 20 Outline Classes of models.

More information

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Front. Math. China 2017, 12(6): 1409 1426 https://doi.org/10.1007/s11464-017-0644-1 Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Xinzhen ZHANG 1, Guanglu

More information

ENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM

ENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM c 2007-2016 by Armand M. Makowski 1 ENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM 1 The basic setting Throughout, p, q and k are positive integers. The setup With

More information

Inference for High Dimensional Robust Regression

Inference for High Dimensional Robust Regression Department of Statistics UC Berkeley Stanford-Berkeley Joint Colloquium, 2015 Table of Contents 1 Background 2 Main Results 3 OLS: A Motivating Example Table of Contents 1 Background 2 Main Results 3 OLS:

More information

Research Article Frequent Oscillatory Behavior of Delay Partial Difference Equations with Positive and Negative Coefficients

Research Article Frequent Oscillatory Behavior of Delay Partial Difference Equations with Positive and Negative Coefficients Hindawi Publishing Corporation Advances in Difference Equations Volume 2010, Article ID 606149, 15 pages doi:10.1155/2010/606149 Research Article Frequent Oscillatory Behavior of Delay Partial Difference

More information

INVARIANT SUBSPACES FOR CERTAIN FINITE-RANK PERTURBATIONS OF DIAGONAL OPERATORS. Quanlei Fang and Jingbo Xia

INVARIANT SUBSPACES FOR CERTAIN FINITE-RANK PERTURBATIONS OF DIAGONAL OPERATORS. Quanlei Fang and Jingbo Xia INVARIANT SUBSPACES FOR CERTAIN FINITE-RANK PERTURBATIONS OF DIAGONAL OPERATORS Quanlei Fang and Jingbo Xia Abstract. Suppose that {e k } is an orthonormal basis for a separable, infinite-dimensional Hilbert

More information

Brief Review on Estimation Theory

Brief Review on Estimation Theory Brief Review on Estimation Theory K. Abed-Meraim ENST PARIS, Signal and Image Processing Dept. abed@tsi.enst.fr This presentation is essentially based on the course BASTA by E. Moulines Brief review on

More information

Asymptotic inference for a nonstationary double ar(1) model

Asymptotic inference for a nonstationary double ar(1) model Asymptotic inference for a nonstationary double ar() model By SHIQING LING and DONG LI Department of Mathematics, Hong Kong University of Science and Technology, Hong Kong maling@ust.hk malidong@ust.hk

More information

On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables

On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables Deli Li 1, Yongcheng Qi, and Andrew Rosalsky 3 1 Department of Mathematical Sciences, Lakehead University,

More information

A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing

A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 5, SEPTEMBER 2001 1215 A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing Da-Zheng Feng, Zheng Bao, Xian-Da Zhang

More information

A. Relationship of DSP to other Fields.

A. Relationship of DSP to other Fields. 1 I. Introduction 8/27/2015 A. Relationship of DSP to other Fields. Common topics to all these fields: transfer function and impulse response, Fourierrelated transforms, convolution theorem. 2 y(t) = h(

More information