Journal of Statistical Research 2007, Vol. 41, No. 1, Bangladesh
ESTIMATION OF AUTOREGRESSIVE COEFFICIENT IN AN ARMA(1,1) MODEL WITH VAGUE INFORMATION ON THE MA COMPONENT

M. Ould Haye
School of Mathematics and Statistics, Carleton University, 1125 Colonel By Drive, Ottawa, Canada K1S 5B6
ouldhaye@math.carleton.ca

Summary

In this paper we investigate the asymptotic properties of various estimators of the autoregressive parameter of an ARMA(1,1) model when uncertain non-sample prior information on the moving average component is available. In particular, we study the preliminary test and shrinkage estimators of the autoregressive parameter and compare their efficiency with that of the maximum likelihood estimator, designated as the unrestricted estimator. It is shown that near the prior information on the MA parameter, both the preliminary test and shrinkage estimators are superior to the MLE, while they lose their superiority as the MA parameter moves away from the prior information; the preliminary test estimator eventually regains its efficiency to some extent, whereas the shrinkage estimator attains the lower bound of its efficiency.

Keywords and phrases: Time series, ARMA processes, sub-hypotheses, uncertain non-sample prior information, restricted, preliminary test and shrinkage estimators, asymptotic relative efficiency.

1 Introduction

Classical estimators of unknown parameters are based exclusively on the sample data. The notion of non-sample information has been introduced to improve the quality of the estimators: we expect that the inclusion of additional information leads to a better estimator. Since the seminal work of Bancroft (1944) on preliminary test estimators, many papers in the area of so-called improved estimation have been published. Stein (1956, 1981) developed the shrinkage estimator for the multivariate normal population and proved that it performs better than the usual maximum likelihood estimator in terms of the squared error loss function.
Saleh (2006) explored these two classes of improved estimators in a variety of contexts in his recent book. Many other researchers have been working in this area,

© Institute of Statistical Research and Training (ISRT), University of Dhaka, Dhaka 1000, Bangladesh. This research was supported by NSERC, grant no. 8888.
notably Sclove et al. (1972), Judge and Bock (1978), and Efron and Morris (1972, 1973). This type of analysis is also useful to control over-modelling of the specific models in question. More specific reasons for and benefits of such analysis may be found in the recent book of Saleh (2006). To the best of our knowledge, these alternative estimators have not yet been applied to ARMA model parameter estimation.

We consider the ARMA(1,1) model given by

X_t − ρ X_{t−1} = ε_t − α ε_{t−1},  (1.1)

where the ε_t are i.i.d. N(0, σ²). We are interested in the estimation of the autoregressive parameter ρ when it is suspected, but one is not sure, that α = α₀. Such a situation may arise when there is prior information that α equals α₀ or is very close to α₀, and we want to reduce the number of parameters to be estimated in model (1.1) while still estimating ρ with better efficiency. More specifically, we consider four estimators, namely:

1) the maximum likelihood estimator (MLE) ρ̃_n, the unrestricted estimator;

2) the restricted estimator ρ̂_n, which generally performs better than the MLE when α is equal to α₀ (or very close to it), but may be considerably poorer than the MLE when α is away from α₀ (see Saleh (1992) in the over-modelling context);

3) the preliminary test estimator (PTE) ρ̂_n^PT, which may be useful in case of uncertainty about the prior information α = α₀. We attempt to strike a compromise between ρ̃_n and ρ̂_n via an indicator function depending on the size γ of a preliminary test on α;

4) the shrinkage estimator ρ̂_n^S (see Saleh (2006)), which is basically a smoothed version of the PTE.

We obtain explicit forms of the asymptotic distributional bias (ADB) and the asymptotic distributional MSE (ADMSE) of each of these estimators, and give a detailed comparison study of their relative efficiency. Put h(z) = 1 − ρz and g(z) = 1 − αz, and assume that |ρ| < 1 and |α| < 1.
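As a concrete illustration of model (1.1), the following sketch simulates an ARMA(1,1) path by direct recursion. The function name, the burn-in length, and the parameter values (ρ = 0.6, α = 0.3) are illustrative choices, not taken from the paper.

```python
import math
import random

def simulate_arma11(n, rho, alpha, sigma=1.0, seed=0, burn=500):
    """Simulate X_t - rho*X_{t-1} = eps_t - alpha*eps_{t-1}, eps_t iid N(0, sigma^2).

    A burn-in period is discarded so the returned sample is
    approximately stationary.
    """
    rng = random.Random(seed)
    x_prev, e_prev = 0.0, 0.0
    out = []
    for t in range(n + burn):
        e = rng.gauss(0.0, sigma)
        x = rho * x_prev + e - alpha * e_prev
        x_prev, e_prev = x, e
        if t >= burn:
            out.append(x)
    return out

x = simulate_arma11(2000, rho=0.6, alpha=0.3)
mean = sum(x) / len(x)
var = sum((v - mean) ** 2 for v in x) / len(x)
```

The sample variance should be close to the stationary variance σ²(1 + α² − 2ρα)/(1 − ρ²) ≈ 1.14 for these parameter values.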
We know (see Dzhaparidze (1986)) that the spectral density of the process X_t is given by

f(λ) = (σ²/2π) |g(z)|²/|h(z)|²,  with z = e^{iλ}.

Its covariance function r(k) satisfies the recurrence equation

r(k) − ρ r(k−1) = 0,  k = 2, 3, ...

Let θ = (ρ, α, σ²) be the unknown parameter of the ARMA(1,1) model in (1.1).
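The spectral density and the covariance recurrence above can be checked numerically: integrating f(λ)cos(kλ) over (−π, π) recovers r(k), and the computed autocovariances should satisfy r(k) = ρ r(k−1) for k ≥ 2. The function names and the parameter values are illustrative, not from the paper.

```python
import math

def spectral_density(lam, rho, alpha, sigma2=1.0):
    # f(λ) = (σ²/2π) |1 - α e^{iλ}|² / |1 - ρ e^{iλ}|²
    num = 1.0 - 2.0 * alpha * math.cos(lam) + alpha * alpha
    den = 1.0 - 2.0 * rho * math.cos(lam) + rho * rho
    return sigma2 / (2.0 * math.pi) * num / den

def autocov(k, rho, alpha, m=20000):
    # r(k) = ∫_{-π}^{π} f(λ) cos(kλ) dλ, trapezoidal rule
    # (spectrally accurate here since the integrand is smooth and periodic)
    h = 2.0 * math.pi / m
    total = 0.0
    for j in range(m + 1):
        lam = -math.pi + j * h
        w = 0.5 if j in (0, m) else 1.0
        total += w * spectral_density(lam, rho, alpha) * math.cos(k * lam)
    return total * h

rho, alpha = 0.6, 0.3
r1, r2, r3 = (autocov(k, rho, alpha) for k in (1, 2, 3))
```

For any stationary ARMA(1,1) parameters inside the unit interval, r2 ≈ ρ·r1 and r3 ≈ ρ·r2 up to quadrature error.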
The maximum likelihood estimator (MLE) (ρ̃_n, α̃_n, σ̃²_n) of θ satisfies

∫_{−π}^{π} I_n(λ) (|h(z)|²/|g(z)|⁴) [cos λ − α̃_n] dλ = 0,  (1.2)

∫_{−π}^{π} I_n(λ) (1/|g(z)|²) [cos λ − ρ̃_n] dλ = 0,  (1.3)

and

σ̃²_n = ∫_{−π}^{π} I_n(λ) |1 − ρ̃_n z|²/|1 − α̃_n z|² dλ,  (1.4)

where

I_n(λ) = (1/2πn) |Σ_{j=1}^{n} X_j e^{ijλ}|²

is the periodogram of X_1, X_2, ..., X_n. The limit of the Fisher information matrix is given by

Γ_θ = [ 1/(1−ρ²)    −1/(1−ρα)    0
        −1/(1−ρα)    1/(1−α²)    0
        0             0           1/(2σ⁴) ].

If we write θ = (θ₁, θ₂, θ₃), each entry (j, k) of Γ_θ corresponds to

(1/4π) ∫_{−π}^{π} (∂/∂θ_j) ln f(λ) (∂/∂θ_k) ln f(λ) dλ.

For example, the value −1/(1−ρα) in Γ_θ is obtained by evaluating

(1/4π) ∫_{−π}^{π} (∂/∂α) ln f(λ) (∂/∂ρ) ln f(λ) dλ,

using the change of variable z = e^{iλ}, taking γ = {z : |z| = 1} and applying the residue theorem. The covariance matrix of (ρ̃_n, α̃_n, σ̃²_n) is given by

Σ_n = (1/(n(ρ−α)²)) [ (1−ρ²)(1−ρα)²          (1−ρ²)(1−α²)(1−ρα)    0
                       (1−ρ²)(1−α²)(1−ρα)     (1−α²)(1−ρα)²          0
                       0                       0                      2σ⁴(ρ−α)² ] + o(1/n).

Also we know (see Dzhaparidze (1986)) that, as n → ∞,

√n(ρ̃_n − ρ, α̃_n − α) ⇒ N(0, Σ),  (1.5)
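The entries of Γ_θ can be verified by carrying out the integral (1/4π)∫ ∂_{θ_j} ln f · ∂_{θ_k} ln f dλ numerically instead of by residues. Since ln f(λ) = ln|g(z)|² − ln|h(z)|² + const, the partial derivatives in ρ and α have simple closed forms. The helper names and parameter values below are illustrative.

```python
import math

def dlogf_drho(lam, rho):
    # ∂/∂ρ ln f = -∂/∂ρ ln|1 - ρ e^{iλ}|² = 2(cos λ - ρ)/(1 - 2ρ cos λ + ρ²)
    return (2.0 * math.cos(lam) - 2.0 * rho) / (1.0 - 2.0 * rho * math.cos(lam) + rho * rho)

def dlogf_dalpha(lam, alpha):
    # ∂/∂α ln f = ∂/∂α ln|1 - α e^{iλ}|² = 2(α - cos λ)/(1 - 2α cos λ + α²)
    return (2.0 * alpha - 2.0 * math.cos(lam)) / (1.0 - 2.0 * alpha * math.cos(lam) + alpha * alpha)

def info_entry(fa, fb, m=20000):
    # (1/4π) ∫_{-π}^{π} fa(λ) fb(λ) dλ by the midpoint rule
    h = 2.0 * math.pi / m
    s = 0.0
    for j in range(m):
        lam = -math.pi + (j + 0.5) * h
        s += fa(lam) * fb(lam)
    return s * h / (4.0 * math.pi)

rho, alpha = 0.6, 0.3
g_rr = info_entry(lambda l: dlogf_drho(l, rho), lambda l: dlogf_drho(l, rho))
g_ra = info_entry(lambda l: dlogf_drho(l, rho), lambda l: dlogf_dalpha(l, alpha))
g_aa = info_entry(lambda l: dlogf_dalpha(l, alpha), lambda l: dlogf_dalpha(l, alpha))
```

The three values should match 1/(1−ρ²), −1/(1−ρα) and 1/(1−α²) respectively.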
where

Σ = (1/(ρ−α)²) [ (1−ρ²)(1−ρα)²          (1−ρ²)(1−α²)(1−ρα)
                  (1−ρ²)(1−α²)(1−ρα)     (1−α²)(1−ρα)² ].  (1.6)

If we approximate the vector √n(ρ̃_n − ρ, α̃_n − α) by its asymptotic normal distribution with the above covariance matrix, we get after some calculations

E(ρ̃_n − ρ | α̃_n − α) = ((1−ρ²)/(1−ρα)) (α̃_n − α)  (1.7)

and

E[Var(√n ρ̃_n | α̃_n − α)] = 1 − ρ².  (1.8)

Equality (1.7) suggests considering as the restricted estimator of ρ, when α = α₀,

ρ̂_n = ρ̃_n − ((1−ρ̃²_n)/(1−ρ̃_n α₀)) (α̃_n − α₀).  (1.9)

Now, due to the uncertainty that α = α₀, we test H₀ : α = α₀ versus H_a : α ≠ α₀ based on the test statistic

L_n = n(α̃_n − α₀)² (ρ̃_n − α₀)² / [(1−α₀²)(1−ρ̃_n α₀)²]
    = n(α̃_n − α₀)² (ρ − α₀)² / [(1−α₀²)(1−ρα₀)²] + o_P(1),  (1.10)

which under H₀, and given (1.5) and (1.6), asymptotically has a χ²₁ distribution. Next we define the preliminary test estimator (PTE) as

ρ̂_n^PT = ρ̂_n I{L_n < χ²₁(γ)} + ρ̃_n I{L_n ≥ χ²₁(γ)}  (1.11)
       = ρ̃_n − (ρ̃_n − ρ̂_n) I{L_n < χ²₁(γ)}
       = ρ̃_n − ((1−ρ̃²_n)/(1−ρ̃_n α₀)) (α̃_n − α₀) I{L_n < χ²₁(γ)}
       = ρ̃_n − ((1−ρ²)/(1−ρα₀)) (α̃_n − α₀) I{L_n < χ²₁(γ)} + o_P(n^{−1/2}).

Instead of making the extreme choice between ρ̂_n and ρ̃_n via the PTE, a nicer compromise would be a smooth choice that depends on the value of L_n. This can be reached by considering

ρ̂_n^S = ρ̃_n − (ρ̃_n − ρ̂_n) c/√L_n,  where c is some constant,
      = ρ̃_n − (c(1−α₀²)^{1/2}(1−ρ̃²_n)/(√n(ρ̃_n − α₀))) (α̃_n − α₀)/|α̃_n − α₀|  (1.12)
      = ρ̃_n − (c(1−α₀²)^{1/2}(1−ρ²)/(√n(ρ − α₀))) (α̃_n − α₀)/|α̃_n − α₀| + o_P(n^{−1/2}).
2 Bias and MSE under Contiguous Alternatives

We consider the sequence of alternatives H_n : α_n = α₀ + δ/√n, where δ ≠ 0. We give two theorems: one for the bias and the other for the MSE.

Theorem 1. We have

√n(α̃_n − α₀) ⇒ N(δ, (1−α₀²)(1−ρα₀)²/(ρ−α₀)²).  (2.1)

i) lim E[√n(ρ̃_n − ρ)] = 0;

ii) lim E[√n(ρ̂_n − ρ)] = lim E[√n(ρ̃_n − ρ)] − ((1−ρ²)/(1−ρα₀)) lim E[√n(α̃_n − α₀)]
   = −((1−ρ²)/(1−ρα₀)) δ = −C(ρ, α₀) Δ,

where we put

C(ρ, α₀) = (1−ρ²)(1−α₀²)^{1/2}/(ρ−α₀)  and  Δ = δ(ρ−α₀)/[(1−α₀²)^{1/2}(1−ρα₀)];

iii) lim E[√n(ρ̂_n^PT − ρ)] = −C(ρ, α₀) Δ H₃(χ²₁(γ), Δ²),

where H_m(x, Δ²) denotes the cdf at x of a noncentral χ² distribution with m degrees of freedom and noncentrality parameter Δ², and χ²_m(γ) is the level-γ critical value under H_m(x, 0);

iv) lim E[√n(ρ̂_n^S − ρ)] = −c ((1−α₀²)^{1/2}(1−ρ²)/(ρ−α₀)) [2Φ(Δ) − 1] = −c C(ρ, α₀) [2Φ(Δ) − 1],

where Φ is the cdf of the standard normal distribution.

Theorem 2. i) We have

lim E[n(ρ̃_n − ρ)²] = (1−ρ²)(1−ρα₀)²/(ρ−α₀)²,

and hence

AVar(√n α̃_n) = (1−α₀²)(1−ρα₀)²/(ρ−α₀)² = ((1−α₀²)/(1−ρ²)) AVar(√n ρ̃_n).  (2.2)
ii) lim E[n(ρ̂_n − ρ)²] = (1−ρ²) + C²(ρ, α₀) Δ²;

iii) lim E[n(ρ̂_n^PT − ρ)²] = AVar(√n ρ̃_n) [1 − K(ρ, α₀){H₃(χ²₁(γ), Δ²) − Δ²(2H₃(χ²₁(γ), Δ²) − H₅(χ²₁(γ), Δ²))}],

where, for a random variable Y_n, AVar(Y_n) represents the asymptotic variance, and

K(ρ, α₀) = (1−ρ²)(1−α₀²)/(1−ρα₀)²;

iv) lim E[n(ρ̂_n^S − ρ)²] = AVar(√n ρ̃_n) + K(ρ, α₀) AVar(√n ρ̃_n) (2/π)(1 − 2e^{−Δ²/2}).

Proof (Theorem 1). Of course (2.1) is straightforward from (1.5). i) follows from (1.7). ii) Obvious. iii) We use the following result (see Judge and Bock (1978) and Saleh (2006)): if Z ~ N(Δ, 1), then

E[Z f(Z²)] = Δ E[f(χ²₃(Δ²))]  (2.3)

and

E[Z² f(Z²)] = E[f(χ²₃(Δ²))] + Δ² E[f(χ²₅(Δ²))].  (2.4)

We will use these equalities with the function f(z) = I{z < χ²₁(γ)} and, referring to (2.1),

Z = (ρ−α₀) √n(α̃_n − α₀) / [(1−α₀²)^{1/2}(1−ρα₀)] = [AVar(√n α̃_n)]^{−1/2} √n(α̃_n − α₀).  (2.5)

That is, asymptotically Z ~ N(Δ, 1). We obtain

lim E[√n(ρ̂_n^PT − ρ)] = −((1−ρ²)/(1−ρα₀)) lim E[√n(α̃_n − α₀) I{L_n < χ²₁(γ)}]
 = −((1−ρ²)/(1−ρα₀)) ((1−α₀²)^{1/2}(1−ρα₀)/(ρ−α₀)) Δ H₃(χ²₁(γ), Δ²)
 = −C(ρ, α₀) Δ H₃(χ²₁(γ), Δ²).
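The Judge–Bock identity (2.3) used in this proof can be illustrated by Monte Carlo: with the indicator f(z) = I{z < c}, the left side is E[Z I{Z² < c}] for Z ~ N(Δ, 1), and the right side is Δ P(χ²₃(Δ²) < c), where a noncentral χ²₃(Δ²) draw is built as (N(Δ,1))² plus two independent N(0,1)². All names and the sample size are illustrative assumptions.

```python
import math
import random

def mc_check(delta, c, n=200000, seed=1):
    """Monte Carlo comparison of both sides of identity (2.3) with f an indicator."""
    rng = random.Random(seed)
    lhs_sum = 0.0
    rhs_hits = 0
    for _ in range(n):
        z = rng.gauss(delta, 1.0)
        if z * z < c:
            lhs_sum += z
        # noncentral chi-square with 3 df and noncentrality delta^2
        q = rng.gauss(delta, 1.0) ** 2 + rng.gauss(0.0, 1.0) ** 2 + rng.gauss(0.0, 1.0) ** 2
        if q < c:
            rhs_hits += 1
    lhs = lhs_sum / n            # estimates E[Z 1{Z² < c}]
    rhs = delta * rhs_hits / n   # estimates Δ · H₃(c, Δ²)
    return lhs, rhs

lhs, rhs = mc_check(1.0, 3.84)
```

Both estimates should agree up to Monte Carlo error.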
iv) Let U be a standard normal random variable. Then iv) follows from

E[(α̃_n − α₀)/|α̃_n − α₀|] = P[√n(α̃_n − α₀) > 0] − P[√n(α̃_n − α₀) < 0]
 = P(U > −Δ) − P(U < −Δ) = 2Φ(Δ) − 1.

Proof (Theorem 2). i) can be shown from (1.6) since α = α_n → α₀. ii) We have

lim E[n(ρ̂_n − ρ)²] = lim Var[√n(ρ̂_n − ρ)] + lim (E[√n(ρ̂_n − ρ)])².

We only need to show that lim Var[√n(ρ̂_n − ρ)] = 1 − ρ², since lim E[√n(ρ̂_n − ρ)] is given by ii) of Theorem 1:

lim Var[√n(ρ̂_n − ρ)] = lim Var[√n(ρ̃_n − ρ) − ((1−ρ̃²_n)/(1−ρ̃_n α₀)) √n(α̃_n − α₀)]
 = (1−ρ²)(1−ρα₀)²/(ρ−α₀)² − 2 ((1−ρ²)/(1−ρα₀)) ((1−ρ²)(1−α₀²)(1−ρα₀)/(ρ−α₀)²)
   + ((1−ρ²)/(1−ρα₀))² ((1−α₀²)(1−ρα₀)²/(ρ−α₀)²)
 = (1−ρ²) [(1−ρα₀)² − (1−ρ²)(1−α₀²)]/(ρ−α₀)²
 = 1 − ρ².

iii) Using (2.4), (2.3), (1.7) and some conditional expectation arguments, we get

lim E[n(ρ̂_n^PT − ρ)²]
 = lim E[(√n(ρ̃_n − ρ) − ((1−ρ²)/(1−ρα₀)) √n(α̃_n − α₀) I{L_n < χ²₁(γ)})²]
 = AVar(√n ρ̃_n) − ((1−ρ²)/(1−ρα₀))² AVar(√n α̃_n) [H₃(χ²₁(γ), Δ²) − Δ²(2H₃(χ²₁(γ), Δ²) − H₅(χ²₁(γ), Δ²))].

Replacing AVar(√n α̃_n) by its value in (2.2), we get the right-hand side of iii).

iv) We will use the fact that if Z ~ N(Δ, 1), then

E|Z| = √(2/π) e^{−Δ²/2} + Δ[2Φ(Δ) − 1].  (2.6)
From (1.12) we can write, in a similar way as in the calculations in iii) and applying (2.6) with Z as in (2.5),

lim E[n(ρ̂_n^S − ρ)²]
 = lim E[(√n(ρ̃_n − ρ) − c(1−α₀²)^{1/2} ((1−ρ̃²_n)/(ρ̃_n − α₀)) (α̃_n − α₀)/|α̃_n − α₀|)²]
 = AVar(√n ρ̃_n) + c² (1−α₀²)(1−ρ²)²/(ρ−α₀)² − 2c ((1−α₀²)(1−ρ²)²/(ρ−α₀)²) √(2/π) e^{−Δ²/2}
 = AVar(√n ρ̃_n) + ((1−α₀²)(1−ρ²)²/(ρ−α₀)²) [c² − 2c√(2/π) e^{−Δ²/2}].

The value of c that minimizes this quantity is c = √(2/π) e^{−Δ²/2}. Making c independent of Δ, we choose c = √(2/π). Plugging this value into the previous equality we get the minimal value of the MSE,

lim E[n(ρ̂_n^S − ρ)²] = AVar(√n ρ̃_n) + ((1−α₀²)(1−ρ²)²/(ρ−α₀)²) (2/π)(1 − 2e^{−Δ²/2}),

which can be written in the form of the right-hand side of iv).

3 MSE under a Fixed Alternative Hypothesis

In this section we show that, under the fixed alternative

α = α₀ + δ,  (3.1)

there is no gain in terms of MSE when we consider the PTE.

Theorem 3. Under (3.1) we have

lim E[n(ρ̂_n^PT − ρ̃_n)²] = 0.

Proof. We have

n(ρ̂_n^PT − ρ̃_n)² = ((1−ρ̃²_n)/(1−ρ̃_n α₀))² n(α̃_n − α₀)² I{L_n < χ²₁(γ)}
 = ((1−ρ²)²(1−α₀²)/(ρ−α₀)²) L_n I{L_n < χ²₁(γ)} + o_P(1),

using n(α̃_n − α₀)² = L_n (1−α₀²)(1−ρα₀)²/(ρ−α₀)² + o_P(1) from (1.10).
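The choice c = √(2/π) above can be double-checked numerically: the bracketed factor c² − 2c√(2/π)e^{−Δ²/2} is a quadratic in c, and at Δ = 0 a grid search over c should locate its minimizer near √(2/π) ≈ 0.798. The function name and the grid are illustrative.

```python
import math

def excess_risk_factor(c, delta):
    # the bracket [c² - 2c√(2/π) e^{-Δ²/2}] appearing in the shrinkage MSE
    return c * c - 2.0 * c * math.sqrt(2.0 / math.pi) * math.exp(-delta * delta / 2.0)

# grid search for the minimizing c at Δ = 0
grid = [i / 10000.0 for i in range(1, 20000)]
c_star = min(grid, key=lambda c: excess_risk_factor(c, 0.0))
```

At the minimizer the factor is negative (equal to −2/π at Δ = 0), which is exactly the efficiency gain of the shrinkage estimator near the prior information.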
Thus we need to show that

lim E[L_n I{L_n < χ²₁(γ)}] = 0.  (3.2)

Under the fixed alternative hypothesis (3.1), L_n = Z² where asymptotically Z ~ N(Δ_n, 1) with Δ²_n = nδ²(ρ−α₀)²/[(1−α₀²)(1−ρα₀)²]. Now, using (2.4) and writing Δ²_n = na and χ²₁(γ) = b, the expectation in (3.2) can be written as

P(χ²₃(na) ≤ b) + na P(χ²₅(na) ≤ b).  (3.3)

With U ~ N(0, 1), each probability in (3.3) is bounded by

P[(U + √(na))² ≤ b] = P[−√b − √(na) ≤ U ≤ √b − √(na)]
 ≤ P[U ≥ √(na) − √b]
 ~ (1/(√(2π)(√(na) − √b))) e^{−(√(na)−√b)²/2}  as n → ∞,

which tends to 0 even after multiplication by na. The last equivalence (the Mills ratio asymptotic) is well known and may be shown via integration by parts. Therefore (3.2) is established.

We mention that this is not the case for the smoothed version ρ̂_n^S or for the restricted estimator ρ̂_n. Actually we can see that

lim E[n(ρ̂_n^S − ρ̃_n)²] = c² (1−α₀²)(1−ρ²)²/(ρ−α₀)².

4 Comparative Study

4.1 Comparing Quadratic Bias Functions

We note from Theorem 1 that the usual PMLE ρ̃_n is asymptotically unbiased, and the alternative estimators ρ̂_n, ρ̂_n^PT and ρ̂_n^S have the following quadratic bias functions:

i) B(ρ̂_n) = C²(ρ, α₀) Δ²;

ii) B(ρ̂_n^PT) = C²(ρ, α₀) Δ² H₃²(χ²₁(γ), Δ²);

iii) B(ρ̂_n^S) = c² C²(ρ, α₀) [2Φ(Δ) − 1]².

We note that for Δ = 0,

B(ρ̃_n) = B(ρ̂_n) = B(ρ̂_n^PT) = B(ρ̂_n^S) = 0,
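The vanishing of E[L_n I{L_n < b}] as the noncentrality grows, which drives Theorem 3, can be seen by Monte Carlo: estimate E[Z² I{Z² < b}] for Z ~ N(Δ, 1) at increasing Δ and watch the truncated mean collapse to zero. The function name, sample size and grid of Δ values are illustrative.

```python
import random

def truncated_mean(delta, b, n=100000, seed=2):
    """Monte Carlo estimate of E[Z² 1{Z² < b}] with Z ~ N(Δ, 1)."""
    rng = random.Random(seed)
    s = 0.0
    for _ in range(n):
        z = rng.gauss(delta, 1.0)
        if z * z < b:
            s += z * z
    return s / n

# b = χ²₁(0.05) ≈ 3.84; larger Δ mimics the fixed alternative with growing n
vals = [truncated_mean(d, 3.84) for d in (0.0, 2.0, 4.0, 6.0)]
```

For moderate Δ the truncated mean can first increase (the mass of Z² piles up just below the cutoff), but for large Δ virtually no mass remains below b and the expectation tends to 0, as in (3.2).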
i.e. all the estimators are asymptotically unbiased. Also, as Δ → ∞, we get B(ρ̂_n^PT) → 0 while B(ρ̂_n^S) → (2/π) C²(ρ, α₀), if we keep the value √(2/π) for c. Hence, as Δ → ∞,

0 = B(ρ̂_n^PT) < B(ρ̂_n^S) < B(ρ̂_n),

and for a given Δ we always have B(ρ̂_n^PT) < B(ρ̂_n), but the other comparisons depend on both Δ and the level γ. Clearly we observe that, in terms of smallest bias, the PTE is better than the restricted estimator, while the shrinkage estimator can give good results if we decide to choose the constant c very small. However, the usual PMLE remains the best one since it is asymptotically unbiased. The graphical presentation in Figure 1 shows these properties.

Figure 1: Graph of the asymptotic quadratic bias of the three estimators.

4.2 Asymptotic Relative Efficiency (ARE)

In this section we compare the proposed estimators to the PMLE, which we call here the unrestricted estimator (UE). This comparison will be based on the ADMSE and the so-called asymptotic relative efficiency (ARE).
Comparing the PTE Against the UE: Denote by ARE(ρ̂_n^PT : ρ̃_n) the ARE of ρ̂_n^PT with respect to ρ̃_n, that is, the quotient of their reciprocal MSEs. So we have

ARE(ρ̂_n^PT : ρ̃_n) = MSE(√n ρ̃_n)/MSE(√n ρ̂_n^PT) = [1 − K(ρ, α₀){H₃ − Δ²(2H₃ − H₅)}]^{−1}.

In order to ease notation, we omitted the arguments (χ²₁(γ), Δ²) in the functions H₃ and H₅. From the previous equation we conclude the following. The graph of ARE(ρ̂_n^PT : ρ̃_n) as a function of Δ², for fixed γ, decreases, crossing the line ARE = 1, to a minimum at Δ² = Δ²_min(γ); then it increases towards the line ARE = 1 as Δ² → ∞. Thus, if Δ² ≤ H₃/(2H₃ − H₅) then ρ̂_n^PT is better than ρ̃_n, while if Δ² ≥ H₃/(2H₃ − H₅), ρ̃_n is better than ρ̂_n^PT. The maximum of ARE(ρ̂_n^PT : ρ̃_n), attained at Δ² = 0, is given by

max_Δ ARE(ρ̂_n^PT : ρ̃_n) = [1 − K H₃⁰]^{−1},  K = K(ρ, α₀) = (1−ρ²)(1−α₀²)/(1−ρα₀)²,

where H₃⁰ = H₃(χ²₁(γ), 0), for all γ in A, the set of all possible values of γ. The value of max_Δ ARE(ρ̂_n^PT : ρ̃_n) decreases as γ increases. In general, the curves ARE(ρ̂_n^PT : ρ̃_n) at two levels γ₁ and γ₂ intersect, and the value of Δ² at the intersection increases as γ increases; beyond that intersection both curves lie below the line ARE = 1.

In order to obtain an optimum level of significance γ for the application of the PTE, we prefix a minimum guaranteed efficiency, say E₀, and follow the following procedure:

i) If Δ² = 0, one would use ρ̂_n, because ARE(ρ̂_n^PT : ρ̃_n) ≥ 1 in this region of Δ². However, Δ² is generally unknown, so there is no way to choose a uniformly best estimator; instead we look for the value of γ in the set A_γ = {γ : min_Δ ARE(ρ̂_n^PT : ρ̃_n) ≥ E₀}.

ii) The PT estimator chosen maximizes ARE(ρ̂_n^PT : ρ̃_n) over all γ in A_γ and Δ². Thus we solve the equation

min_Δ ARE(ρ̂_n^PT : ρ̃_n)(γ, Δ²) = ARE(ρ̂_n^PT : ρ̃_n)(γ*, Δ²₀) = E₀.
The solution γ* obtained this way gives a PTE with a minimum guaranteed efficiency of E₀, which may increase up to max ARE(ρ̂_n^PT : ρ̃_n) at Δ² = 0. Table 1 gives the minimum and maximum ARE(ρ̂_n^PT : ρ̃_n) for chosen values of K over γ = 0.05(0.05)0.50. To illustrate the use of the table, let us set E₀ = 0.8 as the minimum guaranteed ARE(ρ̂_n^PT : ρ̃_n) for K = 0.5. Then we look for the entry at or nearest 0.8 in the column for K = 0.5 and read off the corresponding γ. Using a PTE with this γ, we obtain a minimum guaranteed ARE of 0.8, with the maximum possible ARE attained when Δ² is close to 0.
Table 1: Maximum and minimum ARE of the PTE for selected values of K and γ.
Comparing the Shrinkage Estimator Against the UE: We have

ARE(ρ̂_n^S : ρ̃_n) = [1 + K(ρ, α₀) (2/π)(1 − 2e^{−Δ²/2})]^{−1}.

Following a similar discussion as we did for the relative efficiency of the PTE, we can see from this equation that for Δ² ≤ ln 4, ρ̂_n^S is better than ρ̃_n, and for Δ² ≥ ln 4, ρ̃_n is better than ρ̂_n^S. Also, as Δ² → ∞,

ARE(ρ̂_n^S : ρ̃_n) → [1 + (2/π) K(ρ, α₀)]^{−1},

which is the lower bound of ARE(ρ̂_n^S : ρ̃_n). The maximum of ARE(ρ̂_n^S : ρ̃_n), attained at Δ² = 0, is [1 − (2/π) K(ρ, α₀)]^{−1}; it exceeds the maximum ARE of the PTE, [1 − K H₃(χ²₁(γ), 0)]^{−1}, whenever H₃(χ²₁(γ), 0) ≤ 2/π.

Comparing the Restricted Estimator (RE) Against the UE: From ii) of Theorem 2 we can rewrite the asymptotic MSE of √n ρ̂_n as

(1−ρ²) + ((1−ρ²)²(1−α₀²)/(ρ−α₀)²) Δ²,

and we can write the MSE of √n ρ̃_n as

(1−ρ²)(1−ρα₀)²/(ρ−α₀)² = (1−ρ²) + (1−ρ²)²(1−α₀²)/(ρ−α₀)²,

and therefore we can conclude: if Δ² ≤ 1 then RE is better than UE, and if Δ² ≥ 1 then UE is better than RE.

The graphs in Figure 2 depict the ARE properties of the estimators.
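The two break-even points derived above, Δ² = ln 4 for the shrinkage estimator and Δ² = 1 for the restricted estimator, can be verified directly from the ARE and MSE formulas. The helper names and the parameter values (ρ = 0.6, α₀ = 0.3) are illustrative.

```python
import math

def K(rho, alpha0):
    # K(ρ, α₀) = (1-ρ²)(1-α₀²)/(1-ρα₀)²
    return (1.0 - rho ** 2) * (1.0 - alpha0 ** 2) / (1.0 - rho * alpha0) ** 2

def are_shrinkage(delta2, rho, alpha0):
    # ARE(ρ̂ˢ : ρ̃) = [1 + K (2/π)(1 - 2 e^{-Δ²/2})]^{-1}
    return 1.0 / (1.0 + K(rho, alpha0) * (2.0 / math.pi) * (1.0 - 2.0 * math.exp(-delta2 / 2.0)))

def mse_re_minus_ue(delta2, rho, alpha0):
    # MSE(RE) - MSE(UE) = C²(Δ² - 1), with C² = (1-ρ²)²(1-α₀²)/(ρ-α₀)²
    c2 = (1.0 - rho ** 2) ** 2 * (1.0 - alpha0 ** 2) / (rho - alpha0) ** 2
    return c2 * (delta2 - 1.0)

rho, alpha0 = 0.6, 0.3
```

At Δ² = ln 4 the term 1 − 2e^{−Δ²/2} vanishes, so the shrinkage ARE equals 1 exactly; the RE/UE difference changes sign at Δ² = 1.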
Figure 2: Graph of the ARE of the shrinkage estimator for selected values of γ, α and K.

Acknowledgments

The author is grateful to Professor A. K. Md. E. Saleh for his invaluable help and his insightful suggestions about this paper.

References

[1] Bancroft, T.A. (1944). On biases in estimation due to the use of preliminary tests of significance. Ann. Math. Statist. 15, 190-204.

[2] Dzhaparidze, K. (1986). Parameter Estimation and Hypothesis Testing in Spectral Analysis of Stationary Time Series. Springer-Verlag.

[3] Efron, B. and Morris, C. (1972). Limiting the risk of Bayes and empirical Bayes estimators. Part II: the empirical Bayes case. J. Amer. Statist. Assoc. 67, 130-139.
[4] Efron, B. and Morris, C. (1973). Stein's estimation rule and its competitors: an empirical Bayes approach. J. Amer. Statist. Assoc. 68, 117-130.

[5] Judge, G.G. and Bock, M.E. (1978). The Statistical Implications of Pre-test and Stein-rule Estimators in Econometrics. North-Holland, New York.

[6] Saleh, A.K.Md.E. (1992). On shrinkage estimation of the parameters of an autoregressive Gaussian process. Theory of Probab. and its Appl.

[7] Saleh, A.K.Md.E. (2006). Theory of Preliminary Test and Stein-Type Estimation with Applications. Wiley-Interscience, John Wiley & Sons, New York.

[8] Sclove, S.L., Morris, C. and Rao, C.R. (1972). Non-optimality of preliminary test estimators for the mean of a multivariate normal distribution. Ann. Math. Statist. 43, 1481-1490.

[9] Stein, C. (1956). Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. Proceedings of the Third Berkeley Symposium on Math. Statist. and Probab., Vol. 1, 197-206. Univ. of California Press, Berkeley.

[10] Stein, C. (1981). Estimation of the mean of a multivariate normal distribution. Ann. Statist. 9, 1135-1151.
STATS 200: Introduction to Statistical Inference Lecture 29: Course review Course review We started in Lecture 1 with a fundamental assumption: Data is a realization of a random process. The goal throughout
More informationcovariance function, 174 probability structure of; Yule-Walker equations, 174 Moving average process, fluctuations, 5-6, 175 probability structure of
Index* The Statistical Analysis of Time Series by T. W. Anderson Copyright 1971 John Wiley & Sons, Inc. Aliasing, 387-388 Autoregressive {continued) Amplitude, 4, 94 case of first-order, 174 Associated
More informationStein-Rule Estimation under an Extended Balanced Loss Function
Shalabh & Helge Toutenburg & Christian Heumann Stein-Rule Estimation under an Extended Balanced Loss Function Technical Report Number 7, 7 Department of Statistics University of Munich http://www.stat.uni-muenchen.de
More informationA Simple Example Binary Hypothesis Testing Optimal Receiver Frontend M-ary Signal Sets Message Sequences. possible signals has been transmitted.
Introduction I We have focused on the problem of deciding which of two possible signals has been transmitted. I Binary Signal Sets I We will generalize the design of optimum (MPE) receivers to signal sets
More informationUsing copulas to model time dependence in stochastic frontier models
Using copulas to model time dependence in stochastic frontier models Christine Amsler Michigan State University Artem Prokhorov Concordia University November 2008 Peter Schmidt Michigan State University
More informationLet us first identify some classes of hypotheses. simple versus simple. H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided
Let us first identify some classes of hypotheses. simple versus simple H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided H 0 : θ θ 0 versus H 1 : θ > θ 0. (2) two-sided; null on extremes H 0 : θ θ 1 or
More informationCh. 5 Hypothesis Testing
Ch. 5 Hypothesis Testing The current framework of hypothesis testing is largely due to the work of Neyman and Pearson in the late 1920s, early 30s, complementing Fisher s work on estimation. As in estimation,
More informationMultivariate Time Series: VAR(p) Processes and Models
Multivariate Time Series: VAR(p) Processes and Models A VAR(p) model, for p > 0 is X t = φ 0 + Φ 1 X t 1 + + Φ p X t p + A t, where X t, φ 0, and X t i are k-vectors, Φ 1,..., Φ p are k k matrices, with
More informationSpatial inference. Spatial inference. Accounting for spatial correlation. Multivariate normal distributions
Spatial inference I will start with a simple model, using species diversity data Strong spatial dependence, Î = 0.79 what is the mean diversity? How precise is our estimate? Sampling discussion: The 64
More informationStatistical Inference with Monotone Incomplete Multivariate Normal Data
Statistical Inference with Monotone Incomplete Multivariate Normal Data p. 1/4 Statistical Inference with Monotone Incomplete Multivariate Normal Data This talk is based on joint work with my wonderful
More informationTIME SERIES ANALYSIS. Forecasting and Control. Wiley. Fifth Edition GWILYM M. JENKINS GEORGE E. P. BOX GREGORY C. REINSEL GRETA M.
TIME SERIES ANALYSIS Forecasting and Control Fifth Edition GEORGE E. P. BOX GWILYM M. JENKINS GREGORY C. REINSEL GRETA M. LJUNG Wiley CONTENTS PREFACE TO THE FIFTH EDITION PREFACE TO THE FOURTH EDITION
More informationLecture 4: Types of errors. Bayesian regression models. Logistic regression
Lecture 4: Types of errors. Bayesian regression models. Logistic regression A Bayesian interpretation of regularization Bayesian vs maximum likelihood fitting more generally COMP-652 and ECSE-68, Lecture
More informationInference on reliability in two-parameter exponential stress strength model
Metrika DOI 10.1007/s00184-006-0074-7 Inference on reliability in two-parameter exponential stress strength model K. Krishnamoorthy Shubhabrata Mukherjee Huizhen Guo Received: 19 January 2005 Springer-Verlag
More informationF9 F10: Autocorrelation
F9 F10: Autocorrelation Feng Li Department of Statistics, Stockholm University Introduction In the classic regression model we assume cov(u i, u j x i, x k ) = E(u i, u j ) = 0 What if we break the assumption?
More informationKALMAN-TYPE RECURSIONS FOR TIME-VARYING ARMA MODELS AND THEIR IMPLICATION FOR LEAST SQUARES PROCEDURE ANTONY G AU T I E R (LILLE)
PROBABILITY AND MATHEMATICAL STATISTICS Vol 29, Fasc 1 (29), pp 169 18 KALMAN-TYPE RECURSIONS FOR TIME-VARYING ARMA MODELS AND THEIR IMPLICATION FOR LEAST SQUARES PROCEDURE BY ANTONY G AU T I E R (LILLE)
More informationISyE 691 Data mining and analytics
ISyE 691 Data mining and analytics Regression Instructor: Prof. Kaibo Liu Department of Industrial and Systems Engineering UW-Madison Email: kliu8@wisc.edu Office: Room 3017 (Mechanical Engineering Building)
More informationLecture 5 September 19
IFT 6269: Probabilistic Graphical Models Fall 2016 Lecture 5 September 19 Lecturer: Simon Lacoste-Julien Scribe: Sébastien Lachapelle Disclaimer: These notes have only been lightly proofread. 5.1 Statistical
More informationEstimation of the Bivariate and Marginal Distributions with Censored Data
Estimation of the Bivariate and Marginal Distributions with Censored Data Michael Akritas and Ingrid Van Keilegom Penn State University and Eindhoven University of Technology May 22, 2 Abstract Two new
More informationParametric Models. Dr. Shuang LIANG. School of Software Engineering TongJi University Fall, 2012
Parametric Models Dr. Shuang LIANG School of Software Engineering TongJi University Fall, 2012 Today s Topics Maximum Likelihood Estimation Bayesian Density Estimation Today s Topics Maximum Likelihood
More informationParameter Estimation and Fitting to Data
Parameter Estimation and Fitting to Data Parameter estimation Maximum likelihood Least squares Goodness-of-fit Examples Elton S. Smith, Jefferson Lab 1 Parameter estimation Properties of estimators 3 An
More informationExtreme inference in stationary time series
Extreme inference in stationary time series Moritz Jirak FOR 1735 February 8, 2013 1 / 30 Outline 1 Outline 2 Motivation The multivariate CLT Measuring discrepancies 3 Some theory and problems The problem
More informationNon-Gaussian Maximum Entropy Processes
Non-Gaussian Maximum Entropy Processes Georgi N. Boshnakov & Bisher Iqelan First version: 3 April 2007 Research Report No. 3, 2007, Probability and Statistics Group School of Mathematics, The University
More informationTwo hours. To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER. 21 June :45 11:45
Two hours MATH20802 To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER STATISTICAL METHODS 21 June 2010 9:45 11:45 Answer any FOUR of the questions. University-approved
More informationf (1 0.5)/n Z =
Math 466/566 - Homework 4. We want to test a hypothesis involving a population proportion. The unknown population proportion is p. The null hypothesis is p = / and the alternative hypothesis is p > /.
More informationA COMPARISON OF POISSON AND BINOMIAL EMPIRICAL LIKELIHOOD Mai Zhou and Hui Fang University of Kentucky
A COMPARISON OF POISSON AND BINOMIAL EMPIRICAL LIKELIHOOD Mai Zhou and Hui Fang University of Kentucky Empirical likelihood with right censored data were studied by Thomas and Grunkmier (1975), Li (1995),
More informationStudentization and Prediction in a Multivariate Normal Setting
Studentization and Prediction in a Multivariate Normal Setting Morris L. Eaton University of Minnesota School of Statistics 33 Ford Hall 4 Church Street S.E. Minneapolis, MN 55455 USA eaton@stat.umn.edu
More informationResearch Article Least Squares Estimators for Unit Root Processes with Locally Stationary Disturbance
Advances in Decision Sciences Volume, Article ID 893497, 6 pages doi:.55//893497 Research Article Least Squares Estimators for Unit Root Processes with Locally Stationary Disturbance Junichi Hirukawa and
More informationTime series models in the Frequency domain. The power spectrum, Spectral analysis
ime series models in the Frequency domain he power spectrum, Spectral analysis Relationship between the periodogram and the autocorrelations = + = ( ) ( ˆ α ˆ ) β I Yt cos t + Yt sin t t= t= ( ( ) ) cosλ
More informationVariations. ECE 6540, Lecture 10 Maximum Likelihood Estimation
Variations ECE 6540, Lecture 10 Last Time BLUE (Best Linear Unbiased Estimator) Formulation Advantages Disadvantages 2 The BLUE A simplification Assume the estimator is a linear system For a single parameter
More informationsimple if it completely specifies the density of x
3. Hypothesis Testing Pure significance tests Data x = (x 1,..., x n ) from f(x, θ) Hypothesis H 0 : restricts f(x, θ) Are the data consistent with H 0? H 0 is called the null hypothesis simple if it completely
More informationAdjusted Empirical Likelihood for Long-memory Time Series Models
Adjusted Empirical Likelihood for Long-memory Time Series Models arxiv:1604.06170v1 [stat.me] 21 Apr 2016 Ramadha D. Piyadi Gamage, Wei Ning and Arjun K. Gupta Department of Mathematics and Statistics
More informationEMPIRICAL ENVELOPE MLE AND LR TESTS. Mai Zhou University of Kentucky
EMPIRICAL ENVELOPE MLE AND LR TESTS Mai Zhou University of Kentucky Summary We study in this paper some nonparametric inference problems where the nonparametric maximum likelihood estimator (NPMLE) are
More informationFixed Effects, Invariance, and Spatial Variation in Intergenerational Mobility
American Economic Review: Papers & Proceedings 2016, 106(5): 400 404 http://dx.doi.org/10.1257/aer.p20161082 Fixed Effects, Invariance, and Spatial Variation in Intergenerational Mobility By Gary Chamberlain*
More informationEstimation Theory Fredrik Rusek. Chapters 6-7
Estimation Theory Fredrik Rusek Chapters 6-7 All estimation problems Summary All estimation problems Summary Efficient estimator exists All estimation problems Summary MVU estimator exists Efficient estimator
More informationTime Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY
Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY PREFACE xiii 1 Difference Equations 1.1. First-Order Difference Equations 1 1.2. pth-order Difference Equations 7
More informationAdvanced Econometrics
Advanced Econometrics Marco Sunder Nov 04 2010 Marco Sunder Advanced Econometrics 1/ 25 Contents 1 2 3 Marco Sunder Advanced Econometrics 2/ 25 Music Marco Sunder Advanced Econometrics 3/ 25 Music Marco
More informationChapter 14 Stein-Rule Estimation
Chapter 14 Stein-Rule Estimation The ordinary least squares estimation of regression coefficients in linear regression model provides the estimators having minimum variance in the class of linear and unbiased
More informationUnit roots in vector time series. Scalar autoregression True model: y t 1 y t1 2 y t2 p y tp t Estimated model: y t c y t1 1 y t1 2 y t2
Unit roots in vector time series A. Vector autoregressions with unit roots Scalar autoregression True model: y t y t y t p y tp t Estimated model: y t c y t y t y t p y tp t Results: T j j is asymptotically
More informationExtremogram and ex-periodogram for heavy-tailed time series
Extremogram and ex-periodogram for heavy-tailed time series 1 Thomas Mikosch University of Copenhagen Joint work with Richard A. Davis (Columbia) and Yuwei Zhao (Ulm) 1 Zagreb, June 6, 2014 1 2 Extremal
More information1 Motivation for Instrumental Variable (IV) Regression
ECON 370: IV & 2SLS 1 Instrumental Variables Estimation and Two Stage Least Squares Econometric Methods, ECON 370 Let s get back to the thiking in terms of cross sectional (or pooled cross sectional) data
More informationUC Berkeley CUDARE Working Papers
UC Berkeley CUDARE Working Papers Title A Semi-Parametric Basis for Combining Estimation Problems Under Quadratic Loss Permalink https://escholarship.org/uc/item/8z5jw3 Authors Judge, George G. Mittelhammer,
More informationUNIVERSITY OF. AT ml^ '^ LIBRARY URBANA-CHAMPAIGN BOOKSTACKS
UNIVERSITY OF AT ml^ '^ LIBRARY URBANA-CHAMPAIGN BOOKSTACKS Digitized by the Internet Archive in 2011 with funding from University of Illinois Urbana-Champaign http://www.archive.org/details/onpostdatamodele68judg
More informationECE531 Lecture 10b: Maximum Likelihood Estimation
ECE531 Lecture 10b: Maximum Likelihood Estimation D. Richard Brown III Worcester Polytechnic Institute 05-Apr-2011 Worcester Polytechnic Institute D. Richard Brown III 05-Apr-2011 1 / 23 Introduction So
More informationShrinkage Estimation in High Dimensions
Shrinkage Estimation in High Dimensions Pavan Srinath and Ramji Venkataramanan University of Cambridge ITA 206 / 20 The Estimation Problem θ R n is a vector of parameters, to be estimated from an observation
More information[y i α βx i ] 2 (2) Q = i=1
Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation
More informationCentral Limit Theorem ( 5.3)
Central Limit Theorem ( 5.3) Let X 1, X 2,... be a sequence of independent random variables, each having n mean µ and variance σ 2. Then the distribution of the partial sum S n = X i i=1 becomes approximately
More information