LECTURES IN ECONOMETRIC THEORY. John S. Chipman. University of Minnesota
Chapter 5. Minimax Estimation

5.1. Stein's theorem and the regression model. It was pointed out in Chapter 2, section 2.2, that if no a priori knowledge is specified concerning β, the criterion of minimization of a matrix-valued mean-square error is ill posed. Nevertheless, there are some cases in which the choice of a particular scalar-valued definition of mean-square error makes it possible to obtain estimators with lower mean-square error than the Gauss-Markoff estimator, for all β. In such cases the Gauss-Markoff estimator is inadmissible in the sense of Wald (1950). A situation of this kind was first discovered by Stein (1956). Let us now consider Stein's formulation.

In the model

(5.1.1)  y = Xβ + ε,  Eε = 0,  Eεε′ = σ²Ω,

let the risk of any estimator β̌ of β be defined as the scalar-valued mean-square error

(5.1.2)  Risk β̌ = E(β̌ − β)′X′Ω⁻¹X(β̌ − β).

We also define the modified F-statistic

(5.1.3)  f = (β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄) / (ε̃′Ω⁻¹ε̃),

which corresponds to the case Ψ = I_k and α = β̄ and differs from the F-statistic (3.48) by the factor (n − k)/r, where in this case r = k. In Stein's formulation it is essential that normality be assumed, i.e., y ∼ N(Xβ, σ²Ω), where Ω is assumed to be positive definite. Stein considered only the special case Ω = I_n and X′ = [I_k, 0_{k×(n−k)}]. The following adaptation of Stein's result to the regression model is based largely on that of Sclove (1968).

Theorem 5.1.1. Let the estimator β̂_ν be defined by

(5.1.4)  β̂_ν = β̃ − (ν/f)(β̃ − β̄) = (1 − ν/f)β̃ + (ν/f)β̄,
where β̃ = X⁺y = (X′Ω⁻¹X)⁻¹X′Ω⁻¹y, β̄ is an initial guess at β, and

(5.1.6)  ε̃ = y − Xβ̃ = [I − XX⁺]y = [I − X(X′Ω⁻¹X)⁻¹X′Ω⁻¹]y.

Let it be assumed that y ∼ N(Xβ, σ²Ω), k ≥ 3. Let Risk β̂_ν be defined by (5.1.2), and f by (5.1.3). Then Risk β̂_ν < Risk β̃ for all ν in the interval

(5.1.7)  0 < ν < 2(k − 2)/(n − k + 2),

and Risk β̂_ν is minimized when

(5.1.8)  ν = (k − 2)/(n − k + 2).

Proof. The deviations of β̂_ν and β̃ from β are respectively

β̂_ν − β = β̃ − β − (ν/f)(β̃ − β̄)  and  β̃ − β = X⁺ε = (X′Ω⁻¹X)⁻¹X′Ω⁻¹ε.

Hence, since β̃ is an unbiased estimator, and taking account of the symmetry of Ω⁻¹XX⁺ and the idempotency of XX⁺,

(5.1.9)  Risk β̃ = E(β̃ − β)′X′Ω⁻¹X(β̃ − β) = Eε′X⁺′X′Ω⁻¹XX⁺ε = Eε′Ω⁻¹XX⁺ε = tr[XX⁺ Eεε′ Ω⁻¹] = σ² tr[XX⁺] = σ² tr[X⁺X] = σ² tr I_k = σ²k.

Likewise,

(5.1.10)  Risk β̂_ν = E(β̂_ν − β)′X′Ω⁻¹X(β̂_ν − β)
  = E[β̃ − β − (ν/f)(β̃ − β̄)]′X′Ω⁻¹X[β̃ − β − (ν/f)(β̃ − β̄)]
  = σ²k − 2ν E[(β̃ − β)′X′Ω⁻¹X(β̃ − β̄)/f] + ν² E[(β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄)/f²].

From (5.1.9) and (5.1.3) this yields

(5.1.11)  Risk β̃ − Risk β̂_ν = 2ν E[(ε̃′Ω⁻¹ε̃)(β̃ − β)′X′Ω⁻¹X(β̃ − β̄)/((β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄))] − ν² E[(ε̃′Ω⁻¹ε̃)²/((β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄))].
Now, since β̃ − β = X⁺ε and ε̃ = [I − XX⁺]ε, it follows, since XX⁺Ω is symmetric and X⁺X = I_k, that

E(β̃ − β)ε̃′ = X⁺ Eεε′ [I − XX⁺]′ = σ²X⁺Ω[I − X⁺′X′] = σ²X⁺[I − XX⁺]Ω = 0;

hence β̃ and ε̃, being normally distributed vectors, are independent. Consequently, any functions of these random variables are independent. (5.1.11) therefore becomes:

(5.1.12)  Risk β̃ − Risk β̂_ν = 2ν E(ε̃′Ω⁻¹ε̃) E[(β̃ − β)′X′Ω⁻¹X(β̃ − β̄)/((β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄))] − ν² E(ε̃′Ω⁻¹ε̃)² E[1/((β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄))].

In what follows we show, firstly, that

(5.1.13)  E(ε̃′Ω⁻¹ε̃) = σ²(n − k)

(Lemma 5.1.1 below); secondly, that

(5.1.14)  E(ε̃′Ω⁻¹ε̃)² = σ⁴(n − k)(n − k + 2)

(this follows from the corollary to Lemma 5.2.3 below, since ε̃′Ω⁻¹ε̃/σ² ∼ χ²(n − k)); and thirdly (Lemma 5.2.2 below, the James-Stein lemma) that

(5.1.15)  E[(β̃ − β)′X′Ω⁻¹X(β̃ − β̄)/((β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄))] = (k − 2)σ² E[1/((β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄))].

From (5.1.13), (5.1.14), and (5.1.15) it follows that (5.1.12) becomes

(5.1.16)  Risk β̃ − Risk β̂_ν = σ⁴(n − k)ν[2(k − 2) − ν(n − k + 2)] E[1/((β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄))],

and thus Risk β̂_ν < Risk β̃ if and only if (5.1.7) holds. Minimizing (5.1.16) with respect to ν we obtain (5.1.8), and Risk β̂_ν becomes

(5.1.17)  Risk β̂_min ≡ min_ν Risk β̂_ν = σ²k − σ⁴(n − k)(k − 2)²/(n − k + 2) · E[1/((β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄))].
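The dominance result just proved is easy to check by simulation. The sketch below is not part of the lectures: it assumes the special case Ω = I_n, the zero initial guess β̄ = 0, a simulated Gaussian design, and the risk-minimizing ν; all variable names are illustrative.

```python
import numpy as np

# Monte Carlo check of Theorem 5.1.1 (illustrative; assumes Omega = I_n, beta_bar = 0)
rng = np.random.default_rng(0)
n, k, sigma = 20, 6, 1.0             # the theorem requires k >= 3
X = rng.standard_normal((n, k))      # fixed design, drawn once
beta = 0.1 * np.ones(k)              # true coefficients, close to the guess
beta_bar = np.zeros(k)               # initial guess
nu = (k - 2) / (n - k + 2)           # risk-minimizing nu
XtX = X.T @ X

reps, risk_ols, risk_stein = 4000, 0.0, 0.0
for _ in range(reps):
    y = X @ beta + sigma * rng.standard_normal(n)
    b_tilde = np.linalg.solve(XtX, X.T @ y)            # Gauss-Markoff estimator
    resid = y - X @ b_tilde
    f = (b_tilde - beta_bar) @ XtX @ (b_tilde - beta_bar) / (resid @ resid)
    b_hat = b_tilde - (nu / f) * (b_tilde - beta_bar)  # Stein estimator
    risk_ols += (b_tilde - beta) @ XtX @ (b_tilde - beta)
    risk_stein += (b_hat - beta) @ XtX @ (b_hat - beta)

print(risk_ols / reps)    # close to sigma^2 * k = 6
print(risk_stein / reps)  # smaller, as the theorem asserts
```

With β̄ close to the truth, as here, the improvement is substantial; as β̄ moves away from β the two risks approach each other, but for k ≥ 3 the Stein risk stays below σ²k.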
To form an idea of the likely quantitative relationship between the Stein estimator β̂ and the least-squares estimator β̃, we may relate the f-ratio to the squared multiple correlation coefficient R², which may be written

(5.1.18)  R² = 1 − ε̃′Ω⁻¹ε̃/(y′Ω⁻¹y) = y′Ω⁻¹XX⁺y/(y′Ω⁻¹y).

From the definition (5.1.3) of f we have, for the case β̄ = 0,

(5.1.19)  1/f = y′Ω⁻¹[I − XX⁺]y/(y′Ω⁻¹XX⁺y) = (1 − R²)/R².

Substituting (5.1.19) in (5.1.4), for the value (5.1.8) of ν it becomes

(5.1.20)  β̂ = [1 − (k − 2)/(n − k + 2) · (1 − R²)/R²] β̃.

We see immediately from (5.1.20) that if the regression relation (5.1.1) has a good fit, in the sense that it has a multiple correlation coefficient close to 1 (or, more generally, from (5.1.3) and (5.1.4), if f is large, so that an f-test strongly rejects the null hypothesis that β = β̄), then the Stein estimator will differ very little from the least-squares estimator. On the other hand, if an f-test does not reject this null hypothesis, or if (in the case β̄ = 0) R² is rather low, then the Stein estimator can be expected to differ substantially from the least-squares estimator; however, this is precisely the case in which one may have doubts concerning whether the model (5.1.1) is correctly specified. Thus, Stein estimation is best suited to cases in which one has great confidence in the model specification but insufficient data to give rise to a significant correlation coefficient.

In the following section the fundamental lemmas underlying the above result will be proved. It will be convenient first, however, to reduce the above formulation to the normalized form treated in section 5.2. Let K be a k × k nonsingular matrix such that K′X′Ω⁻¹XK = I_k, and define

(5.1.21)  z = (1/σ)K⁻¹(β̃ − β̄)  and  ζ = Ez = (1/σ)K⁻¹(β − β̄),

so that

(5.1.22)  z − ζ = (1/σ)K⁻¹(β̃ − β) = (1/σ)K⁻¹X⁺ε.

Then

(5.1.23)  z′z = (β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄)/σ²

and

(5.1.24)  z′(z − ζ) = (β̃ − β̄)′X′Ω⁻¹X(β̃ − β)/σ².

Now,

E(z − ζ)(z − ζ)′ = σ⁻²K⁻¹X⁺ Eεε′ X⁺′K⁻¹′ = K⁻¹X⁺ΩX⁺′K⁻¹′ = K⁻¹(X′Ω⁻¹X)⁻¹K⁻¹′ = I_k,
hence z ∼ N(ζ, I). The James-Stein lemma (Lemma 5.2.2 below) states that

(5.1.25)  E[z′(z − ζ)/(z′z)] = (k − 2) E[1/(z′z)].

From (5.1.24) and (5.1.25) we have clearly

(5.1.26)  E[(β̃ − β)′X′Ω⁻¹X(β̃ − β̄)/((β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄))] = E[z′(z − ζ)/(z′z)],

but

(5.1.27)  E[1/((β̃ − β̄)′X′Ω⁻¹X(β̃ − β̄))] = (1/σ²) E[1/(z′z)];

hence from (5.1.25), (5.1.26), and (5.1.27), we obtain (5.1.15) above.

Since the above formula (5.1.13) is elementary, and does not depend on the normality of ε, its derivation is given here:

Lemma 5.1.1. Let ε̃ be defined by (5.1.6); then formula (5.1.13) holds.

Proof. From the idempotency of XX⁺ and the symmetry of Ω⁻¹XX⁺, we have

(5.1.28)  E(ε̃′Ω⁻¹ε̃) = Eε′[I − XX⁺]′Ω⁻¹[I − XX⁺]ε = Eε′Ω⁻¹[I − XX⁺]ε = tr[(I − XX⁺) Eεε′ Ω⁻¹] = σ² tr[I − XX⁺] = σ²(n − k).

5.2. Lemmas underlying Stein's theorem. We now prove the basic lemmas underlying Theorem 5.1.1. Lemma 5.2.1, due to Stein (1974), underlies the James-Stein (1961) lemma (Lemma 5.2.2). Lemma 5.2.3, due to Efron and Morris (1976), provides a corollary which furnishes a derivation of formula (5.1.14). In the statements and proofs of these lemmas we shall follow the convention of denoting random variables by upper-case letters and their realizations by lower-case letters.

Lemma 5.2.1 (Stein). Let Z ∼ N(ζ, 1), and let h : R → R be any absolutely continuous function with Lebesgue-measurable derivative h′ satisfying h(b) − h(a) = ∫ₐᵇ h′(z) dz for all a < b and

(5.2.1)  E|h′(Z)| < ∞.

Then

(5.2.2)  Cov(Z, h(Z)) = E(Z − ζ)h(Z) = Eh′(Z).

Proof. It will be convenient to define x = z − ζ, X = Z − ζ, and g(x) = h(x + ζ). Then X ∼ N(0, 1), and (5.2.2) is equivalent to

(5.2.3)  Eg′(X) = EXg(X),
which will now be proved. From (5.2.1) we have

(5.2.4)  E|g′(X)| = (2π)^(−1/2) ∫_{−∞}^{∞} |g′(x)| e^(−x²/2) dx < ∞.

For 0 < a < x,

(5.2.5)  |g(x) − g(a)| = |∫ₐˣ g′(u) e^(u²/2) e^(−u²/2) du| ≤ e^(x²/2) ∫ₐ^∞ |g′(u)| e^(−u²/2) du,

and from (5.2.4) it follows that for any ε > 0 one can choose a sufficiently large so that

(5.2.6)  ∫ₐ^∞ |g′(u)| e^(−u²/2) du < ε/2,

and x sufficiently large so that

(5.2.7)  |g(a)| e^(−x²/2) < ε/2.

Thus, from (5.2.5), (5.2.6), and (5.2.7), since |g(x)| − |g(a)| ≤ |g(x) − g(a)|,

(5.2.8)  |g(x)| e^(−x²/2) ≤ |g(a)| e^(−x²/2) + ∫ₐ^∞ |g′(u)| e^(−u²/2) du < ε.

Therefore,

(5.2.9)  lim_{x→∞} g(x) e^(−x²/2) = 0.

A similar argument shows that

(5.2.10)  lim_{x→−∞} g(x) e^(−x²/2) = 0.

Now, integrating by parts we obtain

Eg′(X) = (2π)^(−1/2) ∫_{−∞}^{∞} g′(x) e^(−x²/2) dx = (2π)^(−1/2) [g(x) e^(−x²/2)]_{−∞}^{∞} + (2π)^(−1/2) ∫_{−∞}^{∞} g(x) x e^(−x²/2) dx.

The first term on the right vanishes by virtue of (5.2.9) and (5.2.10), and the second term is just EXg(X). This proves (5.2.3).

Stein's basic result follows from the following fundamental lemma due to James and Stein (1961):
Lemma 5.2.2 (James & Stein). Let Z be a k × 1 random vector such that Z ∼ N(ζ, I_k). Then

(5.2.11)  E[Z′(Z − ζ)/(Z′Z)] = (k − 2) E[1/(Z′Z)].

Proof. Let Z ∼ N(ζ, I_k) and define, for z = (z₁, z₂, ..., z_k)′,

h_i(z) = z_i/(z′z).

Denoting Z₍ᵢ₎ = (Z₁, Z₂, ..., Z_{i−1}, Z_{i+1}, ..., Z_k)′, we have by Lemma 5.2.1

E[(Z_i − ζ_i)Z_i/(Z′Z)] = E{E[(Z_i − ζ_i)h_i(Z) | Z₍ᵢ₎]} = E{E[∂h_i(Z)/∂Z_i | Z₍ᵢ₎]} = E[1/(Z′Z) − 2Z_i²/(Z′Z)²].

Thus,

E[(Z − ζ)′Z/(Z′Z)] = Σᵢ E[1/(Z′Z) − 2Z_i²/(Z′Z)²] = k E[1/(Z′Z)] − 2 E[Z′Z/(Z′Z)²] = (k − 2) E[1/(Z′Z)].

The following lemma, due to Efron & Morris (1976), underlies the derivation of formula (5.1.14).

Lemma 5.2.3 (Efron & Morris). Let W be distributed according to the gamma density

(5.2.12)  f(w) = w^(a−1) e^(−w)/Γ(a)  for 0 < w < ∞, a > 0, where Γ(a) = (a − 1)!.

Then, for any absolutely continuous and continuously differentiable function h : R₊ → R such that

(5.2.13)  E|h(W)| < ∞,  E|h′(W)| < ∞,  E|Wh(W)| < ∞,  E|Wh′(W)| < ∞,

we have

(5.2.14)  Cov(W, h(W)) = E(W − EW)h(W) = EWh′(W).

Proof. Integrating by parts, we have

(5.2.15)  EWh′(W) = ∫₀^∞ w f(w) h′(w) dw = [w f(w) h(w)]₀^∞ − ∫₀^∞ [f(w) + w f′(w)] h(w) dw.
The first term on the right vanishes, since E|Wh(W)| = ∫₀^∞ |w h(w)| f(w) dw < ∞ by (5.2.13), hence lim_{w→∞} w h(w) f(w) = 0. From (5.2.12),

(5.2.16)  f(w) + w f′(w) = (1/Γ(a)) w^(a−1) e^(−w) + (w/Γ(a))[(a − 1) w^(a−2) e^(−w) − w^(a−1) e^(−w)] = (1/Γ(a)) w^(a−1) e^(−w) (a − w) = −(w − a) f(w).

Therefore, from (5.2.15) and (5.2.16),

(5.2.17)  EWh′(W) = ∫₀^∞ (w − a) h(w) f(w) dw = E(W − a)h(W).

To establish (5.2.14) it remains only to verify that EW = a. The moment-generating function of (5.2.12) is, for t < 1,

m_W(t) = Ee^(tW) = ∫₀^∞ (1/Γ(a)) w^(a−1) e^(−(1−t)w) dw.

Defining y = (1 − t)w, this may be written

m_W(t) = (1 − t)^(−a) ∫₀^∞ (y^(a−1) e^(−y)/Γ(a)) dy = (1 − t)^(−a) ∫₀^∞ f(y) dy = (1 − t)^(−a).

The mean of f is EW = m′_W(0) = a, as was to be shown.

Corollary to Lemma 5.2.3. Let U be distributed as chi-square with d degrees of freedom. Then EU² = d(d + 2).

Proof. If U ∼ χ²(d) then W = U/2 has the gamma distribution (5.2.12) with mean a = d/2. Defining h(W) = W we have from (5.2.14):

(5.2.18)  EW² = (EW)² + EW = a(a + 1).

Consequently,

(5.2.19)  EU² = 4EW² = 4a(a + 1) = d(d + 2),

as was to be shown.
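Both Stein's identity (5.2.2) and the corollary's moment formula are easy to confirm numerically. The sketch below is not part of the lectures; the choice h(z) = z³ for the identity is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

# Stein's identity (5.2.2) with the illustrative choice h(z) = z**3:
# E (Z - zeta) h(Z) should equal E h'(Z) = 3 E Z**2 = 3 (1 + zeta**2).
zeta = 0.5
z = zeta + rng.standard_normal(N)
lhs = np.mean((z - zeta) * z**3)
rhs = np.mean(3 * z**2)
print(lhs, rhs)           # both close to 3 * (1 + 0.25) = 3.75

# Corollary to Lemma 5.2.3: E U**2 = d(d + 2) for U ~ chi-square(d).
d = 7
u = rng.chisquare(d, N)
print(np.mean(u**2))      # close to 7 * 9 = 63
```

Any h satisfying the integrability conditions (5.2.1) would do in place of the cube; the two printed averages agree up to Monte Carlo error.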
References

Chipman, John S. "Statistical Problems Arising in the Theory of Aggregation," in Paruchuri R. Krishnaiah, ed., Applications of Statistics. Amsterdam: North-Holland Publishing Company, 1977.
Efron, Bradley, and Carl Morris. "Families of Minimax Estimators of the Mean of a Multivariate Normal Distribution," Annals of Statistics, 4 (January 1976), 11–21.
James, W., and Charles Stein. "Estimation with Quadratic Loss," Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol. I (Berkeley and Los Angeles: University of California Press, 1961), 361–379.
Perlman, M. S. "Reduced Mean Square Error Estimation for Several Parameters," Sankhyā [B], 34 (1972).
Sclove, Stanley L. "Improved Estimators for Coefficients in Linear Regression," Journal of the American Statistical Association, 63 (June 1968), 596–606.
Sclove, Stanley L., Carl Morris, and R. Radhakrishnan. "Non-optimality of Preliminary-Test Estimators for the Mean of a Multivariate Normal Distribution," Annals of Mathematical Statistics, 43 (October 1972), 1481–1490.
Stein, Charles. "Inadmissibility of the Usual Estimator for the Mean of a Multivariate Normal Distribution," Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Vol. I (Berkeley and Los Angeles: University of California Press, 1956), 197–206.
Stein, Charles. "Multiple Regression," in Ingram Olkin (ed.), Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling (Stanford, California: Stanford University Press, 1960).
Stein, Charles. "An Approach to the Recovery of Interblock Information in Balanced Incomplete Block Designs," in F. N. David (ed.), Research Papers in Statistics: Festschrift for J. Neyman (New York: John Wiley & Sons, 1966).
Stein, Charles. "Estimation of the Mean of a Multivariate Normal Distribution," in Jaroslav Hájek (ed.), Proceedings of the Prague Symposium on Asymptotic Statistics, 3–6 September 1973, Vol. II (Prague: Charles University, 1974). (Previously issued as Technical Report No. 48, June 26, 1973, Department of Statistics, Stanford University, Stanford, California.)
Theil, Henri. Economic Forecasts and Policy, 2nd edition. Amsterdam: North-Holland Publishing Co., 1961.
Wald, Abraham. Statistical Decision Functions. New York: John Wiley & Sons, 1950.
1 Biostatistics 533 Classical Theory of Linear Models Spring 2007 Final Exam Name: Problems do not have equal value and some problems will take more time than others. Spend your time wisely. You do not
More informationBasic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = The errors are uncorrelated with common variance:
8. PROPERTIES OF LEAST SQUARES ESTIMATES 1 Basic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = 0. 2. The errors are uncorrelated with common variance: These assumptions
More informationMathematics for Economics ECON MA/MSSc in Economics-2017/2018. Dr. W. M. Semasinghe Senior Lecturer Department of Economics
Mathematics for Economics ECON 53035 MA/MSSc in Economics-2017/2018 Dr. W. M. Semasinghe Senior Lecturer Department of Economics MATHEMATICS AND STATISTICS LERNING OUTCOMES: By the end of this course unit
More informationBayesian Estimation of Regression Coefficients Under Extended Balanced Loss Function
Communications in Statistics Theory and Methods, 43: 4253 4264, 2014 Copyright Taylor & Francis Group, LLC ISSN: 0361-0926 print / 1532-415X online DOI: 10.1080/03610926.2012.725498 Bayesian Estimation
More informationSOME ASPECTS OF MULTIVARIATE BEHRENS-FISHER PROBLEM
SOME ASPECTS OF MULTIVARIATE BEHRENS-FISHER PROBLEM Junyong Park Bimal Sinha Department of Mathematics/Statistics University of Maryland, Baltimore Abstract In this paper we discuss the well known multivariate
More informationROBUST - September 10-14, 2012
Charles University in Prague ROBUST - September 10-14, 2012 Linear equations We observe couples (y 1, x 1 ), (y 2, x 2 ), (y 3, x 3 ),......, where y t R, x t R d t N. We suppose that members of couples
More informationUnderstanding Regressions with Observations Collected at High Frequency over Long Span
Understanding Regressions with Observations Collected at High Frequency over Long Span Yoosoon Chang Department of Economics, Indiana University Joon Y. Park Department of Economics, Indiana University
More informationEstimation, admissibility and score functions
Estimation, admissibility and score functions Charles University in Prague 1 Acknowledgements and introduction 2 3 Score function in linear model. Approximation of the Pitman estimator 4 5 Acknowledgements
More informationEmpirical Power of Four Statistical Tests in One Way Layout
International Mathematical Forum, Vol. 9, 2014, no. 28, 1347-1356 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/imf.2014.47128 Empirical Power of Four Statistical Tests in One Way Layout Lorenzo
More informationConsistency of test based method for selection of variables in high dimensional two group discriminant analysis
https://doi.org/10.1007/s42081-019-00032-4 ORIGINAL PAPER Consistency of test based method for selection of variables in high dimensional two group discriminant analysis Yasunori Fujikoshi 1 Tetsuro Sakurai
More informationMULTIVARIATE PROBABILITY DISTRIBUTIONS
MULTIVARIATE PROBABILITY DISTRIBUTIONS. PRELIMINARIES.. Example. Consider an experiment that consists of tossing a die and a coin at the same time. We can consider a number of random variables defined
More informationFinite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product
Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )
More informationPhD Qualifying Examination Department of Statistics, University of Florida
PhD Qualifying xamination Department of Statistics, University of Florida January 24, 2003, 8:00 am - 12:00 noon Instructions: 1 You have exactly four hours to answer questions in this examination 2 There
More informationPart 1.) We know that the probability of any specific x only given p ij = p i p j is just multinomial(n, p) where p k1 k 2
Problem.) I will break this into two parts: () Proving w (m) = p( x (m) X i = x i, X j = x j, p ij = p i p j ). In other words, the probability of a specific table in T x given the row and column counts
More informationTest Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics
Test Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics The candidates for the research course in Statistics will have to take two shortanswer type tests
More information114 A^VÇÚO 1n ò where y is an n 1 random vector of observations, X is a known n p matrix of full column rank, ε is an n 1 unobservable random vector,
A^VÇÚO 1n ò 1Ï 2015c4 Chinese Journal of Applied Probability and Statistics Vol.31 No.2 Apr. 2015 Optimal Estimator of Regression Coefficient in a General Gauss-Markov Model under a Balanced Loss Function
More informationAnswers to Problem Set #4
Answers to Problem Set #4 Problems. Suppose that, from a sample of 63 observations, the least squares estimates and the corresponding estimated variance covariance matrix are given by: bβ bβ 2 bβ 3 = 2
More informationSome General Types of Tests
Some General Types of Tests We may not be able to find a UMP or UMPU test in a given situation. In that case, we may use test of some general class of tests that often have good asymptotic properties.
More informationLecture 15. Hypothesis testing in the linear model
14. Lecture 15. Hypothesis testing in the linear model Lecture 15. Hypothesis testing in the linear model 1 (1 1) Preliminary lemma 15. Hypothesis testing in the linear model 15.1. Preliminary lemma Lemma
More informationPerformance of the 2shi Estimator Under the Generalised Pitman Nearness Criterion
University of Wollongong Research Online Faculty of Business - Economics Working Papers Faculty of Business 1999 Performance of the 2shi Estimator Under the Generalised Pitman Nearness Criterion T. V.
More informationEstimation of the Mean Vector in a Singular Multivariate Normal Distribution
CIRJE--930 Estimation of the Mean Vector in a Singular Multivariate Normal Distribution Hisayuki Tsukuma Toho University Tatsuya Kubokawa The University of Tokyo April 2014 CIRJE Discussion Papers can
More informationMaster s Written Examination
Master s Written Examination Option: Statistics and Probability Spring 016 Full points may be obtained for correct answers to eight questions. Each numbered question which may have several parts is worth
More informationLinear Model Under General Variance
Linear Model Under General Variance We have a sample of T random variables y 1, y 2,, y T, satisfying the linear model Y = X β + e, where Y = (y 1,, y T )' is a (T 1) vector of random variables, X = (T
More informationTesting Hypothesis. Maura Mezzetti. Department of Economics and Finance Università Tor Vergata
Maura Department of Economics and Finance Università Tor Vergata Hypothesis Testing Outline It is a mistake to confound strangeness with mystery Sherlock Holmes A Study in Scarlet Outline 1 The Power Function
More informationGeneralized Multivariate Rank Type Test Statistics via Spatial U-Quantiles
Generalized Multivariate Rank Type Test Statistics via Spatial U-Quantiles Weihua Zhou 1 University of North Carolina at Charlotte and Robert Serfling 2 University of Texas at Dallas Final revision for
More informationSpace Telescope Science Institute statistics mini-course. October Inference I: Estimation, Confidence Intervals, and Tests of Hypotheses
Space Telescope Science Institute statistics mini-course October 2011 Inference I: Estimation, Confidence Intervals, and Tests of Hypotheses James L Rosenberger Acknowledgements: Donald Richards, William
More information