Setting the RHS to zero,
$$0 = d_\theta(\theta^*) + \frac{\partial^2 L}{\partial\theta\,\partial\theta'}(\theta^*)\,(\theta - \theta^*),$$
so that
$$\theta - \theta^* = -\left[\frac{\partial^2 L}{\partial\theta\,\partial\theta'}(\theta^*)\right]^{-1} d_\theta(\theta^*) = -H_{\theta\theta}(\theta^*)^{-1}\, d_\theta(\theta^*),$$
where $H_{\theta\theta}$ is the Hessian matrix, $d_\theta$ is the derivative (score) vector, and $\theta$ is the solution of this set of equations.

[Figure 1: The Newton-Raphson Algorithm]

Rewriting the above,
$$\theta = \theta^* - H_{\theta\theta}(\theta^*)^{-1}\, d_\theta(\theta^*).$$
This turns out to be an updating formula for the estimates of $\theta$. Notice that if $\theta^*$ already solves the first-order condition, then $d_\theta(\theta^*) = 0$ and thus $\theta = \theta^*$. This suggests that if $d_\theta(\theta^*) \neq 0$, we iterate the formula
$$\theta = \theta^* - H_{\theta\theta}(\theta^*)^{-1}\, d_\theta(\theta^*).$$
The updating sequence does not terminate until $d_\theta(\theta) \approx 0$. More generally,
$$\hat\theta^{(n)} = \hat\theta^{(n-1)} - H_{\theta\theta}(\hat\theta^{(n-1)})^{-1}\, d_\theta(\hat\theta^{(n-1)}),$$
where $\hat\theta^{(n)}$ is the estimate of $\theta$ at the end of the $n$th iteration.$^5$

$^5$ In our notation, $\hat\theta^{(0)} = \theta^*$ and $\hat\theta^{(1)} = \theta$.
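As a minimal sketch of this updating formula (not from the original notes), the scalar case can be coded directly. The exponential-rate example below is assumed for illustration: with average log likelihood $L(\lambda) = \log\lambda - \lambda\bar{x}$, the score is $d_\lambda = 1/\lambda - \bar{x}$, the Hessian is $H = -1/\lambda^2$, and the MLE is $1/\bar{x}$.

```python
def newton_raphson(score, hessian, theta0, tol=1e-10, max_iter=100):
    """Iterate theta <- theta - H(theta)^(-1) * d(theta) (scalar case)."""
    theta = theta0
    for _ in range(max_iter):
        step = score(theta) / hessian(theta)  # H^{-1} d in one dimension
        theta -= step
        if abs(step) < tol:                   # stop once d_theta(theta) ~ 0
            break
    return theta

# Assumed example: exponential-rate MLE with sample mean xbar = 2.0,
# so the maximizer of L(lam) = log(lam) - lam*xbar is 1/xbar = 0.5.
xbar = 2.0
lam_hat = newton_raphson(
    score=lambda lam: 1.0 / lam - xbar,   # d_theta
    hessian=lambda lam: -1.0 / lam**2,    # H_theta_theta
    theta0=0.3,
)
# lam_hat is (numerically) 1/xbar = 0.5
```

Each pass is exactly the update $\hat\theta^{(n)} = \hat\theta^{(n-1)} - H^{-1} d$, and the iteration stops once the step, and hence the score, is essentially zero.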
2. Scoring

This method replaces the Hessian matrix with the Fisher information matrix. Denote the information matrix for sample size $T$ by $F_{\theta\theta,T}$. Because
$$F_{\theta\theta,T} = -E\!\left(\frac{\partial^2 L}{\partial\theta\,\partial\theta'}\right),$$
it suggests replacing $(-H_{\theta\theta})$ by $F_{\theta\theta,T}$. In other words, the updating formula becomes
$$\hat\theta^{(n)} = \hat\theta^{(n-1)} + F_{\theta\theta,T}(\hat\theta^{(n-1)})^{-1}\, d_\theta(\hat\theta^{(n-1)}).$$
Why should we do this? Sometimes $F_{\theta\theta,T}$ is easier to compute than the Hessian, because there are usually fewer elements to compute (for example, $F_{\beta\sigma^2,T}(\theta) = 0$ in the previous regression model) and because it exploits what we know about the problem under study.

There is a variant of scoring$^6$ that deserves mention. Note that
$$F_{\theta\theta,T} = T\,E\!\left(\frac{\partial \ell_t}{\partial\theta}\frac{\partial \ell_t}{\partial\theta'}\right).$$
With the LLN, we would expect
$$\frac{1}{T}\sum_t \frac{\partial \ell_t}{\partial\theta}(\hat\theta^{(n-1)})\,\frac{\partial \ell_t}{\partial\theta'}(\hat\theta^{(n-1)}) \xrightarrow{\ p\ } F_{\theta\theta}(\theta),$$
where $\ell_t = \ln f_t$.$^7$ The above computational proposal simply shows that econometrics is not just computer science: taking advantage of the information at hand makes the problem easier to solve.

Now we look at an important case, the nonlinear regression model
$$y_t = g(x_t,\beta) + u_t, \qquad u_t \sim \mathrm{nid}(0,\sigma^2),$$
where the $x_t$ are assumed to be fixed. The (average) log likelihood is
$$L(y_1,\dots,y_T \mid \theta) = -\frac{1}{2}\log 2\pi - \frac{1}{2}\log\sigma^2 - \frac{1}{2T\sigma^2}\sum_t \big(y_t - g(x_t,\beta)\big)^2.$$
A bit of calculation leads to
$$\frac{\partial L}{\partial\beta} = \frac{1}{T\sigma^2}\sum_t \big(y_t - g(x_t,\beta)\big)\frac{\partial g(x_t,\beta)}{\partial\beta}, \qquad
\frac{\partial L}{\partial\sigma^2} = -\frac{1}{2\sigma^2} + \frac{1}{2\sigma^4 T}\sum_t \big(y_t - g(x_t,\beta)\big)^2,$$
$$\frac{\partial^2 L}{\partial\beta\,\partial\beta'} = -\frac{1}{\sigma^2 T}\sum_t \frac{\partial g(x_t,\beta)}{\partial\beta}\frac{\partial g(x_t,\beta)}{\partial\beta'} + \frac{1}{T\sigma^2}\sum_t \big(y_t - g(x_t,\beta)\big)\frac{\partial^2 g(x_t,\beta)}{\partial\beta\,\partial\beta'},$$

$^6$ This is how LIMDEP, an econometric software package, does it.
$^7$ The reason for dividing by $T$ instead of multiplying by $T$ is that we have multiplied by $1/T$ in the log likelihood.
$$\frac{\partial^2 L}{\partial\beta\,\partial\sigma^2} = -\frac{1}{T\sigma^4}\sum_t \big(y_t - g(x_t,\beta)\big)\frac{\partial g(x_t,\beta)}{\partial\beta}, \qquad
\frac{\partial^2 L}{\partial\sigma^2\,\partial\sigma^2} = \frac{1}{2\sigma^4} - \frac{1}{\sigma^6 T}\sum_t \big(y_t - g(x_t,\beta)\big)^2.$$
Take expectations to get $F_{\theta\theta,T}$, where
$$F_{\beta\beta,T} = \frac{1}{T\sigma^2}\sum_t \frac{\partial g(x_t,\beta)}{\partial\beta}\frac{\partial g(x_t,\beta)}{\partial\beta'}, \qquad F_{\beta\sigma^2,T} = 0, \qquad F_{\sigma^2\sigma^2,T} = \frac{1}{2\sigma^4}.$$
The first relation is due to
$$E\!\left(\big(y_t - g(x_t,\beta)\big)\frac{\partial^2 g(x_t,\beta)}{\partial\beta\,\partial\beta'}\right) = \frac{\partial^2 g(x_t,\beta)}{\partial\beta\,\partial\beta'}\,E\big(y_t - g(x_t,\beta)\big) = 0.$$
The scoring algorithm is
$$\begin{bmatrix} \hat\beta_n \\ \hat\sigma^2_n \end{bmatrix} = \begin{bmatrix} \hat\beta_{(n-1)} \\ \hat\sigma^2_{(n-1)} \end{bmatrix} + \begin{bmatrix} F_{\beta\beta,T}(\hat\theta^{(n-1)}) & F_{\beta\sigma^2,T}(\hat\theta^{(n-1)}) \\ F_{\sigma^2\beta,T}(\hat\theta^{(n-1)}) & F_{\sigma^2\sigma^2,T}(\hat\theta^{(n-1)}) \end{bmatrix}^{-1} \begin{bmatrix} \frac{\partial L}{\partial\beta}(\hat\theta^{(n-1)}) \\ \frac{\partial L}{\partial\sigma^2}(\hat\theta^{(n-1)}) \end{bmatrix}.$$
Because $F_{\beta\sigma^2,T}(\hat\theta^{(n-1)}) = 0 = F_{\sigma^2\beta,T}(\hat\theta^{(n-1)})$, we have
$$\begin{aligned}
\hat\beta_n &= \hat\beta_{(n-1)} + F_{\beta\beta,T}^{-1}(\hat\theta^{(n-1)})\,\frac{\partial L}{\partial\beta}(\hat\theta^{(n-1)}) \\
&= \hat\beta_{(n-1)} + \left[\frac{1}{T\hat\sigma^2_{(n-1)}}\sum_t \hat g_{t\beta}\,\hat g_{t\beta}'\right]^{-1}\left[\frac{1}{T\hat\sigma^2_{(n-1)}}\sum_t \big(y_t - g(x_t,\hat\beta_{(n-1)})\big)\,\hat g_{t\beta}\right] \\
&= \hat\beta_{(n-1)} + \left[\sum_t \hat g_{t\beta}\,\hat g_{t\beta}'\right]^{-1}\sum_{t=1}^T \big(y_t - g(x_t,\hat\beta_{(n-1)})\big)\,\hat g_{t\beta} \\
&= \hat\beta_{(n-1)} + \left[\sum_t z_t z_t'\right]^{-1}\sum_t z_t\big(y_t - g(x_t,\hat\beta_{(n-1)})\big),
\end{aligned}$$
where $\hat g_{t\beta} = \partial g(x_t,\beta)/\partial\beta$ evaluated at $\hat\beta_{(n-1)}$, which we denote $z_t$. A regression interpretation emerges from this derivation: $[\sum_t z_t z_t']^{-1}\sum_t z_t(y_t - g(x_t,\hat\beta_{(n-1)}))$ is the vector of estimated coefficients from the regression of $y_t - g(x_t,\hat\beta_{(n-1)})$ on $z_t$.$^8$ In general, the updating procedure is an iterative least squares estimation. This is the so-called Gauss-Newton algorithm. In this case, ML estimation amounts to nonlinear regression estimation, provided the problem can be formulated in the way seen before. The Gauss-Newton algorithm is one of the simplest and most effective ways of maximizing the likelihood.

7 Wald, LM, and LR tests: the trinity

One of the major goals of econometric exercises is to draw inferences from observed data. Based on the MLE results, there are generally three important testing strategies we can undertake.

$^8$ To see where this interpretation comes from, carefully compare the simple regression model $y_t = x_t'\beta + e_t$, with $\hat\beta_{OLS} = (\sum x_t x_t')^{-1}(\sum x_t y_t)$, to the nonlinear regression, where $y_t - g(x_t,\hat\beta_{(n-1)})$ plays the role of the dependent variable $y_t$, and $z_t$ that of the regressors $x_t$.
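Before turning to testing, the Gauss-Newton update $\hat\beta_n = \hat\beta_{(n-1)} + [\sum z_t z_t']^{-1}\sum z_t(y_t - g(x_t,\hat\beta_{(n-1)}))$ derived above can be sketched numerically. The scalar model $g(x,\beta) = e^{\beta x}$, the simulated data, and the starting value are all assumed for illustration.

```python
import numpy as np

# Assumed illustration: scalar nonlinear regression y_t = exp(beta*x_t) + u_t.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
beta_true = 1.5
y = np.exp(beta_true * x) + 0.01 * rng.standard_normal(x.size)

def gauss_newton(beta, n_iter=25):
    """Iterative least squares: regress current residuals on z_t = dg/dbeta."""
    for _ in range(n_iter):
        g = np.exp(beta * x)            # g(x_t, beta) at the current estimate
        z = x * g                       # z_t = dg(x_t, beta)/dbeta
        resid = y - g                   # y_t - g(x_t, beta)
        beta += (z @ resid) / (z @ z)   # OLS coefficient of resid on z
    return beta

beta_hat = gauss_newton(0.5)            # converges near beta_true = 1.5
```

Each pass is exactly one OLS regression of the residuals on the derivative $z_t$, which is the iterative least squares interpretation noted above.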
They differ from one another in whether the restrictions under test are taken into account when forming the statistic. In what follows, we present them in order.

Suppose we would like to test whether the true parameters $\theta_0$ satisfy the restrictions
$$H_0: R\theta_0 = r.$$
To have a concrete idea of what $R$ and $r$ look like, suppose we want to test whether the production function is Cobb-Douglas, where the parameters should meet the constraint $\alpha_0 + \beta_0 = 1$. In correspondence with this example,
$$R = [1,\ 1], \qquad \theta_0 = \begin{bmatrix} \alpha_0 \\ \beta_0 \end{bmatrix}, \qquad r = 1.$$
More generally, $R$ can be a matrix and $r$ a vector when more than one constraint is jointly under test.

7.1 Wald test

The Wald test does not use information about the null hypothesis when forming the statistic. Suppose $\hat\theta$ is an estimate of $\theta_0$, possibly obtained by maximum likelihood. If the data are really drawn from the null, we should expect $\hat\theta$ to be not much different from the true value $\theta_0$. As a result, under the null hypothesis being tested, we expect
$$R\hat\theta - r \approx 0, \quad \text{since } R\theta_0 = r \text{ and } \hat\theta \approx \theta_0.$$
This is a quantity we can use to discriminate the null hypothesis from its alternative, because if the data are not drawn from the null, $R\hat\theta - r$ will differ from 0; the larger the difference, the stronger the evidence against the null. The question now is how to employ this notion to construct a statistic that is powerful against the alternative. To do this with the Wald test, note that
$$R\hat\theta - r = R\hat\theta - R\theta_0 + R\theta_0 - r = R(\hat\theta - \theta_0),$$
where under $H_0$, $R\theta_0 - r = 0$. The quantity $R(\hat\theta - \theta_0)$ is, however, a random variable with some distribution. To investigate what that distribution is, observe the asymptotic normality of the MLE:
$$T^{1/2}(\hat\theta - \theta_0) \xrightarrow{\ d\ } N\big(0,\ F_{\theta\theta}^{-1}(\theta_0)\big).$$
Since $R(\hat\theta - \theta_0)$ is just a linear transformation of $\hat\theta - \theta_0$, it is straightforward to show that
$$T^{1/2} R(\hat\theta - \theta_0) \xrightarrow{\ d\ } N\big(0,\ R F_{\theta\theta}^{-1}(\theta_0) R'\big),$$
which gives the distribution we desired.
It is natural to form the statistic
$$W \equiv T\,(R\hat\theta - r)'\big(R F_{\theta\theta}^{-1}(\theta_0) R'\big)^{-1}(R\hat\theta - r) \xrightarrow{\ d\ } \chi^2\big(\dim(r)\big).$$
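As a numerical sketch (all numbers assumed for illustration, with the information matrix evaluated at the estimates rather than at the unknown $\theta_0$), the Cobb-Douglas restriction $\alpha + \beta = 1$ can be tested as:

```python
import numpy as np

# Assumed numbers: unrestricted estimates and information matrix for
# testing H0: alpha + beta = 1, i.e. R theta = r with R = [1, 1], r = 1.
T = 200                                   # sample size (assumed)
theta_hat = np.array([0.65, 0.38])        # unrestricted MLE (assumed)
F = np.array([[4.0, 1.0],                 # F_theta_theta,T(theta_hat) (assumed)
              [1.0, 2.0]])
R = np.array([[1.0, 1.0]])
r = np.array([1.0])

diff = R @ theta_hat - r                  # R theta_hat - r
cov = R @ np.linalg.inv(F) @ R.T          # R F^{-1} R'
W = T * float(diff @ np.linalg.solve(cov, diff))
# Here W = 0.315, below the 5% chi-square(1) critical value 3.84,
# so this (assumed) sample would not reject H0.
```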
The asymptotic result comes from the facts that $T^{1/2}R(\hat\theta - \theta_0)$ is normal and that $R F_{\theta\theta}^{-1}(\theta_0) R'$ is the scaling covariance.$^9$ It should be emphasized that in the Wald statistic the parameters of concern, $\hat\theta$, are estimated without using the information in the restrictions of the null: it is the unrestricted MLE. The Wald statistic as written is in fact infeasible, because $F_{\theta\theta}^{-1}(\theta_0)$ involves the unknown parameters $\theta_0$. The test can be made feasible by replacing $\theta_0$ with its consistent counterpart$^{10}$ $\hat\theta$, so the statistic takes the form
$$W \equiv T\,(R\hat\theta - r)'\big(R F_{\theta\theta,T}^{-1}(\hat\theta) R'\big)^{-1}(R\hat\theta - r).$$
The statistic behaves quite differently under the alternative hypothesis, where the constraint $R\theta_0 - r = 0$ does not hold: $R\hat\theta - r$ is then not close to 0, and the Wald statistic has asymptotically a non-central $\chi^2$ distribution.

7.2 LM test

The major difference between the Lagrange multiplier (LM) test and the Wald test lies in whether the parameter estimates used are unrestricted or restricted. In the construction of the LM test, the parameters are estimated using the information in the restrictions of the null. In this sense, the LM statistic has the lowest computational cost among the three test statistics under study. The restricted MLE is computed as follows. To solve the constrained maximization problem
$$\max_\theta\ L(\theta) \quad \text{subject to} \quad R\theta = r,$$
we form the Lagrangian
$$\mathcal{L} = L(\theta) + \lambda'(R\theta - r),$$
where $\lambda$ is the Lagrange multiplier; the LM test is a statistic based on this quantity. The first-order conditions are
$$\frac{\partial\mathcal{L}}{\partial\theta} = d_\theta(\theta) + R'\lambda = 0, \qquad \frac{\partial\mathcal{L}}{\partial\lambda} = R\theta - r = 0.$$
Let $(\tilde\theta, \tilde\lambda)$ be the solutions, where $\tilde\lambda$ is an estimate of the Lagrange multiplier, so that
$$d_\theta(\tilde\theta) + R'\tilde\lambda = 0.$$

$^9$ Recall that if $X \sim N_j(\mu, \Omega)$, then $(X - \mu)'\Omega^{-1}(X - \mu) \sim \chi^2(j)$.
$^{10}$ The unrestricted MLE is consistent under both the null and the alternative.
Now under the null, where the restrictions $R\theta - r = 0$ are valid, imposing the restrictions should change the likelihood only a little. This implies that the Lagrange multiplier $\tilde\lambda$ should be very small, and thus
$$d_\theta(\tilde\theta) \approx 0 \quad \text{if} \quad \tilde\lambda \approx 0.$$
Thus, testing whether $\lambda = 0$ is equivalent to testing whether $d_\theta(\tilde\theta) = 0$; but testing whether $\lambda = 0$ is really testing whether the restrictions imposed on the estimation are correct. We can employ $d_\theta(\tilde\theta) = 0$ to build a test statistic. The LM statistic takes the form
$$LM \equiv T\,d_\theta(\tilde\theta)'\,F_{\theta\theta,T}^{-1}(\tilde\theta)\,d_\theta(\tilde\theta) \xrightarrow{\ d\ } \chi^2\big(\dim(R)\big).$$
Again, under $H_0$ the statistic converges in distribution to a $\chi^2$ distribution, as for the Wald test, with degrees of freedom $\dim(R)$.

A few observations about this statistic. First, it is natural to ask why $F_{\theta\theta,T}$ is used as the scaling factor. To answer this, simply note that
$$V(d_\theta) = E(d_\theta d_\theta') = F_{\theta\theta,T}.$$
Since $\theta$ is unknown, in practice replacing $\theta$ by $\tilde\theta$ does the job. Second, why is there a $T$ in front of the LM statistic? This is because we work with the average score in the maximization problem, i.e., $d_\theta = \frac{1}{T}\frac{\partial \ln L^*}{\partial\theta}$. Furthermore, because the observations are independent, it is easy to obtain the result that $T^{1/2} d_\theta \xrightarrow{\ d\ } N(0, F_{\theta\theta})$. Collecting these arguments, the asymptotic $\chi^2$ distribution of the LM statistic is well expected.

7.3 LR test

The LR test is another intuitive test statistic. It involves information from both the restricted ML estimation and the unrestricted one. Before spelling out the statistic, first note that the ratio of the restricted likelihood to the unrestricted likelihood should be close to 1 under the null. In notation,
$$\lambda = \frac{L^*(\tilde\theta)}{L^*(\hat\theta)},$$
where $\tilde\theta$ and $\hat\theta$ are, respectively, the restricted and unrestricted estimates. The reason this ratio is close to 1 under the null is that the restricted and unrestricted estimates are of similar magnitude under the null; we discussed this notion previously. Therefore, under $H_0$, $\tilde\theta \approx \hat\theta$, and thus $\log\lambda \approx 0$. But to do inference, we need to know the distribution of the LR test statistic.
Fortunately, such a result exists: under $H_0$,
$$-2\log\lambda = 2\big[\log L^*(\hat\theta) - \log L^*(\tilde\theta)\big] = 2T\big[L(\hat\theta) - L(\tilde\theta)\big] \xrightarrow{\ d\ } \chi^2\big(\dim(R)\big).$$
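As a minimal sketch (a Bernoulli example assumed for illustration, not from the notes): to test $H_0: p = 0.5$ with average log likelihood $L(p) = \bar{x}\log p + (1-\bar{x})\log(1-p)$, the unrestricted MLE is $\hat p = \bar{x}$ and the restricted estimate is $\tilde p = 0.5$, so the LR statistic is computed directly from the two likelihood values.

```python
import numpy as np

# Assumed data summary: T observations of 0/1 data with sample mean xbar.
T, xbar = 100, 0.58

def avg_loglik(p):
    """Average Bernoulli log likelihood L(p)."""
    return xbar * np.log(p) + (1.0 - xbar) * np.log(1.0 - p)

# LR = 2T [L(theta_hat) - L(theta_tilde)], asymptotically chi-square(1) under H0
LR = 2.0 * T * (avg_loglik(xbar) - avg_loglik(0.5))
# Here LR is about 2.57, below 3.84, so H0: p = 0.5 is not rejected at 5%.
```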
The LR test probably involves more computational cost than the other two tests, because computing the statistic requires performing both the unrestricted and the restricted ML estimation first.

Asymptotically, the three tests (Wald, LM, and LR) are equivalent. But for a given data set they differ from one another in small samples. Specifically, we can obtain the ordering
$$LM \le LR \le Wald.$$
That is, the LM test is the most conservative and the Wald test the most liberal. A figure illustrating the differences among the three tests is given below.

[Figure 2: Three Asymptotically Equivalent Tests]

8 Non-linear restrictions

We now switch attention to the case of testing nonlinear restrictions. An example of a nonlinear restriction is
$$\beta_1\beta_2 - \beta_3 = 0,$$
in contrast to the linear restriction $\beta_1 + \beta_2 = 1$ seen before. While nonlinear restrictions may appear quite difficult to deal with, testing for them turns out to be similar
to the procedure in the linear case. In general, we are interested in testing
$$H_0: \phi(\theta_0) = 0,$$
where $\phi(\cdot)$ is a known function. Though nonlinear, $\phi$ can be linearized by Taylor expansion. Suppose $\hat\theta$ is the unrestricted ML estimate. Expanding $\phi(\hat\theta)$ about $\theta_0$,
$$\phi(\hat\theta) = \phi(\theta_0) + \frac{\partial\phi}{\partial\theta'}(\theta_0)\,(\hat\theta - \theta_0) + \frac{1}{2}\,(\hat\theta - \theta_0)'\,\frac{\partial^2\phi}{\partial\theta\,\partial\theta'}(\theta_0)\,(\hat\theta - \theta_0) + \cdots.$$
A bit of calculation leads to
$$T^{1/2}\big(\phi(\hat\theta) - \phi(\theta_0)\big) = \frac{\partial\phi}{\partial\theta'}(\theta_0)\,T^{1/2}(\hat\theta - \theta_0) + \frac{1}{2}\,(\hat\theta - \theta_0)'\,\frac{\partial^2\phi}{\partial\theta\,\partial\theta'}(\theta_0)\,T^{1/2}(\hat\theta - \theta_0) + \cdots,$$
where $\phi(\theta_0) = 0$ under the null. Note that the second term on the RHS is asymptotically negligible, because $T^{1/2}(\hat\theta - \theta_0) \xrightarrow{\ d\ }$ a normal distribution while $(\hat\theta - \theta_0) \xrightarrow{\ p\ } 0$; any higher-order terms are asymptotically negligible by the same token. Therefore, asymptotically (in large samples) under $H_0$,
$$T^{1/2}\phi(\hat\theta) \approx \frac{\partial\phi}{\partial\theta'}(\theta_0)\,T^{1/2}(\hat\theta - \theta_0) \equiv R\,T^{1/2}(\hat\theta - \theta_0),$$
where $R = \frac{\partial\phi}{\partial\theta'}(\theta_0)$ is the first-derivative matrix, whose elements are constants. The statistic now has the same expression as in the linear case, except that $R$ is the matrix of first derivatives with respect to the parameters, evaluated at $\theta_0$. Naturally, the Wald test is computed as
$$W = T^{1/2}\phi(\hat\theta)'\,\mathrm{var}\big(\phi(\theta_0)\big)^{-1}\,T^{1/2}\phi(\hat\theta) \approx T\,\phi(\hat\theta)'\big(\hat R\,F_{\theta\theta,T}^{-1}(\hat\theta)\,\hat R'\big)^{-1}\phi(\hat\theta),$$
where the second expression is obtained by replacing the unknown $\theta_0$ with the consistent estimate $\hat\theta$; correspondingly, $\hat R = \frac{\partial\phi}{\partial\theta'}(\hat\theta)$.

The discussion so far has concentrated on the Wald test. How, then, do we calculate the LM and LR tests under nonlinear restrictions? Because these two tests do not utilize the difference between the true parameters and their estimated counterparts to construct the statistic, calculating them remains the same except that the restrictions of concern are nonlinear.
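The linearized Wald statistic for a nonlinear restriction can be sketched numerically. All numbers below are assumed for illustration, using the example restriction $\phi(\theta) = \theta_1\theta_2 - \theta_3 = 0$ with $\hat R$ the gradient of $\phi$ evaluated at $\hat\theta$.

```python
import numpy as np

# Assumed numbers for W = T * phi(theta_hat)' (R F^{-1} R')^{-1} phi(theta_hat)
T = 150
theta_hat = np.array([0.5, 0.4, 0.18])      # unrestricted estimates (assumed)
F = np.diag([2.0, 2.0, 4.0])                # information matrix (assumed)

phi = theta_hat[0] * theta_hat[1] - theta_hat[2]      # phi(theta_hat)
R_hat = np.array([theta_hat[1], theta_hat[0], -1.0])  # dphi/dtheta' at theta_hat
var = R_hat @ np.linalg.inv(F) @ R_hat                # R F^{-1} R' (scalar here)
W = T * phi**2 / var
# Under H0, W is asymptotically chi-square(1); here W is about 0.13.
```

Since $\phi$ is scalar in this example, $\hat R$ is a row vector and the quadratic form collapses to a ratio; with a vector-valued $\phi$, `var` becomes a matrix and a linear solve replaces the division.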
More informationAssociation studies and regression
Association studies and regression CM226: Machine Learning for Bioinformatics. Fall 2016 Sriram Sankararaman Acknowledgments: Fei Sha, Ameet Talwalkar Association studies and regression 1 / 104 Administration
More information1.5 Testing and Model Selection
1.5 Testing and Model Selection The EViews output for least squares, probit and logit includes some statistics relevant to testing hypotheses (e.g. Likelihood Ratio statistic) and to choosing between specifications
More informationCentral Limit Theorem ( 5.3)
Central Limit Theorem ( 5.3) Let X 1, X 2,... be a sequence of independent random variables, each having n mean µ and variance σ 2. Then the distribution of the partial sum S n = X i i=1 becomes approximately
More informationsimple if it completely specifies the density of x
3. Hypothesis Testing Pure significance tests Data x = (x 1,..., x n ) from f(x, θ) Hypothesis H 0 : restricts f(x, θ) Are the data consistent with H 0? H 0 is called the null hypothesis simple if it completely
More informationMaximum-Likelihood Estimation: Basic Ideas
Sociology 740 John Fox Lecture Notes Maximum-Likelihood Estimation: Basic Ideas Copyright 2014 by John Fox Maximum-Likelihood Estimation: Basic Ideas 1 I The method of maximum likelihood provides estimators
More informationFor iid Y i the stronger conclusion holds; for our heuristics ignore differences between these notions.
Large Sample Theory Study approximate behaviour of ˆθ by studying the function U. Notice U is sum of independent random variables. Theorem: If Y 1, Y 2,... are iid with mean µ then Yi n µ Called law of
More informationPh.D. Qualifying Exam Friday Saturday, January 3 4, 2014
Ph.D. Qualifying Exam Friday Saturday, January 3 4, 2014 Put your solution to each problem on a separate sheet of paper. Problem 1. (5166) Assume that two random samples {x i } and {y i } are independently
More informationIntroductory Econometrics
Based on the textbook by Wooldridge: : A Modern Approach Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna November 23, 2013 Outline Introduction
More informationTheory of Maximum Likelihood Estimation. Konstantin Kashin
Gov 2001 Section 5: Theory of Maximum Likelihood Estimation Konstantin Kashin February 28, 2013 Outline Introduction Likelihood Examples of MLE Variance of MLE Asymptotic Properties What is Statistical
More informationLikelihood-based inference with missing data under missing-at-random
Likelihood-based inference with missing data under missing-at-random Jae-kwang Kim Joint work with Shu Yang Department of Statistics, Iowa State University May 4, 014 Outline 1. Introduction. Parametric
More informationA Primer on Asymptotics
A Primer on Asymptotics Eric Zivot Department of Economics University of Washington September 30, 2003 Revised: October 7, 2009 Introduction The two main concepts in asymptotic theory covered in these
More informationTesting and Model Selection
Testing and Model Selection This is another digression on general statistics: see PE App C.8.4. The EViews output for least squares, probit and logit includes some statistics relevant to testing hypotheses
More informationGeneralized Linear Models
Generalized Linear Models Lecture 3. Hypothesis testing. Goodness of Fit. Model diagnostics GLM (Spring, 2018) Lecture 3 1 / 34 Models Let M(X r ) be a model with design matrix X r (with r columns) r n
More informationStatistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach
Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Jae-Kwang Kim Department of Statistics, Iowa State University Outline 1 Introduction 2 Observed likelihood 3 Mean Score
More informationAnswers to Problem Set #4
Answers to Problem Set #4 Problems. Suppose that, from a sample of 63 observations, the least squares estimates and the corresponding estimated variance covariance matrix are given by: bβ bβ 2 bβ 3 = 2
More informationLoglikelihood and Confidence Intervals
Stat 504, Lecture 2 1 Loglikelihood and Confidence Intervals The loglikelihood function is defined to be the natural logarithm of the likelihood function, l(θ ; x) = log L(θ ; x). For a variety of reasons,
More informationEmpirical Likelihood
Empirical Likelihood Patrick Breheny September 20 Patrick Breheny STA 621: Nonparametric Statistics 1/15 Introduction Empirical likelihood We will discuss one final approach to constructing confidence
More informationFunctional Form. Econometrics. ADEi.
Functional Form Econometrics. ADEi. 1. Introduction We have employed the linear function in our model specification. Why? It is simple and has good mathematical properties. It could be reasonable approximation,
More informationECONOMETRICS II (ECO 2401S) University of Toronto. Department of Economics. Spring 2013 Instructor: Victor Aguirregabiria
ECONOMETRICS II (ECO 2401S) University of Toronto. Department of Economics. Spring 2013 Instructor: Victor Aguirregabiria SOLUTION TO FINAL EXAM Friday, April 12, 2013. From 9:00-12:00 (3 hours) INSTRUCTIONS:
More informationInference about the Indirect Effect: a Likelihood Approach
Discussion Paper: 2014/10 Inference about the Indirect Effect: a Likelihood Approach Noud P.A. van Giersbergen www.ase.uva.nl/uva-econometrics Amsterdam School of Economics Department of Economics & Econometrics
More informationModels, Testing, and Correction of Heteroskedasticity. James L. Powell Department of Economics University of California, Berkeley
Models, Testing, and Correction of Heteroskedasticity James L. Powell Department of Economics University of California, Berkeley Aitken s GLS and Weighted LS The Generalized Classical Regression Model
More informationNonconcave Penalized Likelihood with A Diverging Number of Parameters
Nonconcave Penalized Likelihood with A Diverging Number of Parameters Jianqing Fan and Heng Peng Presenter: Jiale Xu March 12, 2010 Jianqing Fan and Heng Peng Presenter: JialeNonconcave Xu () Penalized
More informationECE 275A Homework 7 Solutions
ECE 275A Homework 7 Solutions Solutions 1. For the same specification as in Homework Problem 6.11 we want to determine an estimator for θ using the Method of Moments (MOM). In general, the MOM estimator
More information