ECON 4160, Autumn term Lecture 1


1 ECON 4160, Autumn term. Lecture 1
a) Maximum likelihood based inference
b) The bivariate normal model
Ragnar Nymoen, University of Oslo, 24 August

2 Principles of inference I
Ordinary least squares (OLS) and the method of moments (MM) are classical principles of estimation and inference. They lead to estimators, t-values and F-statistics that are themselves random variables, with known properties given the assumptions of the statistical model. In econometrics, the assumptions of the statistical model often appear in the disturbance term of a regression equation. In this lecture we motivate another classic: maximum likelihood estimation (MLE), and the likelihood ratio (LR) test principle.

3 Principles of inference II
MLE and LR are introduced with the aid of models from basic statistics/econometrics courses. The second main topic is the introduction of the bivariate normal model, which will be a main reference for the rest of the course.

4 References for Lecture 1
Background knowledge at the level of our Review of key concepts note.
HN: Ch 1 and 2 (Bernoulli model), Ch 3 (first regression model/location model), Ch 5 (2-variable regression), Ch 4 (logit model/logistic regression), Ch 10.2 (bivariate normal model).
BN (2011): Ch. 4.5, 4.6, 4.8 and Ch 4.A; Ch 6.1.

5 Ch 1 and 2 in HN: Bernoulli model I
Assume that the purpose is to estimate the probability that a newborn child is a girl or a boy. That population probability is the parameter of interest. To be of any relevance, the statistical model must contain the parameter of interest (directly or as a derived parameter). The Bernoulli model meets that requirement. Let Y_i denote the sex of child i and consider the n random variables {Y_1, Y_2, ..., Y_n}. The model assumptions are:
1. Independence: Y_1, Y_2, ..., Y_n are mutually statistically independent;
2. Identical distribution: all children are drawn from the same distribution;

6 Ch 1 and 2 in HN: Bernoulli model II
3. Bernoulli distribution: Y_i ~ Bernoulli[θ], with θ = P(Y_i = 1), where 1 indicates success (a girl);
4. Parameter space: 0 < θ < 1. The book also writes θ ∈ Θ = (0, 1); the symbol for the parameter space is Θ.
The parameter of interest is θ.

7 Ch 1 and 2 in HN: The likelihood function of the Bernoulli model I
How probable are different outcomes y_1, y_2, ..., y_n of Y_1, Y_2, ..., Y_n for different values of θ? To answer this question we use the model specified by assumptions 1-4. Write the probability density function (pdf) of the Bernoulli distribution as:
f_θ(y_i) = θ^(y_i) (1 − θ)^(1 − y_i), for y_i = 1 or 0,
hence:
P(Y_i = 1) = f_θ(1) = θ^1 (1 − θ)^0 = θ
P(Y_i = 0) = f_θ(0) = θ^0 (1 − θ)^1 = 1 − θ.

8 Ch 1 and 2 in HN: The likelihood function of the Bernoulli model II
With n = 2, the joint pdf becomes:
f_θ(y_1, y_2) = [θ^(y_1) (1 − θ)^(1 − y_1)] [θ^(y_2) (1 − θ)^(1 − y_2)] = ∏_{i=1}^{2} θ^(y_i) (1 − θ)^(1 − y_i)

9 Ch 1 and 2 in HN: The likelihood function of the Bernoulli model III
And in general:
f_θ(y_1, y_2, ..., y_n) = ∏_{i=1}^{n} f_θ(y_i) = ∏_{i=1}^{n} θ^(y_i) (1 − θ)^(1 − y_i)
= θ^(Σ_{i=1}^{n} y_i) (1 − θ)^(Σ_{i=1}^{n} (1 − y_i)) = θ^(nȳ) (1 − θ)^(n(1 − ȳ)) = {θ^(ȳ) (1 − θ)^(1 − ȳ)}^n
For known θ, we can calculate the joint density f_θ(y_1, y_2, ..., y_n) for any value of the average ȳ = n^(−1) Σ_{i=1}^{n} y_i.

10 Ch 1 and 2 in HN: The likelihood function of the Bernoulli model IV
When we use maximum likelihood estimation (MLE), the premises are turned around: the aim is to find the most likely value of θ given the outcomes of the random variables Y_1, Y_2, ..., Y_n that we can observe. Define the likelihood function:
L_{Y_1,Y_2,...,Y_n}(θ) = f_θ(Y_1, Y_2, ..., Y_n) = {θ^(Ȳ) (1 − θ)^(1 − Ȳ)}^n (1)
Y_i has replaced y_i to indicate that the likelihood function is based on the random variables that represent the data. In (1), θ is the argument of the function; n and Ȳ are held fixed.

11 Ch 1 and 2 in HN: The likelihood function of the Bernoulli model V
Since the likelihood depends on the random variables only through Ȳ, we say that Ȳ is a sufficient statistic for θ.

12 Ch 1 and 2 in HN: The log likelihood function of the Bernoulli model I
The natural logarithm of (1) is called the log-likelihood function:
ℓ_{Y_1,Y_2,...,Y_n}(θ) = ln[{θ^(Ȳ) (1 − θ)^(1 − Ȳ)}^n] (2)
= n ln{θ^(Ȳ) (1 − θ)^(1 − Ȳ)}
= n[Ȳ ln(θ) + (1 − Ȳ) ln(1 − θ)] (3)
The ML estimator θ̂ is given by the first order condition for a maximum of the log-likelihood function:
n(Ȳ/θ̂ − (1 − Ȳ)/(1 − θ̂)) = 0 (4)

13 Ch 1 and 2 in HN: The log likelihood function of the Bernoulli model II
We obtain the MLE of θ as:
θ̂ = Ȳ. (5)
DIY exercise 1.1: Why is it true that this θ̂ achieves a maximum, and not a minimum?
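The closed-form result (5) is easy to check numerically. A minimal sketch (not from HN; the sample share 0.486 and sample size are made up for illustration) maximizes the log-likelihood (3) over a grid of θ-values and recovers the sample mean:

```python
import math

def bernoulli_loglik(theta, ybar, n):
    # l(theta) = n * [ybar*ln(theta) + (1 - ybar)*ln(1 - theta)], eq. (3)
    return n * (ybar * math.log(theta) + (1 - ybar) * math.log(1 - theta))

ybar, n = 0.486, 1000                    # hypothetical sample share and size
grid = [i / 1000 for i in range(1, 1000)]  # interior of the parameter space
theta_grid = max(grid, key=lambda t: bernoulli_loglik(t, ybar, n))
print(theta_grid)  # the grid maximizer coincides with the sample mean, as (5) predicts
```

The grid search agrees with the analytical solution θ̂ = Ȳ because the log-likelihood is strictly concave in θ.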

14 Ch 1 and 2 in HN: Example
Statistics Norway, data for 2016: … girl babies and … boys. Hence:
θ̂ = … (6)
very close to the estimate for the UK in 2004 according to HN p. 10. The maximum value of the log-likelihood function:
max_{θ ∈ Θ_U} ℓ_{Y_1,...,Y_n}(θ) = n[θ̂ ln(θ̂) + (1 − θ̂) ln(1 − θ̂)] = …

15 Ch 1 and 2 in HN
θ̂ in (5) is the unrestricted MLE of θ, obtained from the unrestricted model, where the likelihood is maximized over the whole parameter space that defines the model; we refer to it as Θ_U = (0, 1). If the parameter space of θ is restricted, we say that we analyse the restricted model. If the parameter space is restricted to a single point, e.g. Θ_R = {0.5}, the restricted log-likelihood is:
max_{θ ∈ Θ_R} ℓ_{Y_1,...,Y_n}(θ) = ℓ(0.5) = ln[{0.5^(Ȳ) (1 − 0.5)^(1 − Ȳ)}^n] (7)

16 Ch 1 and 2 in HN
HN define the log-likelihood ratio test statistic:
LR = −2[max_{θ ∈ Θ_R} ℓ_{Y_1,...,Y_n}(θ) − max_{θ ∈ Θ_U} ℓ_{Y_1,...,Y_n}(θ)] (8)
= 2[max_{θ ∈ Θ_U} ℓ_{Y_1,...,Y_n}(θ) − max_{θ ∈ Θ_R} ℓ_{Y_1,...,Y_n}(θ)]
which takes non-negative values, LR ≥ 0. The closer LR is to zero, the more likely it is that the restricted model is acceptable, and that H_0 cannot be rejected.

17 Ch 1 and 2 in HN: Inference in the Bernoulli model I
Several of you will know the Bernoulli distribution as the binomial distribution. You will also remember that a natural test statistic for the hypothesis testing situation
H_0: θ = θ_0 against H_1: θ < θ_0
is
Z = (θ̂ − θ_0) / √(θ_0(1 − θ_0)/n),

18 Ch 1 and 2 in HN: Inference in the Bernoulli model II
and that if nθ(1 − θ) > 10 the distribution of Z is well approximated by the standard normal distribution N(0, 1) (de Moivre's theorem):
Z ≈ N(0, 1)
Hence, to test H_0: θ = 0.5 against H_1: θ < 0.5 with the Norwegian 2016 data, we check how probable it is to observe a Z-value as extreme as 7.75 (in absolute value) in a N(0, 1) distribution. We get P(Z > 7.75) = 0.000, which is the p-value of the test.
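As a sketch of this normal-approximation calculation (with made-up values of θ̂ and n, not the slide's data), the Z statistic and its lower-tail p-value need only the standard library, since the normal cdf can be written with the complementary error function:

```python
import math

def z_stat(theta_hat, theta0, n):
    # Z = (theta_hat - theta0) / sqrt(theta0*(1 - theta0)/n)
    return (theta_hat - theta0) / math.sqrt(theta0 * (1 - theta0) / n)

def p_lower_tail(z):
    # P(Z < z) for a standard normal variable, via erfc
    return 0.5 * math.erfc(-z / math.sqrt(2))

z = z_stat(0.486, 0.5, 60_000)   # hypothetical numbers for illustration
print(z, p_lower_tail(z))        # a large negative Z and a tiny p-value
```

With these assumed inputs the p-value is far below any conventional significance level, mirroring the slide's conclusion.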

19 Ch 1 and 2 in HN: Inference in the Bernoulli model III
Hence we can choose the significance level of the test (the Type-I error probability) as low as we want, and still formally reject H_0. How do we relate this to the log-likelihood ratio test statistic? A lengthy derivation in HN, which we do not need to follow in any detail, shows that:
LR →D [(θ̂_U − θ_R) / √(θ_R(1 − θ_R)/n)]² (9)
which reads: LR is asymptotically distributed as the square of the centred and standardized random variable θ̂. But this is identical to Z above; we have only added the subscripts U and R for the Unrestricted and Restricted log-likelihoods.

20 Ch 1 and 2 in HN: Inference in the Bernoulli model IV
Hence:
(θ̂_U − θ_R) / √(θ_R(1 − θ_R)/n) →D N(0, 1) (10)
and, using a well known result,
LR →D χ²(1),
i.e. the log-likelihood ratio test statistic has an approximate distribution which is Chi-squared with one degree of freedom.
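Since χ²(1) is the distribution of Z² for a standard normal Z, its upper-tail probability can be computed directly with the complementary error function. A small sketch (the critical value 3.84 is the familiar 5% cut-off, included just to check the function):

```python
import math

def chi2_1_pvalue(x):
    # P(chi2(1) > x) = P(Z^2 > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x/2))
    return math.erfc(math.sqrt(x / 2))

print(chi2_1_pvalue(3.84))   # approximately 0.05, the usual 5% critical value
print(chi2_1_pvalue(122.0))  # essentially zero, as in the Norwegian-data example
```

This is the p-value calculation used for the LR test on the next slide.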

21 Ch 1 and 2 in HN: Inference in the Bernoulli model V
To perform the test on the Norwegian data, we use (8) with numbers:
max_{θ ∈ Θ_U} ℓ_{Y_1,...,Y_n}(θ) = …
max_{θ ∈ Θ_R} ℓ_{Y_1,...,Y_n}(θ) = −40850
LR = 2(… − (−40850)) = 122.0
and the p-value is zero for all practical purposes: P(LR > 122) = 0.000.

22 Ch 1 and 2 in HN: Summing up maximum likelihood in the Bernoulli model (Chapters 1 and 2 in HN)
A statistical model is relevant if it contains a parameter of interest. The statistical model makes it possible for us to calculate the maximized value of the log-likelihood function, and the maximum likelihood estimator (MLE) of the parameter of interest. The restricted and unrestricted log-likelihoods can be used to construct a Chi-squared distributed test statistic: the (log) likelihood ratio test.

23 Ch 1 and 2 in HN: About the exercises to Ch 1 and 2 in HN
Ch 1: Since we use the Bernoulli model mainly to introduce MLE, maybe do not dwell too much on the exercises here. But Exercise 1.2 might be fun to work with.
Ch 2: Same remark, but Exercise 2.3 is useful! Many of the other theoretical exercises are either already covered by the warm-up question set, or seem too intricate (Exercises 2.4, 2.9 and 2.11), unless you are particularly interested.

24 Purpose: Estimating the location parameter I
Ch. 3 in HN ("A first regression model"). The statistical model is:
1. Independence: Y_1, Y_2, ..., Y_n are mutually statistically independent;
2. Identical distribution: Y_1, ..., Y_n have identical distributions;
3. Normal distribution: Y_i ~ N(β, σ²);
4. Parameter space: β is a real number (β ∈ R) and σ² > 0.
It is usual to call β a location parameter and σ² a scale parameter, cf. Figure 3.2 and the discussion on page 32 in HN. Unlike Ch. 1 and 2, we now have a model equation, namely:
Y_i = β + ε_i (11)

25 Purpose: Estimating the location parameter II
DIY exercise 1.2: What are the properties of the random variable ε_i in (11)?

26 The likelihood function and ML estimators I
With exactly the same motivation as for the Bernoulli model, we want to write down the likelihood function of Y_1, ..., Y_n, but for a different statistical distribution!
DIY exercise 1.3: Follow the steps on page 35 in HN to convince yourself that the log-likelihood function is
ℓ_{Y_1,Y_2,...,Y_n}(β, σ²) = −(n/2) log(2πσ²) − (1/(2σ²)) Σ_{i=1}^{n} (Y_i − β)², π ≈ 3.14 (12)
In this case, the likelihood can be maximized in two steps: first with respect to β, and second with respect to σ².

27 The likelihood function and ML estimators II
From the first order condition (foc) of (12) with respect to β, which is the same as the foc for the minimum of the sum of squared deviations, we obtain the MLE of β:
β̂ = Ȳ (13)
which is identical to the OLS estimator (and to the method of moments estimator). The concentrated likelihood for σ² is
ℓ_{Y_1,Y_2,...,Y_n}(β̂, σ²) = −(n/2) log(2πσ²) − (1/(2σ²)) Σ_{i=1}^{n} ε̂_i² (14)
where ε̂_i = Y_i − β̂.

28 The likelihood function and ML estimators III
From the foc we get:
σ̂² = (1/n) Σ_{i=1}^{n} ε̂_i² (15)
which is the MLE of the scale parameter σ².
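A minimal numerical sketch of (13) and (15), with an assumed toy sample (not the HN wage data):

```python
# Assumed illustrative observations of Y_i
y = [4.1, 5.3, 4.8, 5.9, 5.0, 4.6]
n = len(y)

beta_hat = sum(y) / n                        # MLE of beta, eq. (13): the sample mean
resid = [yi - beta_hat for yi in y]          # residuals e_i = Y_i - beta_hat
sigma2_hat = sum(e * e for e in resid) / n   # MLE of sigma^2, eq. (15)
print(beta_hat, sigma2_hat)
```

Note the 1/n divisor in the scale estimate; the unbiased variant with 1/(n − 1) appears later in the same spirit as the regression case.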

29 Inference about the location parameter I
Read 3.4 in HN to review your knowledge about inference about β. Specifically:
Approximate inference, using the standard normal distribution with reference to the CLT.
Exact inference: keeping Y_i ~ N(β, σ²), inference by the t(n − 1) distribution.
Robustness (3.4.3): the reliability of the inference depends on how valid and relevant the assumptions of the model are for the data, in this case log transformed wage data for US individuals.

30 LR test for the location parameter I
US wage data in HN Ch 3 as example. Denote the value of β that we have a hypothesis about by β_R, and the unrestricted MLE by β̂_U. The expression for the LR test of the hypothesis represented by β_R is:
LR = 2[max_{Θ_U} ℓ_{Y_1,...,Y_n}(β, σ²) − max_{Θ_R} ℓ_{Y_1,...,Y_n}(β, σ²)]
= 2[−(1/(2σ²)) Σ_{i=1}^{n}(Y_i − β̂_U)² + (1/(2σ²)) Σ_{i=1}^{n}(Y_i − β_R)²]
= (1/σ²)[Σ_{i=1}^{n}(Y_i − β_R)² − Σ_{i=1}^{n}(Y_i − β̂_U)²]
(the −(n/2) log(2πσ²) terms cancel because σ² is treated as fixed).

31 LR test for the location parameter II
US wage data in HN Ch 3 as example. To be operational we need to decide which number to use for σ². We suggest using the unrestricted estimate. From the data set we obtain:
Σ_{i=1}^{n}(Y_i − 4)² = …
Σ_{i=1}^{n}(Y_i − β̂_U)² = Σ_{i=1}^{n}(Y_i − 5.02)² = …
σ²_U = (0.7531)²

32 LR test for the location parameter III
US wage data in HN Ch 3 as example.
LR = (1/(0.7531)²)[Σ_{i=1}^{n}(Y_i − 4)² − Σ_{i=1}^{n}(Y_i − 5.02)²] = …
with p-value = 0 in the χ²(1)-distribution.

33 About exercises to Ch 3 in HN
Unless you are very interested in the statistical theory, Exercises 3.4, 3.5 and 3.12 can be dropped.
Exercise 3.3: Note that this exercise extends the model by one deterministic X-variable, so it is the model with a deterministic regressor that many of you will have seen.
Exercise 3.9: Take note of the point about weaker assumptions, even if you do not want to work with the proofs.

34 Partial modelling I
Ch 5 in HN. We can say that in the location model, Y is regressed on a constant. But the true nature of regression, as a partial model of the statistical system consisting of Y and X, of course requires that there is random variation also in X. When we specify a regression model, the purpose of the investigation is to estimate the parameters of the conditional density of one of the variables given the other.

35 Partial modelling II
We say that our parameters of interest are in the conditional distribution of Y given X, even though the population distribution is given by a joint probability density function f(x_i, y_i):
f(x_i, y_i) = f(y_i | x_i) · f(x_i) (16)
(joint pdf = conditional pdf × marginal pdf)
Since f(y_i | x_i) is a perfectly valid statistical distribution, we are clearly allowed to focus only on f(y_i | x_i), leaving f(x_i), the marginal pdf, unspecified, at least to begin with. Modelling f(y_i | x_i) is clearly a partial model compared to a full model of f(x_i, y_i), which implies a multi-equation model.

36 Partial modelling III
DIY exercise 1.4: Define independence between Y and X. What does independence imply for f(y_i | x_i)? What about f(x_i | y_i)?
The expectation associated with the probability density function f(y_i | x_i) is the conditional expectation of Y given x. When the distribution of the system, i.e. f(x_i, y_i), is bivariate normal, it is a fact that f(y_i | x_i) is a normal pdf and that the conditional expectation is a linear function of X.

37 Regression model specification I
The statistical model is:
1. Independence: the pairs (Y_1, X_1), (Y_2, X_2), ..., (Y_n, X_n) are mutually independent;
2. Conditional normality: Y_i | X_i ~ N(β_1 + β_2 X_i, σ²);
3. Exogeneity: the conditioning variable X_i is exogenous;
4. Parameter space: (β_1, β_2) ∈ R² and σ² > 0.
A central new concept here is exogeneity. Heuristically: if we can estimate the parameters of interest without representing (or estimating) the marginal distribution, we say that the explanatory variable is exogenous. Later we will distinguish between weak and strong exogeneity, and we will also introduce super-exogeneity.

38 Regression model specification II
But for the time being, we just concentrate on the intuitive idea that exogeneity represents the case where we do not throw away any useful information by only considering the conditional distribution. The model equation for the regression model is
Y_i = β_1 + β_2 X_i + ε_i (17)
where
ε_i := Y_i − E(Y_i | X_i) (18)
DIY exercise 1.5: What are the statistical properties of ε_i?

39 MLE of the Gaussian regression model I
The likelihood function is constructed in the same manner as in the location model, because we condition on the X-values. The expression (12) is replaced by
ℓ_{Y_1,Y_2,...,Y_n}(β_1, β_2, σ²) = −(n/2) log(2πσ²) − (1/(2σ²)) Σ_{i=1}^{n}(Y_i − β_1 − β_2 X_i)² (19)
meaning that we find the MLEs of β_1 and β_2 by minimizing the sum of squares in the second term. What does that imply for the equivalence between the ML and OLS estimators of β_1 and β_2? The MLE of σ² is σ̂² = (1/n) Σ_{i=1}^{n} ε̂_i², with ε̂_i = Y_i − β̂_1 − β̂_2 X_i.
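A minimal sketch of the ML/OLS formulas for (19), with made-up data (not the HN wage sample), showing both the biased ML variance estimate and its unbiased 1/(n − 2) variant:

```python
# Assumed illustrative data for a 2-variable regression
x = [10, 12, 12, 14, 16, 16, 18]
y = [4.2, 4.6, 4.5, 5.0, 5.4, 5.3, 5.9]
n = len(x)

xbar, ybar = sum(x) / n, sum(y) / n
b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
     sum((xi - xbar) ** 2 for xi in x)            # slope: ML estimate = OLS estimate
b1 = ybar - b2 * xbar                             # intercept
resid = [yi - b1 - b2 * xi for xi, yi in zip(x, y)]
sigma2_ml = sum(e * e for e in resid) / n         # biased MLE of sigma^2
sigma2_unb = sum(e * e for e in resid) / (n - 2)  # unbiased version
print(b1, b2, sigma2_ml, sigma2_unb)
```

The residuals sum to zero and are orthogonal to the regressor, which is the sample counterpart of the first order conditions of (19).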

40 MLE of the Gaussian regression model II
As you will be aware, the MLE of σ² is biased. The unbiased estimator replaces 1/n by 1/(n − 2).

41 Reparameterization of models I
Chapter 5.2 makes the point, which you know from elementary econometrics, that OLS estimation on the original data requires the solution of two simultaneous equations in the two unknowns β̂_1 and β̂_2. A well known trick is to add and subtract β_2 X̄ on the right hand side of the model equation (17). This gives a re-parameterized model where the slope parameter is the same as in (17), but with a new intercept, denoted δ_1 in HN. The first order conditions for ML/OLS in this model are two equations that can be solved separately for δ̂_1 and β̂_2; see Lecture note 1 for a more direct argument than the one in HN.

42 Reparameterization of models II
The two estimators δ̂_1 and β̂_2 are uncorrelated ("orthogonalized"), while the estimators β̂_1 and β̂_2 are correlated. In the rest of the course we refer to a re-parameterization as an operation on the model equation that changes the parameters without changing the statistical properties of the disturbance. Orthogonalization of variables means replacing the original variables in a model by variables that are uncorrelated. Orthogonalization can sometimes be attained by reparameterization, but it will often affect the properties of the disturbance, and therefore the statistical model. Care must be taken!
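The decoupling of the first order conditions can be sketched numerically. With the regressor centred, Y_i = δ_1 + β_2(X_i − X̄) + ε_i, each foc can be solved on its own (the data below are assumed for illustration):

```python
# Assumed toy data
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 2.9, 4.2, 4.8, 6.1]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# Re-parameterized equation: Y_i = delta_1 + beta_2*(X_i - xbar) + eps_i
delta1 = ybar                                             # its foc involves delta_1 only
beta2 = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)                 # its foc involves beta_2 only
beta1 = delta1 - beta2 * xbar                             # recover the original intercept
print(delta1, beta2, beta1)
```

Because the centred regressor sums to zero, neither normal equation contains the other unknown, which is exactly why δ̂_1 and β̂_2 are orthogonalized.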

43 Inference in the simple regression model I
All you know about estimation and testing holds as before! But we have a new interpretation of the OLS estimators β̂_1 and β̂_2: they are ML estimators under the assumptions of the Gaussian regression model above. And in addition to the t-value, we can use the LR test statistic.

44 US wage data example I
Let Y represent log-wage as above, and let X be the length of schooling (the variable educ in the data set on the HN website). OLS estimation gives:
β̂_1 = …
β̂_2 = …
σ̂² = (…)²
with β̂_1 and β̂_2 as unrestricted ML estimates (and where we have used the unbiased estimate for σ²). The LR test for the hypothesis β_2 = β_{2R} = 0:
LR = 2(… − …) = …

45 US wage data example II
Confirming what the t-value in the regression output would show: educ is a significant regressor.

46 The logit model I
Ch 4 in HN. Not a very central model in our course, but it is included here since it illustrates that when the parameter of interest is the probability of a dichotomous dependent variable, Y = 1 (have job) or Y = 0 (no job), and the success probability p varies with X, the log-likelihood function is still well defined (it is given in HN). But unlike the other models in this lecture, the first order conditions have no analytical solution. The MLE is then obtained by numerical maximization, which is however easily done with good statistical software.
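A sketch of what "numerical maximization" means here: Newton-Raphson on the logit log-likelihood for a single regressor, with made-up toy data. This is an illustration of the principle, not HN's implementation:

```python
import math

def logit_mle(x, y, iters=25):
    """Newton-Raphson for P(Y=1|x) = 1/(1 + exp(-(b1 + b2*x))).
    Toy single-regressor sketch; statistical software does the same numerically."""
    b1, b2 = 0.0, 0.0
    for _ in range(iters):
        # Score (gradient) and information matrix of the log-likelihood
        g1 = g2 = h11 = h12 = h22 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b1 + b2 * xi)))
            g1 += yi - p
            g2 += (yi - p) * xi
            w = p * (1 - p)
            h11 += w; h12 += w * xi; h22 += w * xi * xi
        det = h11 * h22 - h12 * h12
        # Newton step: add (information matrix)^{-1} times the score
        b1 += ( h22 * g1 - h12 * g2) / det
        b2 += (-h12 * g1 + h11 * g2) / det
    return b1, b2

# Assumed toy data: "success" becomes more likely as x grows
x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [0, 0, 0, 1, 0, 1, 1, 1]
b1, b2 = logit_mle(x, y)
print(b1, b2)
```

At convergence the score is (numerically) zero, which is the first order condition that has no closed-form solution.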

47 Variables and systems
The inclusion of exogeneity of X_i among the assumptions of the regression model is a sign that even when we are working with single-equation models, we need system thinking. We now introduce the binormal model as a framework for treating the system perspective in a statistical context. In HN, the bivariate normal model is in 10.2, which can be read while still bypassing the rest of Chapter 10.

48 The binormal distribution I
We can indicate that the pair of random variables (Y_i, X_i) has a bivariate normal distribution by writing:
(Y_i, X_i)' ~ N(μ, Σ), with μ = (μ_Y, μ_X)' and Σ = [σ²_Y, σ_XY; σ_XY, σ²_X] (20)
where μ is the vector with the expectations E(Y_i) = μ_Y and E(X_i) = μ_X as elements, and Σ is the covariance matrix, with the variances along the principal diagonal and the covariance off the diagonal.

49 The binormal distribution II
The joint pdf is given as equation (10.2.1) in HN (with slightly different symbols). The pdf is defined for positive variances, and for
−1 < ρ_XY < 1 (21)
where ρ_XY is the population correlation coefficient:
ρ_XY = σ_XY / √(σ²_X σ²_Y) = σ_XY / (σ_X σ_Y)
where σ_X and σ_Y are the two standard deviations.

50 The binormal distribution III
A property of the binormal distribution is that the conditional distribution of Y_i given X_i is normal:
(Y_i | X_i) ~ N(μ_{Y|X_i}, σ²)
with parameters:
μ_{Y|X} = E(Y_i | X_i) = β_1 + β_2 X_i (22)
β_1 = μ_Y − β_2 μ_X (23)
β_2 = σ_XY / σ²_X (24)
σ² = Var(Y_i | X_i) = σ²_Y (1 − ρ²_XY) (25)
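The mapping (22)-(25) from the joint parameters to the conditional ones is a small computation. A sketch with assumed binormal parameters (the numbers are invented for illustration):

```python
import math

# Assumed binormal parameters: means, variances and covariance
mu_y, mu_x = 5.0, 12.0
var_y, var_x, cov_xy = 0.64, 9.0, 1.8

beta2 = cov_xy / var_x                      # eq. (24)
beta1 = mu_y - beta2 * mu_x                 # eq. (23)
rho = cov_xy / math.sqrt(var_x * var_y)     # population correlation
sigma2 = var_y * (1 - rho ** 2)             # eq. (25)
print(beta1, beta2, sigma2)
```

Note that σ² = σ²_Y(1 − ρ²) can equivalently be written σ²_Y − β_2 σ_XY, which the sketch confirms numerically.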

51 The bivariate normal model I
We can write the binormal distribution in terms of model equations by defining new variables:
(ε_Yi, ε_Xi)' := (Y_i − μ_Y, X_i − μ_X)' ~ N(0, Σ), with Σ = [σ²_Y, σ_XY; σ_XY, σ²_X]
giving the model in terms of two marginal model equations:
Y_i = μ_Y + ε_Yi, ε_Yi ~ N(0, σ²_Y)
X_i = μ_X + ε_Xi, ε_Xi ~ N(0, σ²_X)
with Cov(ε_Yi, ε_Xi) = σ_XY.

52 The bivariate normal model II
We can represent the same statistical system by first defining a new random variable ɛ_i:
ɛ_i := Y_i − μ_{Y|X_i} (26)
and then formulating a 2-variable model in terms of a conditional equation and a marginal equation:
Y_i = β_1 + β_2 X_i + ɛ_i, ɛ_i ~ N(0, σ²) (27)
X_i = μ_X + ε_Xi, ε_Xi ~ N(0, σ²_X) (28)
and
Cov(ɛ_i, ε_Xi) = 0 (29)

53 The bivariate normal model III
Intuitively (29) must be true, because ɛ_i is what remains after we have extracted all correlation between ε_Xi and Y_i by conditioning on X_i (and ε_Xi is one-for-one with X_i). A formal argument: note first that
E(ɛ_i | X_i) = E(Y_i − E(Y_i | X_i) | X_i) = E(Y_i | X_i) − E(Y_i | X_i) = 0
and that the marginal model for X_i implies that E(ɛ_i | X_i) ≡ E(ɛ_i | ε_Xi). Finally, therefore, by the law of iterated expectations,
E(ɛ_i ε_Xi) = E(ε_Xi E(ɛ_i | ε_Xi)) = 0, and since E(ɛ_i) = 0, Cov(ɛ_i, ε_Xi) = E(ɛ_i ε_Xi) = 0
showing (29).
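The argument can be illustrated by simulation: draw from the two marginal equations with correlated disturbances, construct ɛ_i from (26)-(27), and check that its sample covariance with ε_Xi is (close to) zero. All parameter values below are assumed for illustration:

```python
import math, random

random.seed(42)
n = 100_000
mu_y, mu_x = 5.0, 12.0
sd_y, sd_x, rho = 0.8, 3.0, 0.75               # assumed binormal parameters

# Simulate the two marginal equations with correlated disturbances
eps_x = [random.gauss(0, sd_x) for _ in range(n)]
eps_y = [rho * (sd_y / sd_x) * ex + random.gauss(0, sd_y * math.sqrt(1 - rho**2))
         for ex in eps_x]
x = [mu_x + ex for ex in eps_x]
y = [mu_y + ey for ey in eps_y]

# Conditional-model disturbance, eq. (26): eps_i = Y_i - beta1 - beta2*X_i
beta2 = rho * sd_y / sd_x                       # = sigma_XY / sigma_X^2, eq. (24)
beta1 = mu_y - beta2 * mu_x                     # eq. (23)
eps = [yi - beta1 - beta2 * xi for xi, yi in zip(x, y)]

# Sample covariance with the marginal disturbance (both have mean ~0)
cov = sum(e * ex for e, ex in zip(eps, eps_x)) / n
print(cov)
```

Because the conditional expectation is correctly specified, the covariance is zero in the population, and the sample value is only sampling noise.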

54 The bivariate normal model IV
Extremely important takeaway: the error term of the conditional model is uncorrelated with the disturbance of the marginal model, and with the regressor. Note that in the formal argument we have not used the properties of the binormal distribution. In fact, Cov(ɛ_i, ε_Xi) = 0 is true for any conditional/marginal model, as long as the conditional expectation is correctly specified in the light of the system. What then about omitted variables bias, which we know is due to Cov(ɛ_i, X_i) ≠ 0? Where do those omitted variables come from?


More information

For more information about how to cite these materials visit

For more information about how to cite these materials visit Author(s): Kerby Shedden, Ph.D., 2010 License: Unless otherwise noted, this material is made available under the terms of the Creative Commons Attribution Share Alike 3.0 License: http://creativecommons.org/licenses/by-sa/3.0/

More information

Simple and Multiple Linear Regression

Simple and Multiple Linear Regression Sta. 113 Chapter 12 and 13 of Devore March 12, 2010 Table of contents 1 Simple Linear Regression 2 Model Simple Linear Regression A simple linear regression model is given by Y = β 0 + β 1 x + ɛ where

More information

Restricted Maximum Likelihood in Linear Regression and Linear Mixed-Effects Model

Restricted Maximum Likelihood in Linear Regression and Linear Mixed-Effects Model Restricted Maximum Likelihood in Linear Regression and Linear Mixed-Effects Model Xiuming Zhang zhangxiuming@u.nus.edu A*STAR-NUS Clinical Imaging Research Center October, 015 Summary This report derives

More information

Econometrics. Week 8. Fall Institute of Economic Studies Faculty of Social Sciences Charles University in Prague

Econometrics. Week 8. Fall Institute of Economic Studies Faculty of Social Sciences Charles University in Prague Econometrics Week 8 Institute of Economic Studies Faculty of Social Sciences Charles University in Prague Fall 2012 1 / 25 Recommended Reading For the today Instrumental Variables Estimation and Two Stage

More information

ECON 3150/4150 spring term 2014: Exercise set for the first seminar and DIY exercises for the first few weeks of the course

ECON 3150/4150 spring term 2014: Exercise set for the first seminar and DIY exercises for the first few weeks of the course ECON 3150/4150 spring term 2014: Exercise set for the first seminar and DIY exercises for the first few weeks of the course Ragnar Nymoen 13 January 2014. Exercise set to seminar 1 (week 6, 3-7 Feb) This

More information

Review of Statistics

Review of Statistics Review of Statistics Topics Descriptive Statistics Mean, Variance Probability Union event, joint event Random Variables Discrete and Continuous Distributions, Moments Two Random Variables Covariance and

More information

Lecture 7: Dynamic panel models 2

Lecture 7: Dynamic panel models 2 Lecture 7: Dynamic panel models 2 Ragnar Nymoen Department of Economics, UiO 25 February 2010 Main issues and references The Arellano and Bond method for GMM estimation of dynamic panel data models A stepwise

More information

Linear models. Linear models are computationally convenient and remain widely used in. applied econometric research

Linear models. Linear models are computationally convenient and remain widely used in. applied econometric research Linear models Linear models are computationally convenient and remain widely used in applied econometric research Our main focus in these lectures will be on single equation linear models of the form y

More information

ECON 4160, Spring term Lecture 12

ECON 4160, Spring term Lecture 12 ECON 4160, Spring term 2013. Lecture 12 Non-stationarity and co-integration 2/2 Ragnar Nymoen Department of Economics 13 Nov 2013 1 / 53 Introduction I So far we have considered: Stationary VAR, with deterministic

More information

Probability and Statistics Notes

Probability and Statistics Notes Probability and Statistics Notes Chapter Seven Jesse Crawford Department of Mathematics Tarleton State University Spring 2011 (Tarleton State University) Chapter Seven Notes Spring 2011 1 / 42 Outline

More information

Topic 12 Overview of Estimation

Topic 12 Overview of Estimation Topic 12 Overview of Estimation Classical Statistics 1 / 9 Outline Introduction Parameter Estimation Classical Statistics Densities and Likelihoods 2 / 9 Introduction In the simplest possible terms, the

More information

Linear Regression. Junhui Qian. October 27, 2014

Linear Regression. Junhui Qian. October 27, 2014 Linear Regression Junhui Qian October 27, 2014 Outline The Model Estimation Ordinary Least Square Method of Moments Maximum Likelihood Estimation Properties of OLS Estimator Unbiasedness Consistency Efficiency

More information

Lecture 5: Omitted Variables, Dummy Variables and Multicollinearity

Lecture 5: Omitted Variables, Dummy Variables and Multicollinearity Lecture 5: Omitted Variables, Dummy Variables and Multicollinearity R.G. Pierse 1 Omitted Variables Suppose that the true model is Y i β 1 + β X i + β 3 X 3i + u i, i 1,, n (1.1) where β 3 0 but that the

More information

(a) (3 points) Construct a 95% confidence interval for β 2 in Equation 1.

(a) (3 points) Construct a 95% confidence interval for β 2 in Equation 1. Problem 1 (21 points) An economist runs the regression y i = β 0 + x 1i β 1 + x 2i β 2 + x 3i β 3 + ε i (1) The results are summarized in the following table: Equation 1. Variable Coefficient Std. Error

More information

Linear Models in Econometrics

Linear Models in Econometrics Linear Models in Econometrics Nicky Grant At the most fundamental level econometrics is the development of statistical techniques suited primarily to answering economic questions and testing economic theories.

More information

Max. Likelihood Estimation. Outline. Econometrics II. Ricardo Mora. Notes. Notes

Max. Likelihood Estimation. Outline. Econometrics II. Ricardo Mora. Notes. Notes Maximum Likelihood Estimation Econometrics II Department of Economics Universidad Carlos III de Madrid Máster Universitario en Desarrollo y Crecimiento Económico Outline 1 3 4 General Approaches to Parameter

More information

E 4101/5101 Lecture 9: Non-stationarity

E 4101/5101 Lecture 9: Non-stationarity E 4101/5101 Lecture 9: Non-stationarity Ragnar Nymoen 30 March 2011 Introduction I Main references: Hamilton Ch 15,16 and 17. Davidson and MacKinnon Ch 14.3 and 14.4 Also read Ch 2.4 and Ch 2.5 in Davidson

More information

Hypothesis testing Goodness of fit Multicollinearity Prediction. Applied Statistics. Lecturer: Serena Arima

Hypothesis testing Goodness of fit Multicollinearity Prediction. Applied Statistics. Lecturer: Serena Arima Applied Statistics Lecturer: Serena Arima Hypothesis testing for the linear model Under the Gauss-Markov assumptions and the normality of the error terms, we saw that β N(β, σ 2 (X X ) 1 ) and hence s

More information

Introduction to Simple Linear Regression

Introduction to Simple Linear Regression Introduction to Simple Linear Regression Yang Feng http://www.stat.columbia.edu/~yangfeng Yang Feng (Columbia University) Introduction to Simple Linear Regression 1 / 68 About me Faculty in the Department

More information

Introduction to Maximum Likelihood Estimation

Introduction to Maximum Likelihood Estimation Introduction to Maximum Likelihood Estimation Eric Zivot July 26, 2012 The Likelihood Function Let 1 be an iid sample with pdf ( ; ) where is a ( 1) vector of parameters that characterize ( ; ) Example:

More information

Lecture 3: Multiple Regression

Lecture 3: Multiple Regression Lecture 3: Multiple Regression R.G. Pierse 1 The General Linear Model Suppose that we have k explanatory variables Y i = β 1 + β X i + β 3 X 3i + + β k X ki + u i, i = 1,, n (1.1) or Y i = β j X ji + u

More information

1/24/2008. Review of Statistical Inference. C.1 A Sample of Data. C.2 An Econometric Model. C.4 Estimating the Population Variance and Other Moments

1/24/2008. Review of Statistical Inference. C.1 A Sample of Data. C.2 An Econometric Model. C.4 Estimating the Population Variance and Other Moments /4/008 Review of Statistical Inference Prepared by Vera Tabakova, East Carolina University C. A Sample of Data C. An Econometric Model C.3 Estimating the Mean of a Population C.4 Estimating the Population

More information

Regression Estimation - Least Squares and Maximum Likelihood. Dr. Frank Wood

Regression Estimation - Least Squares and Maximum Likelihood. Dr. Frank Wood Regression Estimation - Least Squares and Maximum Likelihood Dr. Frank Wood Least Squares Max(min)imization Function to minimize w.r.t. β 0, β 1 Q = n (Y i (β 0 + β 1 X i )) 2 i=1 Minimize this by maximizing

More information

STAT5044: Regression and Anova. Inyoung Kim

STAT5044: Regression and Anova. Inyoung Kim STAT5044: Regression and Anova Inyoung Kim 2 / 47 Outline 1 Regression 2 Simple Linear regression 3 Basic concepts in regression 4 How to estimate unknown parameters 5 Properties of Least Squares Estimators:

More information

Economics 241B Estimation with Instruments

Economics 241B Estimation with Instruments Economics 241B Estimation with Instruments Measurement Error Measurement error is de ned as the error resulting from the measurement of a variable. At some level, every variable is measured with error.

More information

ECO375 Tutorial 8 Instrumental Variables

ECO375 Tutorial 8 Instrumental Variables ECO375 Tutorial 8 Instrumental Variables Matt Tudball University of Toronto Mississauga November 16, 2017 Matt Tudball (University of Toronto) ECO375H5 November 16, 2017 1 / 22 Review: Endogeneity Instrumental

More information

Econometrics Master in Business and Quantitative Methods

Econometrics Master in Business and Quantitative Methods Econometrics Master in Business and Quantitative Methods Helena Veiga Universidad Carlos III de Madrid This chapter deals with truncation and censoring. Truncation occurs when the sample data are drawn

More information

The linear model is the most fundamental of all serious statistical models encompassing:

The linear model is the most fundamental of all serious statistical models encompassing: Linear Regression Models: A Bayesian perspective Ingredients of a linear model include an n 1 response vector y = (y 1,..., y n ) T and an n p design matrix (e.g. including regressors) X = [x 1,..., x

More information

WISE MA/PhD Programs Econometrics Instructor: Brett Graham Spring Semester, Academic Year Exam Version: A

WISE MA/PhD Programs Econometrics Instructor: Brett Graham Spring Semester, Academic Year Exam Version: A WISE MA/PhD Programs Econometrics Instructor: Brett Graham Spring Semester, 2016-17 Academic Year Exam Version: A INSTRUCTIONS TO STUDENTS 1 The time allowed for this examination paper is 2 hours. 2 This

More information

Maximum Likelihood, Logistic Regression, and Stochastic Gradient Training

Maximum Likelihood, Logistic Regression, and Stochastic Gradient Training Maximum Likelihood, Logistic Regression, and Stochastic Gradient Training Charles Elkan elkan@cs.ucsd.edu January 17, 2013 1 Principle of maximum likelihood Consider a family of probability distributions

More information

OSU Economics 444: Elementary Econometrics. Ch.10 Heteroskedasticity

OSU Economics 444: Elementary Econometrics. Ch.10 Heteroskedasticity OSU Economics 444: Elementary Econometrics Ch.0 Heteroskedasticity (Pure) heteroskedasticity is caused by the error term of a correctly speciþed equation: Var(² i )=σ 2 i, i =, 2,,n, i.e., the variance

More information

1. The OLS Estimator. 1.1 Population model and notation

1. The OLS Estimator. 1.1 Population model and notation 1. The OLS Estimator OLS stands for Ordinary Least Squares. There are 6 assumptions ordinarily made, and the method of fitting a line through data is by least-squares. OLS is a common estimation methodology

More information

Advanced Econometrics I

Advanced Econometrics I Lecture Notes Autumn 2010 Dr. Getinet Haile, University of Mannheim 1. Introduction Introduction & CLRM, Autumn Term 2010 1 What is econometrics? Econometrics = economic statistics economic theory mathematics

More information

Econometrics. Week 11. Fall Institute of Economic Studies Faculty of Social Sciences Charles University in Prague

Econometrics. Week 11. Fall Institute of Economic Studies Faculty of Social Sciences Charles University in Prague Econometrics Week 11 Institute of Economic Studies Faculty of Social Sciences Charles University in Prague Fall 2012 1 / 30 Recommended Reading For the today Advanced Time Series Topics Selected topics

More information

[y i α βx i ] 2 (2) Q = i=1

[y i α βx i ] 2 (2) Q = i=1 Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation

More information

MFin Econometrics I Session 4: t-distribution, Simple Linear Regression, OLS assumptions and properties of OLS estimators

MFin Econometrics I Session 4: t-distribution, Simple Linear Regression, OLS assumptions and properties of OLS estimators MFin Econometrics I Session 4: t-distribution, Simple Linear Regression, OLS assumptions and properties of OLS estimators Thilo Klein University of Cambridge Judge Business School Session 4: Linear regression,

More information

Simple Linear Regression

Simple Linear Regression Simple Linear Regression In simple linear regression we are concerned about the relationship between two variables, X and Y. There are two components to such a relationship. 1. The strength of the relationship.

More information

The multiple regression model; Indicator variables as regressors

The multiple regression model; Indicator variables as regressors The multiple regression model; Indicator variables as regressors Ragnar Nymoen University of Oslo 28 February 2013 1 / 21 This lecture (#12): Based on the econometric model specification from Lecture 9

More information

Applied Econometrics (QEM)

Applied Econometrics (QEM) Applied Econometrics (QEM) based on Prinicples of Econometrics Jakub Mućk Department of Quantitative Economics Jakub Mućk Applied Econometrics (QEM) Meeting #3 1 / 42 Outline 1 2 3 t-test P-value Linear

More information

ECON 4160: Econometrics-Modelling and Systems Estimation Lecture 9: Multiple equation models II

ECON 4160: Econometrics-Modelling and Systems Estimation Lecture 9: Multiple equation models II ECON 4160: Econometrics-Modelling and Systems Estimation Lecture 9: Multiple equation models II Ragnar Nymoen Department of Economics University of Oslo 9 October 2018 The reference to this lecture is:

More information

Correlation and Regression

Correlation and Regression Correlation and Regression October 25, 2017 STAT 151 Class 9 Slide 1 Outline of Topics 1 Associations 2 Scatter plot 3 Correlation 4 Regression 5 Testing and estimation 6 Goodness-of-fit STAT 151 Class

More information

The Simple Linear Regression Model

The Simple Linear Regression Model The Simple Linear Regression Model Lesson 3 Ryan Safner 1 1 Department of Economics Hood College ECON 480 - Econometrics Fall 2017 Ryan Safner (Hood College) ECON 480 - Lesson 3 Fall 2017 1 / 77 Bivariate

More information

Econometrics. Week 4. Fall Institute of Economic Studies Faculty of Social Sciences Charles University in Prague

Econometrics. Week 4. Fall Institute of Economic Studies Faculty of Social Sciences Charles University in Prague Econometrics Week 4 Institute of Economic Studies Faculty of Social Sciences Charles University in Prague Fall 2012 1 / 23 Recommended Reading For the today Serial correlation and heteroskedasticity in

More information

Advanced Quantitative Methods: maximum likelihood

Advanced Quantitative Methods: maximum likelihood Advanced Quantitative Methods: Maximum Likelihood University College Dublin 4 March 2014 1 2 3 4 5 6 Outline 1 2 3 4 5 6 of straight lines y = 1 2 x + 2 dy dx = 1 2 of curves y = x 2 4x + 5 of curves y

More information

Steps in Regression Analysis

Steps in Regression Analysis MGMG 522 : Session #2 Learning to Use Regression Analysis & The Classical Model (Ch. 3 & 4) 2-1 Steps in Regression Analysis 1. Review the literature and develop the theoretical model 2. Specify the model:

More information

Econometrics I Lecture 3: The Simple Linear Regression Model

Econometrics I Lecture 3: The Simple Linear Regression Model Econometrics I Lecture 3: The Simple Linear Regression Model Mohammad Vesal Graduate School of Management and Economics Sharif University of Technology 44716 Fall 1397 1 / 32 Outline Introduction Estimating

More information

WISE MA/PhD Programs Econometrics Instructor: Brett Graham Spring Semester, Academic Year Exam Version: A

WISE MA/PhD Programs Econometrics Instructor: Brett Graham Spring Semester, Academic Year Exam Version: A WISE MA/PhD Programs Econometrics Instructor: Brett Graham Spring Semester, 2016-17 Academic Year Exam Version: A INSTRUCTIONS TO STUDENTS 1 The time allowed for this examination paper is 2 hours. 2 This

More information

Chapter 3: Maximum Likelihood Theory

Chapter 3: Maximum Likelihood Theory Chapter 3: Maximum Likelihood Theory Florian Pelgrin HEC September-December, 2010 Florian Pelgrin (HEC) Maximum Likelihood Theory September-December, 2010 1 / 40 1 Introduction Example 2 Maximum likelihood

More information

Panel Data. March 2, () Applied Economoetrics: Topic 6 March 2, / 43

Panel Data. March 2, () Applied Economoetrics: Topic 6 March 2, / 43 Panel Data March 2, 212 () Applied Economoetrics: Topic March 2, 212 1 / 43 Overview Many economic applications involve panel data. Panel data has both cross-sectional and time series aspects. Regression

More information

1 Motivation for Instrumental Variable (IV) Regression

1 Motivation for Instrumental Variable (IV) Regression ECON 370: IV & 2SLS 1 Instrumental Variables Estimation and Two Stage Least Squares Econometric Methods, ECON 370 Let s get back to the thiking in terms of cross sectional (or pooled cross sectional) data

More information

Econometrics A. Simple linear model (2) Keio University, Faculty of Economics. Simon Clinet (Keio University) Econometrics A October 16, / 11

Econometrics A. Simple linear model (2) Keio University, Faculty of Economics. Simon Clinet (Keio University) Econometrics A October 16, / 11 Econometrics A Keio University, Faculty of Economics Simple linear model (2) Simon Clinet (Keio University) Econometrics A October 16, 2018 1 / 11 Estimation of the noise variance σ 2 In practice σ 2 too

More information

Motivation for multiple regression

Motivation for multiple regression Motivation for multiple regression 1. Simple regression puts all factors other than X in u, and treats them as unobserved. Effectively the simple regression does not account for other factors. 2. The slope

More information

Applied Statistics and Econometrics

Applied Statistics and Econometrics Applied Statistics and Econometrics Lecture 6 Saul Lach September 2017 Saul Lach () Applied Statistics and Econometrics September 2017 1 / 53 Outline of Lecture 6 1 Omitted variable bias (SW 6.1) 2 Multiple

More information

Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017

Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017 Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017 Put your solution to each problem on a separate sheet of paper. Problem 1. (5106) Let X 1, X 2,, X n be a sequence of i.i.d. observations from a

More information

Chapter 1. Linear Regression with One Predictor Variable

Chapter 1. Linear Regression with One Predictor Variable Chapter 1. Linear Regression with One Predictor Variable 1.1 Statistical Relation Between Two Variables To motivate statistical relationships, let us consider a mathematical relation between two mathematical

More information

Generalized Linear Models. Kurt Hornik

Generalized Linear Models. Kurt Hornik Generalized Linear Models Kurt Hornik Motivation Assuming normality, the linear model y = Xβ + e has y = β + ε, ε N(0, σ 2 ) such that y N(μ, σ 2 ), E(y ) = μ = β. Various generalizations, including general

More information

Regression and Statistical Inference

Regression and Statistical Inference Regression and Statistical Inference Walid Mnif wmnif@uwo.ca Department of Applied Mathematics The University of Western Ontario, London, Canada 1 Elements of Probability 2 Elements of Probability CDF&PDF

More information

Regression #8: Loose Ends

Regression #8: Loose Ends Regression #8: Loose Ends Econ 671 Purdue University Justin L. Tobias (Purdue) Regression #8 1 / 30 In this lecture we investigate a variety of topics that you are probably familiar with, but need to touch

More information

LECTURE 5. Introduction to Econometrics. Hypothesis testing

LECTURE 5. Introduction to Econometrics. Hypothesis testing LECTURE 5 Introduction to Econometrics Hypothesis testing October 18, 2016 1 / 26 ON TODAY S LECTURE We are going to discuss how hypotheses about coefficients can be tested in regression models We will

More information

Statistical Inference with Regression Analysis

Statistical Inference with Regression Analysis Introductory Applied Econometrics EEP/IAS 118 Spring 2015 Steven Buck Lecture #13 Statistical Inference with Regression Analysis Next we turn to calculating confidence intervals and hypothesis testing

More information

The Simple Regression Model. Part II. The Simple Regression Model

The Simple Regression Model. Part II. The Simple Regression Model Part II The Simple Regression Model As of Sep 22, 2015 Definition 1 The Simple Regression Model Definition Estimation of the model, OLS OLS Statistics Algebraic properties Goodness-of-Fit, the R-square

More information

Part 6: Multivariate Normal and Linear Models

Part 6: Multivariate Normal and Linear Models Part 6: Multivariate Normal and Linear Models 1 Multiple measurements Up until now all of our statistical models have been univariate models models for a single measurement on each member of a sample of

More information

Economics 582 Random Effects Estimation

Economics 582 Random Effects Estimation Economics 582 Random Effects Estimation Eric Zivot May 29, 2013 Random Effects Model Hence, the model can be re-written as = x 0 β + + [x ] = 0 (no endogeneity) [ x ] = = + x 0 β + + [x ] = 0 [ x ] = 0

More information