Lecture 4: Testing Stuff


1. Testing Hypotheses usually has three steps.
a. First, specify a Null Hypothesis, usually denoted H0, which describes a model of interest. Usually, we express H0 as a restricted version of a more general model.
i. For example, given X characteristics, Z ethnic origin and Y earnings, and a model Y = Xβ + ZΓ + ε, we might wish to test the null hypothesis that ethnic origin is irrelevant to earnings, as predicted by competitive models of discriminatory preferences.
ii. Here, H0: Γ = 0, which implies a model Y = Xβ + ε.
iii. The Null hypothesis always has an alternative, which may or may not be specified. Here, we could imagine at least 3 kinds of alternative hypotheses: H1: Γ < 0; H1: Γ > 0; or H1: Γ ≠ 0. Note that the last alternative is equivalent to H1: Γ < 0 or Γ > 0.
iv. Competitive models of discrimination also predict that ethnic origin, if correlated with preferences, should affect occupational choice, and so we model occupation W as W = Xδ + Zα + η. In this case, we might wish to test the hypothesis H0: α = 0.
b. Then, construct a test statistic, which (typically) is a random variable (because it is a function of other random variables) with two features:
i. it has a known distribution under the Null Hypothesis (usually normal, chi-square or t).
ii. this known distribution may depend on data, but not on parameters (this is called pivotality: a test statistic is pivotal if it satisfies this condition).
iii. It is typical to express the test statistic as a standardised variate.
(1) That is, express the asymptotic distribution above as the standardized coefficient estimate going to a standard normal: t1 = Γ̂/√V(Γ̂) ~ N(0,1). The subscript on t1 is just to keep track.
(2) With normally distributed disturbances, we have small-sample results. We know that for a sample of size N in a model with k parameters, t2 = Γ̂/√V̂(Γ̂) is distributed as a t with N-k degrees of freedom. As N-k gets large, this t distribution goes to a standard normal.
2. The above test statistic is sometimes distributed as a t distribution.
a. Think for a moment about the asymptotic case. We know that, given everything in the classical linear model except normality of the error terms, asymptotically

OLS coefficients are distributed normally: β̂ ~ N(β, σ²(X'X)⁻¹), where E[εε'] = σ²I_N.
i. Given normality of the disturbances, this holds in the small sample as well.
ii. Here, V(β̂) = σ²(X'X)⁻¹ is based on data, X, and a parameter, σ². We have to estimate the parameter, so we use the estimated variance: V̂(β̂) = s²(X'X)⁻¹, with s² = Σᵢ₌₁ᴺ eᵢ²/(N-k), which is comprised of an observable nonstochastic part (X'X)⁻¹ and a stochastic part, which is a sum of squared normals.
b. What is in the test statistic? It has a numerator and a denominator, two elements: β̂ and V̂(β̂).
i. Asymptotic. We consider the asymptotic case, when we do not know the exact distribution of the disturbance term, but only know that it is well-behaved enough to behave asymptotically normal.
(1) The numerator goes to a normal.
(2) The denominator goes to the square root of a sum of squared normals.
(3) We do not know how such a ratio behaves exactly. However, we can approximate it. The 2nd-order Taylor approximation of the ratio of the numerator divided by the square root of the estimated variance (which is random) behaves just like the 2nd-order Taylor approximation of the ratio of the numerator divided by the square root of the true variance (which is not random).
(4) So, asymptotically, the ratio behaves approximately like the numerator alone, which is to say as a normal.
ii. Small Sample. We consider the finite-sample case, when we do know the exact distribution of the disturbance term. We assume that it is normal.
(1) In this case, the numerator is distributed normally.
(2) The denominator is distributed as the square root of a sum of squared normals.
(3) So, the ratio is distributed as the ratio of a normal to the square root of a sum of squared normals.
(4) We call this distribution a t distribution, and it differs given how many normals are summed in the denominator. There are always N-k normals there, so we use a t with N-k df.
iii. Why only N-k squared normals rather than N?
(1) OLS can fit k data points exactly, so their sample errors can be zero. There are only N-k sample errors that need to be nonzero.
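The construction above can be sketched numerically. The following is a minimal illustration on simulated (hypothetical) data, not the lecture's example: estimate β by OLS, form s² from the N-k residual degrees of freedom, and standardise each coefficient by its estimated standard error.

```python
# Sketch of the t-statistic construction on simulated data:
# Y = X beta + eps with normal errors, so small-sample results apply.
import numpy as np

rng = np.random.default_rng(0)
N, k = 100, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, 2))])
beta_true = np.array([1.0, 0.5, 0.0])      # last coefficient is zero under the Null
Y = X @ beta_true + rng.normal(size=N)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y               # OLS estimates
e = Y - X @ beta_hat                       # residuals: only N-k are "free"
s2 = (e @ e) / (N - k)                     # estimate of sigma^2, with N-k df
se = np.sqrt(s2 * np.diag(XtX_inv))        # estimated standard errors
t_stats = beta_hat / se                    # compare each to a t with N-k df
```

Each entry of t_stats would then be compared against a t distribution with N-k = 97 degrees of freedom.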

c. Then, compare the value of the test statistic to its known distribution.
i. For example, if ethnicity is univariate (eg, an aboriginal dummy) and Γ̂ = 0.30, and its estimated standard error (√V̂(Γ̂)) is 0.17, then t = 0.30/0.17 = 1.76.
ii. If the sample size is 100 and 10 parameters are estimated, there are 90 degrees of freedom, so the appropriate distribution to compare this test statistic to is a t with 90 degrees of freedom.
(1) 8.1% of the distribution of this t is larger than 1.76 in absolute value. This is the p-value for the 2-sided test.
(2) The 2-sided test is appropriate if we are asking "what is the probability under the Null that I would see a deviation-from-zero for my test statistic that is as large as what I saw?"
(3) 4.1% of the distribution of this t lies above 1.76. This is the p-value for the 1-sided test.
(4) The 1-sided test is appropriate if we are asking "What is the probability under the Null that I would see a test statistic as large as the one I saw?"
(5) Whether you use a 1-sided or 2-sided test depends upon your priors. If your prior was that the deviation, if any, has to be positive, then it is a 1-sided test. This is because in this case, you are really only thinking about the positiveness of the parameter.
(6) If your prior was diffuse, in the sense that you didn't know which way the violation of the Null might go, you'd use a 2-sided test.
(7) 1-sided tests are for testing inequality restrictions (here, the 1-sided test has the alternative Γ > 0), and 2-sided tests are for equality restrictions (here, the 2-sided test has the alternative Γ ≠ 0).
(8) Is 8.1% a big number? Is 4.1% a big number? Usually, we try to have the significance level in our head a priori. Common significance levels are 10%, 5% and 1%. If you are using 5%, then the 2-sided test of equality against a nonzero alternative does not reject, but the 1-sided test of equality against a greater-than-zero alternative does reject.
d.
The significance level chosen (eg, 5%) determines the probability of a Type I error. A Type I error is when we reject the Null even though it is true. The probability of a Type I error is equal to the significance level (aka the size of the test).
e. A Type II error is when we fail to reject the Null even though it is false.
i. For example, if in the example above the true parameter was Γ = -0.5, then the sampling distribution of the test statistic would be centered around -3 (= -0.5/0.17) and not around 0. The 5% critical value for the 2-sided test given the Null is 1.96. The probability of failing to reject is the probability that the test statistic would lie in [-1.96, 1.96] when its sampling distribution is centered on -3. This probability is about 15%.
f. The power of a test is the probability of rejecting the Null when it is false, that is, one minus the probability of making a type II error. The power of a

test varies with the true value of the parameter(s).
3. Confidence Regions are statements about the distribution of a random variable.
a. An α% confidence region for a single random variable r with point estimate r̂ is the set of values centered on r̂ such that there is an α% chance in repeated samples that the point estimate would lie in the set.
i. For example, we can construct a 95% confidence region for the coefficient Γ above, whose point estimate is Γ̂ = 0.30 and whose standard error is 0.17. Since the standardised demeaned coefficient is distributed as a t with 90 degrees of freedom (that is, (Γ̂ - Γ)/se(Γ̂) ~ t with 90 df), and we know that the cdf of that t is 2.5% at a value of -1.99 and is 97.5% at 1.99, we can compute the endpoints of the confidence band.
(1) the lower cutoff is given by solving (Γ̂ - Γ)/se(Γ̂) = 1.99 for Γ, which yields Γ = 0.30 - 1.99(0.17) = -0.04.
(2) the upper cutoff is given by solving (Γ̂ - Γ)/se(Γ̂) = -1.99 for Γ, which yields Γ = 0.30 + 1.99(0.17) = 0.64.
(3) There is a 95% probability that the coefficient lies in [-0.04, 0.64].
b. One can construct confidence regions for several random variables jointly as well.
i. Kennedy has 3 good pictures for this problem.
ii. The joint confidence region for two random variables r1, r2 is the set of (r1, r2) values centered on (r̂1, r̂2) such that there is an α% chance in repeated samples that the point estimate would lie in the set.
iii. If the two random variables are independent, then the joint confidence region is an untilted oval whose two lengths are in ratio to the standard errors of the two random variables.
iv. If the two random variables covary, then the joint confidence region is a tilted oval.
(1) Imagine that the two random variables covary positively. Then, if one is a high value, we expect the other to be a high value. Thus, the confidence region must be tilted, so that the region where both random variables take on high values is shown as probable.
4. Computer programs often spit out univariate confidence regions and t-statistics for each variable.
From the preceding discussion, you should be able to tell that if you know the confidence region, then you know the t-statistic, and vice versa, for any coefficient. Thus, the computer is just giving you two descriptions of the same features of the distribution of each coefficient.
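This duality is easy to verify with the numbers used above (coefficient 0.30, standard error 0.17, 90 degrees of freedom). The sketch below hard-codes 1.99 as the two-sided 5% critical value for a t with 90 df rather than computing it.

```python
# The confidence region and the t-statistic carry the same information:
# the interval excludes zero exactly when |t| exceeds the critical value.
gamma_hat, se, crit = 0.30, 0.17, 1.99   # estimate, std. error, t(90) critical value

ci = (gamma_hat - crit * se, gamma_hat + crit * se)
t_stat = gamma_hat / se                  # test of H0: Gamma = 0

reject = abs(t_stat) > crit              # False here: 1.76 < 1.99
zero_in_ci = ci[0] <= 0.0 <= ci[1]       # True here: 0 lies in [-0.04, 0.64]
```

Either description tells you whether the hypothesis Γ = 0 is rejected at the 5% level.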

5. Tests have some common themes. For hypotheses on single variables, we often use test statistics that are distributed as t or normally.
a. Most often these tests are just the standardized value of the coefficient (standardized meaning "divided by standard error"). These standardized values are called t-tests if they are distributed t (as in OLS regression coefficients), or called z-tests if they are distributed normally (as in cases where we don't have small-sample results, eg, 2-stage least squares and FGLS).
b. A coefficient is called "significant" if its t- or z-test exceeds a critical value (often about 2; the 5% critical value for a 2-sided test of a standard normal variable is 1.96).
6. Often, we want to test joint hypotheses, where we want to know the probability that several hypotheses are true at once. Chapter 6 of Greene develops these ideas.
a. Eg: develop a test of the overidentifying restrictions Z'e = 0.
b. Consider the model Y = Xβ + ZΓ + ε where Z is a matrix with 2 columns, one aboriginal dummy and one visible minority dummy (with white being the left-out category). We might be interested in the joint hypothesis that both the coefficients on these variables are zero.
c. Let H0: Γ1 = 0 & Γ2 = 0 and H1: Γ1 ≠ 0 or Γ2 ≠ 0 represent the null and alternative hypotheses.
d. Consider first the asymptotic case, where we let N get really big, which implies that the estimated coefficients go to a normal distribution:
e. (Γ̂1, Γ̂2)' ~ N( (Γ1, Γ2)', V ), where V = [ V(Γ̂1), cov(Γ̂1, Γ̂2) ; cov(Γ̂1, Γ̂2), V(Γ̂2) ]
f. ===> under the Null, (Γ̂1, Γ̂2)' ~ N( (0, 0)', V )
g. ===> (Γ̂1, Γ̂2) V⁻¹ (Γ̂1, Γ̂2)' ~ χ² with 2 degrees of freedom
7. This is called a Wald Test. The Wald Test asks whether the discrepancy vector is big.
a. It measures the squared distance of the unrestricted estimates from the Null Hypothesis (aka the "discrepancy vector") in the metric of the covariance of the estimates. For random variables that are normal, this distance is a chi-square. (Chi-squares are sums of squared standard normals.)
i.
Don't overworry about this issue of "a chi-square is a sum of squared standard normals." Think about it this way.
(1) The Wald formulation of a joint hypothesis asks: how far away are our parameters from the Null Hypothesis? If you just added the distances up (without squaring them), they could cancel each other

out even if both were large. Thus we square them.
(2) Since we are squaring standard normals, it would be helpful to write a table of values for, and give a name to, the distribution of sums of squared normals. We name it χ².
b. The general form of a Wald Test for a linear hypothesis H0: Rβ - r = 0, where β̂ is a vector of normal random variables (eg, coefficient estimates), is
Wald = (Rβ̂ - r)' (R V(β̂) R')⁻¹ (Rβ̂ - r) ~ χ²_J
i. where J is the number of restrictions in (the rank of) R.
c. Wald Tests for nonlinear hypotheses are similar. For a nonlinear hypothesis H0: c(β) = 0, we need the value of the hypothesis given the estimated coefficients (the analogue to the discrepancy vector Rβ̂ - r) and the slope of this with respect to β (the analogue to R). If β̂ is a vector of normal random variables (eg, coefficient estimates), the Wald Test is
i. Wald = c(β̂)' { [∂c(β̂)/∂β] V(β̂) [∂c(β̂)/∂β]' }⁻¹ c(β̂) ~ χ²_J
ii. clearly, if c is linear, this reduces to the linear-hypothesis Wald Test above.
d. In small samples, these Wald test statistics do not go to the chi-square distribution. The reason is that V is an estimate, the variance of which can only be ignored asymptotically. To take care of the fact that V is an estimated covariance matrix (which itself involves a chi-square, through s²), we need access to a distribution that is a ratio of chi-squares.
i. The F distribution with degrees of freedom J and N-k (two numbers which will be clarified in a moment) is the distribution of the ratio of two chi-squares, one with J degrees of freedom and the other with N-k degrees of freedom, each divided by their degrees of freedom.
ii. Thus, the linear Wald test above is a chi-square, which is connected to the F distribution by
(1) (Wald/J) / (s²/σ²) ~ F with (J, N-k) degrees of freedom
(2) The numerator is a chi-square divided by its degrees of freedom.
(3) The denominator is the scale factor that adjusts for the fact that V is estimated. It goes to 1 as N goes to infinity, but in finite samples, it is not 1.
(4) The denominator is itself a random variable, and since s² is a sum of squared regression errors divided by N-k, it too is a chi-square (under normality of the error terms) divided by its degrees of freedom.
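A minimal numerical sketch of the linear Wald statistic (the numbers are hypothetical; the covariance matrix is taken to be diagonal, with standard errors of 0.17 on the two restricted coefficients to echo the earlier example):

```python
# Wald = (R b - r)' [R V R']^{-1} (R b - r), compared to a chi-square with J df.
import numpy as np

beta_hat = np.array([1.20, 0.30, 0.10])        # hypothetical estimates
V = np.diag([0.04, 0.17**2, 0.17**2])          # hypothetical covariance of beta_hat
R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])                # H0: last two coefficients are zero
r = np.zeros(2)                                # J = 2 restrictions

d = R @ beta_hat - r                           # discrepancy vector
W = d @ np.linalg.inv(R @ V @ R.T) @ d         # Wald statistic, df = 2
```

With a diagonal V this collapses to the sum of the two squared t-statistics, which is a useful sanity check.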

8. The discrepancy vector
a. for a hypothesis H0: c(β) = 0, the discrepancy vector is the value of this function at the estimates:
b. d = c(β̂)
c. If we are thinking of a linear hypothesis, where we can write c as c(β) = Rβ - r = 0, the discrepancy vector is d = Rβ̂ - r.
d. We may think of the Wald test as asking whether the discrepancy vector is far from zero.
e. To do this, we need to know the sampling distribution of d. If β̂ is distributed asymptotically normally, then d = c(β̂) ~ N( 0, [∂c(β̂)/∂β] V(β̂) [∂c(β̂)/∂β]' ).
i. Its expectation under the Null is zero, and its variance is given by the quadratic form of the Jacobian of c and the variance of β̂.
f. The discrepancy vector may have just one element, and in this case, we could use a univariate test and ask how far out we are in its sampling distribution. If β̂ is asymptotically normally distributed, then the scalar d is asymptotically normal with variance given by the scalar V(d) = [∂c(β̂)/∂β] V(β̂) [∂c(β̂)/∂β]'.
i. So, we could construct the z test: z = d/√V(d) ~ N(0,1). If z is bigger in absolute value than about 2, we reject the hypothesis.
g. If the discrepancy vector has many elements, then we need to find a way to aggregate the distance of each element from zero without allowing positives and negatives to cancel each other out. We square them.
i. The idea of the z test is to convert the normally distributed discrepancy to a standard normally distributed test statistic by standardising by the standard error (square root of variance).
ii. The idea with a many-element discrepancy vector is to convert the jointly normally distributed discrepancy vector to a vector of independent standard normals by dividing by its root-variance matrix:
iii. t_D = V(d)^(-1/2) d ~ N(0, I_J), where J is the length of d.
iv. To stop positives from cancelling negatives, we square and add up: W = t_D' t_D = d' V(d)⁻¹ d ~ χ²_J.
v. This is the Wald test right back at ya. (Plug in the formulae and you'll see.)
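The "standardise, then square and add" construction in g.ii-iv can be checked directly: whiten d with a Cholesky root of V(d), and the sum of squared standardised elements equals the quadratic form d'V(d)⁻¹d. The numbers below are hypothetical.

```python
# Whitening a discrepancy vector and recovering the Wald quadratic form.
import numpy as np

d = np.array([0.30, 0.10])                     # hypothetical discrepancy vector
V_d = np.array([[0.04, 0.01],
                [0.01, 0.03]])                 # hypothetical covariance of d

L = np.linalg.cholesky(V_d)                    # V_d = L L'
t_D = np.linalg.solve(L, d)                    # L^{-1} d: standard normal under H0
W = t_D @ t_D                                  # sum of squared standardised elements

W_direct = d @ np.linalg.inv(V_d) @ d          # the Wald quadratic form d' V_d^{-1} d
```

The two computations agree, which is exactly the "plug in the formulae" claim in item v.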

9. Estimating subject to Restrictions
a. The above t, Wald, and F tests have you estimate an unrestricted model and test restrictions, which is known as "testing down," because you are testing down from an unrestricted model to see if restrictions hold.
b. One might wish to know what the model looked like under the restrictions. Restricted estimation is often pretty easy. Consider a linear model where the restriction you wish to impose is that some element of the parameter vector is zero. In this case, we may frame the restriction as an exclusion restriction, because it implies that some variable may be excluded from the model.
i. Exclusion Restrictions
(1) The discussion above shows how you might test the exclusion restriction. Eg, look at the value of the t-stat for that coefficient.
(2) To estimate subject to the restriction, just exclude the variable from the regression.
c. Single equality restrictions. Suppose Y = Xβ + ZΓ + ε where Z is univariate.
i. Consider the restriction Γ = 1.
(1) We could rewrite the model as Y - Z = Xβ + ε, so regressing Y-Z on X yields estimates satisfying the restriction.
ii. Assume X and Z are univariate, eg, capital and labour in ratio to production output Y. In Cobb-Douglas production environments, the restriction β + Γ = 1 would be satisfied. In this case, Γ = 1 - β, so Y = Xβ + Z(1 - β) = (X - Z)β + Z.
(1) So, one could regress Y-Z on X-Z, yielding estimates satisfying the restriction.
iii. Consider the restriction β = Γ. If X and Z are two types of human capital, we might assume they have the same effect on earnings.
(1) Γ = β implies Y = Xβ + Zβ = (X + Z)β.
d. When there are multiple restrictions, sometimes they interact in funny ways, and it can be difficult or impossible to write out a regression formulation on transformed variables that does the trick. However, one can always write out a Lagrangean for the restricted regression problem:
min over β of Σᵢ₌₁ᴺ (Yᵢ - Xᵢβ)² - λ'R(β)
i. Where R is a set of restrictions.
ii.
If R is a set of linear restrictions, then the solution for the restricted coefficients is also linear.
(1) If either R or the regression function is nonlinear, then the solution is typically a nonlinear function of the data.
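The transformed-variable trick in item c can be sketched on simulated data (all numbers hypothetical). For the equal-coefficients restriction β = Γ, the restricted estimate is just the OLS slope of Y on the constructed regressor X + Z:

```python
# Restricted estimation by transforming variables: impose beta = Gamma
# by regressing Y on the single regressor X + Z.
import numpy as np

rng = np.random.default_rng(1)
N = 200
X = rng.normal(size=N)
Z = rng.normal(size=N)
Y = 0.7 * X + 0.7 * Z + rng.normal(scale=0.5, size=N)   # restriction holds in truth

W = X + Z                                      # transformed regressor
b_restricted = (W @ Y) / (W @ W)               # no-intercept OLS of Y on X + Z
```

Since the restriction actually holds in the simulated data, the restricted estimate lands near the common coefficient 0.7.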

10. Goodness of Fit
a. Since errors are random variables, sums of squared errors (SSR) are random variables. So, can't we use the fit of a regression (SSR) as a test statistic?
i. The model is Y = Xβ + ε, and now, instead of worrying about the sampling distribution of β̂, we try to figure out the sampling distribution of SSR = Σᵢ₌₁ᴺ eᵢ², where eᵢ = Yᵢ - Xᵢβ̂.
b. Goodness of fit could be compared by comparing the SSR when we impose the Null to the SSR when we don't impose the Null.
c. This is different from the spirit of a Wald Test, because to do a Wald Test, you don't have to estimate under the restriction that the Null is true. Rather, you estimate a general model and ask how large is the discrepancy from the Null.
d. So, you estimate under the Null, and call the sum of squared errors from this SSR(restricted), or SSR_R. Then, you estimate under the alternative, and call the sum of squared errors from this SSR(unrestricted), or SSR_U.
e. First, notice that under the Null, SSR(unrestricted) and SSR(restricted) should have the same distribution, because the restrictions are not binding under the Null.
f. This means that we might consider using SSR_R - SSR_U as part of a test statistic, because its expectation under the Null is asymptotically zero. We also know that it must be weakly positive, because the unrestricted model contains the restricted model as a possibility. How is this thing distributed?
g. SSR(unrestricted) is related to a chi-square with N-k degrees of freedom (because k perfect fits can be had from the k parameters). However, chi-squares are sums of squared standard normals, and e is not a standard normal, because its variance goes to σ², which we can estimate by s².
i. So, SSR_U/σ² ~ χ² with N-k degrees of freedom.
ii. And, asymptotically, SSR_U/s² ~ χ² with N-k degrees of freedom. Even though s² is a random variable, we can ignore its variation asymptotically.
h.
SSR(restricted) is related to a chi-square with N-k+J degrees of freedom (because k perfect fits can be had from the k parameters, but J of these parameters are determined by the restrictions).
i. So, SSR_R/σ² ~ χ² with N-k+J degrees of freedom.
ii. And, asymptotically, SSR_R/s² ~ χ² with N-k+J degrees of freedom.
iii. Recall also that s² = Σᵢ₌₁ᴺ eᵢ²/(N-k) = SSR_U/(N-k).

j. So, it must be that (SSR_R - SSR_U)/s² = (SSR_R - SSR_U)/(SSR_U/(N-k)) ~ χ²_J, because we are subtracting the sum of N-k squared standard normals from the sum of N-k+J squared standard normals.
i. This ignores the variation in the denominator, treating it like a constant asymptotically. We have sums of squared normals on top, and we ignore the sampling variation of the bottom.
k. If we wanted to turn this into a small-sample statistic, we would have to add the assumption Y = Xβ + ε, ε ~ N(0, σ²I). Given this, we can model the sampling distribution of the denominator:
l. SSR_U/(N-k) is, after scaling by σ², a chi-square divided by its degrees of freedom.
m. The numerator is a chi-square not divided by its degrees of freedom, so if we divide it by its degrees of freedom, we get a ratio of chi-squares divided by their degrees of freedom, also known as an F variate:
i. ((SSR_R - SSR_U)/J) / (SSR_U/(N-k)) = ((SSR_R - SSR_U)/J)/s² ~ F with (J, N-k) degrees of freedom.
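The SSR comparison above can be sketched end-to-end on simulated data (hypothetical numbers): fit the unrestricted and restricted models, then form the F statistic from the two sums of squared errors.

```python
# F = ((SSR_R - SSR_U)/J) / (SSR_U/(N-k)) on simulated data where the Null holds.
import numpy as np

rng = np.random.default_rng(2)
N, k, J = 100, 3, 2
X = np.column_stack([np.ones(N), rng.normal(size=(N, 2))])
Y = X @ np.array([1.0, 0.0, 0.0]) + rng.normal(size=N)  # last two coefs truly zero

def ssr(Y, X):
    b = np.linalg.lstsq(X, Y, rcond=None)[0]   # OLS fit
    e = Y - X @ b
    return e @ e                               # sum of squared residuals

SSR_U = ssr(Y, X)                              # unrestricted: all k regressors
SSR_R = ssr(Y, X[:, :1])                       # restricted: intercept only (J = 2)
F = ((SSR_R - SSR_U) / J) / (SSR_U / (N - k))  # compare to F with (J, N-k) df
```

Because the unrestricted model nests the restricted one, SSR_R can never be smaller than SSR_U, so F is weakly positive, exactly as item f above says.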


More information

LECTURE 5. Introduction to Econometrics. Hypothesis testing

LECTURE 5. Introduction to Econometrics. Hypothesis testing LECTURE 5 Introduction to Econometrics Hypothesis testing October 18, 2016 1 / 26 ON TODAY S LECTURE We are going to discuss how hypotheses about coefficients can be tested in regression models We will

More information

Chapter 12 - Lecture 2 Inferences about regression coefficient

Chapter 12 - Lecture 2 Inferences about regression coefficient Chapter 12 - Lecture 2 Inferences about regression coefficient April 19th, 2010 Facts about slope Test Statistic Confidence interval Hypothesis testing Test using ANOVA Table Facts about slope In previous

More information

Statistics and econometrics

Statistics and econometrics 1 / 36 Slides for the course Statistics and econometrics Part 10: Asymptotic hypothesis testing European University Institute Andrea Ichino September 8, 2014 2 / 36 Outline Why do we need large sample

More information

Lecture 9 SLR in Matrix Form

Lecture 9 SLR in Matrix Form Lecture 9 SLR in Matrix Form STAT 51 Spring 011 Background Reading KNNL: Chapter 5 9-1 Topic Overview Matrix Equations for SLR Don t focus so much on the matrix arithmetic as on the form of the equations.

More information

Module 03 Lecture 14 Inferential Statistics ANOVA and TOI

Module 03 Lecture 14 Inferential Statistics ANOVA and TOI Introduction of Data Analytics Prof. Nandan Sudarsanam and Prof. B Ravindran Department of Management Studies and Department of Computer Science and Engineering Indian Institute of Technology, Madras Module

More information

Answers to Problem Set #4

Answers to Problem Set #4 Answers to Problem Set #4 Problems. Suppose that, from a sample of 63 observations, the least squares estimates and the corresponding estimated variance covariance matrix are given by: bβ bβ 2 bβ 3 = 2

More information

STA 431s17 Assignment Eight 1

STA 431s17 Assignment Eight 1 STA 43s7 Assignment Eight The first three questions of this assignment are about how instrumental variables can help with measurement error and omitted variables at the same time; see Lecture slide set

More information

Correlation 1. December 4, HMS, 2017, v1.1

Correlation 1. December 4, HMS, 2017, v1.1 Correlation 1 December 4, 2017 1 HMS, 2017, v1.1 Chapter References Diez: Chapter 7 Navidi, Chapter 7 I don t expect you to learn the proofs what will follow. Chapter References 2 Correlation The sample

More information

Contest Quiz 3. Question Sheet. In this quiz we will review concepts of linear regression covered in lecture 2.

Contest Quiz 3. Question Sheet. In this quiz we will review concepts of linear regression covered in lecture 2. Updated: November 17, 2011 Lecturer: Thilo Klein Contact: tk375@cam.ac.uk Contest Quiz 3 Question Sheet In this quiz we will review concepts of linear regression covered in lecture 2. NOTE: Please round

More information

An overview of applied econometrics

An overview of applied econometrics An overview of applied econometrics Jo Thori Lind September 4, 2011 1 Introduction This note is intended as a brief overview of what is necessary to read and understand journal articles with empirical

More information

Hypothesis Testing. 1 Definitions of test statistics. CB: chapter 8; section 10.3

Hypothesis Testing. 1 Definitions of test statistics. CB: chapter 8; section 10.3 Hypothesis Testing CB: chapter 8; section 0.3 Hypothesis: statement about an unknown population parameter Examples: The average age of males in Sweden is 7. (statement about population mean) The lowest

More information

Finding Relationships Among Variables

Finding Relationships Among Variables Finding Relationships Among Variables BUS 230: Business and Economic Research and Communication 1 Goals Specific goals: Re-familiarize ourselves with basic statistics ideas: sampling distributions, hypothesis

More information

ECON 5350 Class Notes Functional Form and Structural Change

ECON 5350 Class Notes Functional Form and Structural Change ECON 5350 Class Notes Functional Form and Structural Change 1 Introduction Although OLS is considered a linear estimator, it does not mean that the relationship between Y and X needs to be linear. In this

More information

Instrumental Variables

Instrumental Variables Instrumental Variables Department of Economics University of Wisconsin-Madison September 27, 2016 Treatment Effects Throughout the course we will focus on the Treatment Effect Model For now take that to

More information

Review of Statistics

Review of Statistics Review of Statistics Topics Descriptive Statistics Mean, Variance Probability Union event, joint event Random Variables Discrete and Continuous Distributions, Moments Two Random Variables Covariance and

More information

Political Science 236 Hypothesis Testing: Review and Bootstrapping

Political Science 236 Hypothesis Testing: Review and Bootstrapping Political Science 236 Hypothesis Testing: Review and Bootstrapping Rocío Titiunik Fall 2007 1 Hypothesis Testing Definition 1.1 Hypothesis. A hypothesis is a statement about a population parameter The

More information

Lectures 5 & 6: Hypothesis Testing

Lectures 5 & 6: Hypothesis Testing Lectures 5 & 6: Hypothesis Testing in which you learn to apply the concept of statistical significance to OLS estimates, learn the concept of t values, how to use them in regression work and come across

More information

8. Hypothesis Testing

8. Hypothesis Testing FE661 - Statistical Methods for Financial Engineering 8. Hypothesis Testing Jitkomut Songsiri introduction Wald test likelihood-based tests significance test for linear regression 8-1 Introduction elements

More information

EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix)

EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) 1 EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) Taisuke Otsu London School of Economics Summer 2018 A.1. Summation operator (Wooldridge, App. A.1) 2 3 Summation operator For

More information

Review of Classical Least Squares. James L. Powell Department of Economics University of California, Berkeley

Review of Classical Least Squares. James L. Powell Department of Economics University of California, Berkeley Review of Classical Least Squares James L. Powell Department of Economics University of California, Berkeley The Classical Linear Model The object of least squares regression methods is to model and estimate

More information

Properties of the least squares estimates

Properties of the least squares estimates Properties of the least squares estimates 2019-01-18 Warmup Let a and b be scalar constants, and X be a scalar random variable. Fill in the blanks E ax + b) = Var ax + b) = Goal Recall that the least squares

More information

1. The OLS Estimator. 1.1 Population model and notation

1. The OLS Estimator. 1.1 Population model and notation 1. The OLS Estimator OLS stands for Ordinary Least Squares. There are 6 assumptions ordinarily made, and the method of fitting a line through data is by least-squares. OLS is a common estimation methodology

More information

Multiple Regression Analysis: Inference ECONOMETRICS (ECON 360) BEN VAN KAMMEN, PHD

Multiple Regression Analysis: Inference ECONOMETRICS (ECON 360) BEN VAN KAMMEN, PHD Multiple Regression Analysis: Inference ECONOMETRICS (ECON 360) BEN VAN KAMMEN, PHD Introduction When you perform statistical inference, you are primarily doing one of two things: Estimating the boundaries

More information

Chapter 4: Regression Models

Chapter 4: Regression Models Sales volume of company 1 Textbook: pp. 129-164 Chapter 4: Regression Models Money spent on advertising 2 Learning Objectives After completing this chapter, students will be able to: Identify variables,

More information

Econometrics Summary Algebraic and Statistical Preliminaries

Econometrics Summary Algebraic and Statistical Preliminaries Econometrics Summary Algebraic and Statistical Preliminaries Elasticity: The point elasticity of Y with respect to L is given by α = ( Y/ L)/(Y/L). The arc elasticity is given by ( Y/ L)/(Y/L), when L

More information

Statistics Introductory Correlation

Statistics Introductory Correlation Statistics Introductory Correlation Session 10 oscardavid.barrerarodriguez@sciencespo.fr April 9, 2018 Outline 1 Statistics are not used only to describe central tendency and variability for a single variable.

More information

ECO220Y Simple Regression: Testing the Slope

ECO220Y Simple Regression: Testing the Slope ECO220Y Simple Regression: Testing the Slope Readings: Chapter 18 (Sections 18.3-18.5) Winter 2012 Lecture 19 (Winter 2012) Simple Regression Lecture 19 1 / 32 Simple Regression Model y i = β 0 + β 1 x

More information

Hypothesis testing I. - In particular, we are talking about statistical hypotheses. [get everyone s finger length!] n =

Hypothesis testing I. - In particular, we are talking about statistical hypotheses. [get everyone s finger length!] n = Hypothesis testing I I. What is hypothesis testing? [Note we re temporarily bouncing around in the book a lot! Things will settle down again in a week or so] - Exactly what it says. We develop a hypothesis,

More information

14.30 Introduction to Statistical Methods in Economics Spring 2009

14.30 Introduction to Statistical Methods in Economics Spring 2009 MIT OpenCourseWare http://ocw.mit.edu 4.0 Introduction to Statistical Methods in Economics Spring 009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

More information

Topic 7: Heteroskedasticity

Topic 7: Heteroskedasticity Topic 7: Heteroskedasticity Advanced Econometrics (I Dong Chen School of Economics, Peking University Introduction If the disturbance variance is not constant across observations, the regression is heteroskedastic

More information

1 Outline. 1. Motivation. 2. SUR model. 3. Simultaneous equations. 4. Estimation

1 Outline. 1. Motivation. 2. SUR model. 3. Simultaneous equations. 4. Estimation 1 Outline. 1. Motivation 2. SUR model 3. Simultaneous equations 4. Estimation 2 Motivation. In this chapter, we will study simultaneous systems of econometric equations. Systems of simultaneous equations

More information

An Introduction to Parameter Estimation

An Introduction to Parameter Estimation Introduction Introduction to Econometrics An Introduction to Parameter Estimation This document combines several important econometric foundations and corresponds to other documents such as the Introduction

More information

Chapter 4. Regression Models. Learning Objectives

Chapter 4. Regression Models. Learning Objectives Chapter 4 Regression Models To accompany Quantitative Analysis for Management, Eleventh Edition, by Render, Stair, and Hanna Power Point slides created by Brian Peterson Learning Objectives After completing

More information

Correlation and regression

Correlation and regression NST 1B Experimental Psychology Statistics practical 1 Correlation and regression Rudolf Cardinal & Mike Aitken 11 / 12 November 2003 Department of Experimental Psychology University of Cambridge Handouts:

More information

1. The Multivariate Classical Linear Regression Model

1. The Multivariate Classical Linear Regression Model Business School, Brunel University MSc. EC550/5509 Modelling Financial Decisions and Markets/Introduction to Quantitative Methods Prof. Menelaos Karanasos (Room SS69, Tel. 08956584) Lecture Notes 5. The

More information

1 The Multiple Regression Model: Freeing Up the Classical Assumptions

1 The Multiple Regression Model: Freeing Up the Classical Assumptions 1 The Multiple Regression Model: Freeing Up the Classical Assumptions Some or all of classical assumptions were crucial for many of the derivations of the previous chapters. Derivation of the OLS estimator

More information

HYPOTHESIS TESTING: FREQUENTIST APPROACH.

HYPOTHESIS TESTING: FREQUENTIST APPROACH. HYPOTHESIS TESTING: FREQUENTIST APPROACH. These notes summarize the lectures on (the frequentist approach to) hypothesis testing. You should be familiar with the standard hypothesis testing from previous

More information

Ordinary Least Squares Regression Explained: Vartanian

Ordinary Least Squares Regression Explained: Vartanian Ordinary Least Squares Regression Explained: Vartanian When to Use Ordinary Least Squares Regression Analysis A. Variable types. When you have an interval/ratio scale dependent variable.. When your independent

More information

CHAPTER 2: Assumptions and Properties of Ordinary Least Squares, and Inference in the Linear Regression Model

CHAPTER 2: Assumptions and Properties of Ordinary Least Squares, and Inference in the Linear Regression Model CHAPTER 2: Assumptions and Properties of Ordinary Least Squares, and Inference in the Linear Regression Model Prof. Alan Wan 1 / 57 Table of contents 1. Assumptions in the Linear Regression Model 2 / 57

More information

ECON Introductory Econometrics. Lecture 16: Instrumental variables

ECON Introductory Econometrics. Lecture 16: Instrumental variables ECON4150 - Introductory Econometrics Lecture 16: Instrumental variables Monique de Haan (moniqued@econ.uio.no) Stock and Watson Chapter 12 Lecture outline 2 OLS assumptions and when they are violated Instrumental

More information

Multilevel Models in Matrix Form. Lecture 7 July 27, 2011 Advanced Multivariate Statistical Methods ICPSR Summer Session #2

Multilevel Models in Matrix Form. Lecture 7 July 27, 2011 Advanced Multivariate Statistical Methods ICPSR Summer Session #2 Multilevel Models in Matrix Form Lecture 7 July 27, 2011 Advanced Multivariate Statistical Methods ICPSR Summer Session #2 Today s Lecture Linear models from a matrix perspective An example of how to do

More information

11. Further Issues in Using OLS with TS Data

11. Further Issues in Using OLS with TS Data 11. Further Issues in Using OLS with TS Data With TS, including lags of the dependent variable often allow us to fit much better the variation in y Exact distribution theory is rarely available in TS applications,

More information

Multiple Regression. Midterm results: AVG = 26.5 (88%) A = 27+ B = C =

Multiple Regression. Midterm results: AVG = 26.5 (88%) A = 27+ B = C = Economics 130 Lecture 6 Midterm Review Next Steps for the Class Multiple Regression Review & Issues Model Specification Issues Launching the Projects!!!!! Midterm results: AVG = 26.5 (88%) A = 27+ B =

More information

Freeing up the Classical Assumptions. () Introductory Econometrics: Topic 5 1 / 94

Freeing up the Classical Assumptions. () Introductory Econometrics: Topic 5 1 / 94 Freeing up the Classical Assumptions () Introductory Econometrics: Topic 5 1 / 94 The Multiple Regression Model: Freeing Up the Classical Assumptions Some or all of classical assumptions needed for derivations

More information

Econometrics of Panel Data

Econometrics of Panel Data Econometrics of Panel Data Jakub Mućk Meeting # 3 Jakub Mućk Econometrics of Panel Data Meeting # 3 1 / 21 Outline 1 Fixed or Random Hausman Test 2 Between Estimator 3 Coefficient of determination (R 2

More information

Estimating σ 2. We can do simple prediction of Y and estimation of the mean of Y at any value of X.

Estimating σ 2. We can do simple prediction of Y and estimation of the mean of Y at any value of X. Estimating σ 2 We can do simple prediction of Y and estimation of the mean of Y at any value of X. To perform inferences about our regression line, we must estimate σ 2, the variance of the error term.

More information

Lecture 6: Hypothesis Testing

Lecture 6: Hypothesis Testing Lecture 6: Hypothesis Testing Mauricio Sarrias Universidad Católica del Norte November 6, 2017 1 Moran s I Statistic Mandatory Reading Moran s I based on Cliff and Ord (1972) Kelijan and Prucha (2001)

More information

Quantitative Analysis and Empirical Methods

Quantitative Analysis and Empirical Methods Hypothesis testing Sciences Po, Paris, CEE / LIEPP Introduction Hypotheses Procedure of hypothesis testing Two-tailed and one-tailed tests Statistical tests with categorical variables A hypothesis A testable

More information

Statistical Inference with Regression Analysis

Statistical Inference with Regression Analysis Introductory Applied Econometrics EEP/IAS 118 Spring 2015 Steven Buck Lecture #13 Statistical Inference with Regression Analysis Next we turn to calculating confidence intervals and hypothesis testing

More information

Business Statistics. Lecture 10: Course Review

Business Statistics. Lecture 10: Course Review Business Statistics Lecture 10: Course Review 1 Descriptive Statistics for Continuous Data Numerical Summaries Location: mean, median Spread or variability: variance, standard deviation, range, percentiles,

More information

Quantitative Understanding in Biology Module II: Model Parameter Estimation Lecture I: Linear Correlation and Regression

Quantitative Understanding in Biology Module II: Model Parameter Estimation Lecture I: Linear Correlation and Regression Quantitative Understanding in Biology Module II: Model Parameter Estimation Lecture I: Linear Correlation and Regression Correlation Linear correlation and linear regression are often confused, mostly

More information

Stat 135, Fall 2006 A. Adhikari HOMEWORK 10 SOLUTIONS

Stat 135, Fall 2006 A. Adhikari HOMEWORK 10 SOLUTIONS Stat 135, Fall 2006 A. Adhikari HOMEWORK 10 SOLUTIONS 1a) The model is cw i = β 0 + β 1 el i + ɛ i, where cw i is the weight of the ith chick, el i the length of the egg from which it hatched, and ɛ i

More information

Non-Spherical Errors

Non-Spherical Errors Non-Spherical Errors Krishna Pendakur February 15, 2016 1 Efficient OLS 1. Consider the model Y = Xβ + ε E [X ε = 0 K E [εε = Ω = σ 2 I N. 2. Consider the estimated OLS parameter vector ˆβ OLS = (X X)

More information

ACCUPLACER MATH 0311 OR MATH 0120

ACCUPLACER MATH 0311 OR MATH 0120 The University of Teas at El Paso Tutoring and Learning Center ACCUPLACER MATH 0 OR MATH 00 http://www.academics.utep.edu/tlc MATH 0 OR MATH 00 Page Factoring Factoring Eercises 8 Factoring Answer to Eercises

More information

Contingency Tables. Safety equipment in use Fatal Non-fatal Total. None 1, , ,128 Seat belt , ,878

Contingency Tables. Safety equipment in use Fatal Non-fatal Total. None 1, , ,128 Seat belt , ,878 Contingency Tables I. Definition & Examples. A) Contingency tables are tables where we are looking at two (or more - but we won t cover three or more way tables, it s way too complicated) factors, each

More information

Review of Statistics 101

Review of Statistics 101 Review of Statistics 101 We review some important themes from the course 1. Introduction Statistics- Set of methods for collecting/analyzing data (the art and science of learning from data). Provides methods

More information

Topic 10: Panel Data Analysis

Topic 10: Panel Data Analysis Topic 10: Panel Data Analysis Advanced Econometrics (I) Dong Chen School of Economics, Peking University 1 Introduction Panel data combine the features of cross section data time series. Usually a panel

More information

Econometrics I KS. Module 2: Multivariate Linear Regression. Alexander Ahammer. This version: April 16, 2018

Econometrics I KS. Module 2: Multivariate Linear Regression. Alexander Ahammer. This version: April 16, 2018 Econometrics I KS Module 2: Multivariate Linear Regression Alexander Ahammer Department of Economics Johannes Kepler University of Linz This version: April 16, 2018 Alexander Ahammer (JKU) Module 2: Multivariate

More information

STA442/2101: Assignment 5

STA442/2101: Assignment 5 STA442/2101: Assignment 5 Craig Burkett Quiz on: Oct 23 rd, 2015 The questions are practice for the quiz next week, and are not to be handed in. I would like you to bring in all of the code you used to

More information

Regression, part II. I. What does it all mean? A) Notice that so far all we ve done is math.

Regression, part II. I. What does it all mean? A) Notice that so far all we ve done is math. Regression, part II I. What does it all mean? A) Notice that so far all we ve done is math. 1) One can calculate the Least Squares Regression Line for anything, regardless of any assumptions. 2) But, if

More information