Bias Variance Trade-off


1 Bias Variance Trade-off

The mean squared error of an estimator,

$MSE(\hat{\theta}) = E[(\hat{\theta} - \theta)^2]$

can be re-expressed as

$MSE(\hat{\theta}) = Var(\hat{\theta}) + (B(\hat{\theta}))^2$

where $B(\hat{\theta}) = E(\hat{\theta}) - \theta$ is the bias.

2 MSE = VAR + BIAS^2

Proof:

$MSE(\hat{\theta}) = E[(\hat{\theta} - \theta)^2]$
$= E[([\hat{\theta} - E(\hat{\theta})] + [E(\hat{\theta}) - \theta])^2]$
$= E[(\hat{\theta} - E(\hat{\theta}))^2] + 2 E[(E(\hat{\theta}) - \theta)(\hat{\theta} - E(\hat{\theta}))] + E[(E(\hat{\theta}) - \theta)^2]$
$= Var(\hat{\theta}) + 2 (E(\hat{\theta}) - \theta) E[\hat{\theta} - E(\hat{\theta})] + (B(\hat{\theta}))^2$
$= Var(\hat{\theta}) + 2 (E(\hat{\theta}) - \theta) \cdot 0 + (B(\hat{\theta}))^2$
$= Var(\hat{\theta}) + (B(\hat{\theta}))^2$

The cross term vanishes because $E(\hat{\theta}) - \theta$ is a constant and $E[\hat{\theta} - E(\hat{\theta})] = 0$.
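
The decomposition is easy to confirm by simulation. Below is a minimal sketch (not from the slides; the estimator, sample size, and seed are all chosen for illustration) that checks MSE = Var + Bias^2 for the biased maximum likelihood variance estimator of a normal sample:

```python
import numpy as np

# Monte Carlo check of MSE = Var + Bias^2. The estimator theta-hat is the
# biased (ddof=0) variance estimator of a normal sample; true sigma^2 = 4.
rng = np.random.default_rng(0)
n, sigma2, reps = 10, 4.0, 200_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
theta_hat = samples.var(axis=1)            # biased ML estimate per replicate

mse = np.mean((theta_hat - sigma2) ** 2)   # direct Monte Carlo MSE
var = theta_hat.var()                      # Var(theta-hat)
bias2 = (theta_hat.mean() - sigma2) ** 2   # squared bias

print(mse, var + bias2)                    # agree to Monte Carlo error
```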

3 $s^2$ estimator for $\sigma^2$ for a Single Population

Sum of squares:

$\sum_{i=1}^{n} (Y_i - \bar{Y})^2$

Sample variance estimator:

$s^2 = \frac{\sum_{i=1}^{n} (Y_i - \bar{Y})^2}{n-1}$

$s^2$ is an unbiased estimator of $\sigma^2$. This sum of squares has $n-1$ degrees of freedom associated with it; one degree of freedom is lost by using $\bar{Y}$ as an estimate of the unknown population mean $\mu$.

4 Estimating the Error Term Variance $\sigma^2$ for the Regression Model

In the regression model the variance of each observation $Y_i$ is $\sigma^2$ (the same as for the error term $\epsilon_i$). However, each $Y_i$ comes from a probability distribution with a different mean, depending on the level $X_i$, so the deviation of an observation $Y_i$ must be calculated around its own estimated mean.

5 $s^2$ estimator for $\sigma^2$

$s^2 = MSE = \frac{SSE}{n-2} = \frac{\sum (Y_i - \hat{Y}_i)^2}{n-2} = \frac{\sum e_i^2}{n-2}$

MSE is an unbiased estimator of $\sigma^2$:

$E(MSE) = \sigma^2$

The sum of squares SSE has $n-2$ degrees of freedom associated with it; two degrees of freedom are lost when estimating $\beta_0$ and $\beta_1$. Cochran's theorem (later in the course) tells us where degrees of freedom come from and how to calculate them.
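
As a concrete illustration of this estimator, here is a short sketch on synthetic data; the true parameter values and variable names are illustrative assumptions, not from the slides:

```python
import numpy as np

# Sketch: estimate sigma^2 by MSE = SSE/(n-2) from a least squares fit.
# True values (beta0=1, beta1=2, sigma=0.5) are illustrative assumptions.
rng = np.random.default_rng(1)
n = 50
X = rng.uniform(0, 10, n)
Y = 1.0 + 2.0 * X + rng.normal(0, 0.5, n)

b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b0 = Y.mean() - b1 * X.mean()
e = Y - (b0 + b1 * X)                      # residuals e_i = Y_i - Yhat_i

sse = np.sum(e ** 2)
mse = sse / (n - 2)                        # unbiased estimate of sigma^2
print(mse)                                 # should be near 0.5**2 = 0.25
```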

6 Normal Error Regression Model

$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$

$Y_i$ is the value of the response variable in the $i$th trial
$\beta_0$ and $\beta_1$ are parameters
$X_i$ is a known constant, the value of the predictor variable in the $i$th trial
$\epsilon_i \overset{iid}{\sim} N(0, \sigma^2)$ (note this is different: now we know the distribution)
$i = 1, \ldots, n$

7 Maximum Likelihood Estimator(s)

$\beta_0$: $b_0$, same as in the least squares case
$\beta_1$: $b_1$, same as in the least squares case
$\sigma^2$: $\hat{\sigma}^2 = \frac{\sum_i (Y_i - \hat{Y}_i)^2}{n}$

Note that the ML estimator of $\sigma^2$ is biased, since $s^2$ is unbiased and

$s^2 = MSE = \frac{n}{n-2} \hat{\sigma}^2$

8 Inference in the Normal Error Regression Model

$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$

$Y_i$ is the value of the response variable in the $i$th trial
$\beta_0$ and $\beta_1$ are parameters
$X_i$ is a known constant, the value of the predictor variable in the $i$th trial
$\epsilon_i \overset{iid}{\sim} N(0, \sigma^2)$
$i = 1, \ldots, n$

9 Inference concerning $\beta_1$

Tests concerning $\beta_1$ (the slope) are often of interest, particularly

$H_0: \beta_1 = 0$
$H_a: \beta_1 \neq 0$

The null hypothesis model $Y_i = \beta_0 + (0)X_i + \epsilon_i$ implies that there is no relationship between $Y$ and $X$. Note that under $H_0$ the means of all the $Y_i$'s are equal at all levels of $X_i$.

10 Sampling Dist. of $b_1$

The point estimator for $\beta_1$ is

$b_1 = \frac{\sum (X_i - \bar{X})(Y_i - \bar{Y})}{\sum (X_i - \bar{X})^2}$

The sampling distribution of $b_1$ is the distribution of values of $b_1$ that arises when the predictor variables $X_i$ are held fixed and the observed outputs are repeatedly sampled. Note that the sampling distribution we derive for $b_1$ will be highly dependent on our modeling assumptions.

11 Sampling Dist. of $b_1$ in the Normal Regression Model

For a normal error regression model the sampling distribution of $b_1$ is normal, with mean and variance given by

$E(b_1) = \beta_1$
$Var(b_1) = \frac{\sigma^2}{\sum (X_i - \bar{X})^2}$

To show this we need to go through a number of algebraic steps.

12 First Step

To show that $\sum (X_i - \bar{X})(Y_i - \bar{Y}) = \sum (X_i - \bar{X}) Y_i$, observe

$\sum (X_i - \bar{X})(Y_i - \bar{Y}) = \sum (X_i - \bar{X}) Y_i - \sum (X_i - \bar{X}) \bar{Y}$
$= \sum (X_i - \bar{X}) Y_i - \bar{Y} \sum (X_i - \bar{X})$
$= \sum (X_i - \bar{X}) Y_i - \bar{Y} \left( \sum X_i - n \bar{X} \right)$
$= \sum (X_i - \bar{X}) Y_i$

since $\bar{X} = \frac{\sum X_i}{n}$ makes the last term vanish.

13 $b_1$ as a Linear Combination of the $Y_i$'s

$b_1$ can be expressed as a linear combination of the $Y_i$'s (the weights $k_i$ can be negative, so the combination is linear rather than convex):

$b_1 = \frac{\sum (X_i - \bar{X})(Y_i - \bar{Y})}{\sum (X_i - \bar{X})^2} = \frac{\sum (X_i - \bar{X}) Y_i}{\sum (X_i - \bar{X})^2}$ (from the previous slide)
$= \sum k_i Y_i$

where

$k_i = \frac{X_i - \bar{X}}{\sum (X_i - \bar{X})^2}$

14 Properties of the $k_i$'s

It can be shown that

$\sum k_i = 0$
$\sum k_i X_i = 1$
$\sum k_i^2 = \frac{1}{\sum (X_i - \bar{X})^2}$

(possible homework). We will use these properties to prove various properties of the sampling distributions of $b_1$ and $b_0$.
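
These properties are straightforward to verify numerically; a minimal sketch with an arbitrary fixed design vector (values chosen for illustration):

```python
import numpy as np

# Numerical check of the three k_i properties for an arbitrary design.
X = np.array([1.0, 2.0, 4.0, 7.0, 11.0])   # any fixed X vector works
k = (X - X.mean()) / np.sum((X - X.mean()) ** 2)

print(np.sum(k))                                        # ~ 0
print(np.sum(k * X))                                    # ~ 1
print(np.sum(k ** 2), 1 / np.sum((X - X.mean()) ** 2))  # equal
```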

15 Normality of $b_1$'s Sampling Distribution

Useful fact: a linear combination of independent normal random variables is normally distributed.

More formally: when $Y_1, \ldots, Y_n$ are independent normal random variables, the linear combination $a_1 Y_1 + a_2 Y_2 + \cdots + a_n Y_n$ is normally distributed, with mean $\sum a_i E(Y_i)$ and variance $\sum a_i^2 Var(Y_i)$.

16 Normality of $b_1$'s Sampling Distribution

Since $b_1$ is a linear combination of the $Y_i$'s and each $Y_i$ is an independent normal random variable, $b_1$ is normally distributed as well:

$b_1 = \sum k_i Y_i, \quad k_i = \frac{X_i - \bar{X}}{\sum (X_i - \bar{X})^2}$

From the previous slide,

$E(b_1) = \sum k_i E(Y_i), \quad Var(b_1) = \sum k_i^2 Var(Y_i)$

17 $b_1$ is an Unbiased Estimator

This can be seen using two of the $k_i$ properties:

$E(b_1) = E\left(\sum k_i Y_i\right) = \sum k_i E(Y_i) = \sum k_i (\beta_0 + \beta_1 X_i)$
$= \beta_0 \sum k_i + \beta_1 \sum k_i X_i = \beta_0 (0) + \beta_1 (1) = \beta_1$

18 Variance of $b_1$

Since the $Y_i$ are independent random variables with variance $\sigma^2$ and the $k_i$'s are constants, we get

$Var(b_1) = Var\left(\sum k_i Y_i\right) = \sum k_i^2 Var(Y_i) = \sum k_i^2 \sigma^2$
$= \sigma^2 \sum k_i^2 = \frac{\sigma^2}{\sum (X_i - \bar{X})^2}$

However, in most cases $\sigma^2$ is unknown.

19 Estimated variance of $b_1$

When we don't know $\sigma^2$ we have to replace it with the MSE estimate. Let

$s^2 = MSE = \frac{SSE}{n-2}$

where $SSE = \sum e_i^2$ and $e_i = Y_i - \hat{Y}_i$. Plugging in, we get

$Var(b_1) = \frac{\sigma^2}{\sum (X_i - \bar{X})^2} \quad \Rightarrow \quad \hat{Var}(b_1) = \frac{s^2}{\sum (X_i - \bar{X})^2}$

20 Recap

We now have an expression for the sampling distribution of $b_1$ when $\sigma^2$ is known:

$b_1 \sim N\left(\beta_1, \frac{\sigma^2}{\sum (X_i - \bar{X})^2}\right)$ (1)

When $\sigma^2$ is unknown we have an unbiased point estimator of $\sigma^2$, giving

$\hat{Var}(b_1) = \frac{s^2}{\sum (X_i - \bar{X})^2}$

As $n \to \infty$ (i.e. as the number of observations grows large), $\hat{Var}(b_1) \to Var(b_1)$ and we can use Eqn. 1.

Questions: When is $n$ big enough? What if $n$ isn't big enough?
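
A short sketch of this recap on synthetic data (all parameter values are illustrative assumptions), computing $b_1$ and its estimated variance $s^2\{b_1\}$:

```python
import numpy as np

# Sketch: slope estimate and its estimated variance on synthetic data
# (true beta1 = 2, sigma = 0.5; all values illustrative).
rng = np.random.default_rng(2)
n = 50
X = rng.uniform(0, 10, n)
Y = 1.0 + 2.0 * X + rng.normal(0, 0.5, n)

Sxx = np.sum((X - X.mean()) ** 2)
b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / Sxx
b0 = Y.mean() - b1 * X.mean()
mse = np.sum((Y - b0 - b1 * X) ** 2) / (n - 2)

var_b1_hat = mse / Sxx              # s^2{b1}: MSE plugged in for sigma^2
print(b1, var_b1_hat, np.sqrt(var_b1_hat))
```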

21 Digression: the Gauss-Markov Theorem

In a regression model where $E(\epsilon_i) = 0$, $Var(\epsilon_i) = \sigma^2 < \infty$, and $\epsilon_i$ and $\epsilon_j$ are uncorrelated for all $i \neq j$, the least squares estimators $b_0$ and $b_1$ are unbiased and have minimum variance among all unbiased linear estimators.

Remember

$b_1 = \frac{\sum (X_i - \bar{X})(Y_i - \bar{Y})}{\sum (X_i - \bar{X})^2}, \quad b_0 = \bar{Y} - b_1 \bar{X}$

22 Proof

The theorem states that $b_1$ has minimum variance among all unbiased linear estimators of the form

$\hat{\beta}_1 = \sum c_i Y_i$

As this estimator must be unbiased we have

$E(\hat{\beta}_1) = \sum c_i E(Y_i) = \sum c_i (\beta_0 + \beta_1 X_i) = \beta_0 \sum c_i + \beta_1 \sum c_i X_i$

and unbiasedness requires that this equal $\beta_1$.

23 Proof cont.

Given the constraint

$\beta_0 \sum c_i + \beta_1 \sum c_i X_i = \beta_1$

clearly it must be the case that $\sum c_i = 0$ and $\sum c_i X_i = 1$.

The variance of this estimator is

$Var(\hat{\beta}_1) = \sum c_i^2 Var(Y_i) = \sigma^2 \sum c_i^2$

24 Proof cont.

Now define $c_i = k_i + d_i$, where the $k_i$ are the constants we already defined and the $d_i$ are arbitrary constants:

$k_i = \frac{X_i - \bar{X}}{\sum (X_i - \bar{X})^2}$

Let's look at the variance of the estimator:

$Var(\hat{\beta}_1) = \sum c_i^2 Var(Y_i) = \sigma^2 \sum (k_i + d_i)^2 = \sigma^2 \left( \sum k_i^2 + \sum d_i^2 + 2 \sum k_i d_i \right)$

Note we just demonstrated that $\sigma^2 \sum k_i^2 = Var(b_1)$.

25 Proof cont.

Now by showing that $\sum k_i d_i = 0$ we're almost done:

$\sum k_i d_i = \sum k_i (c_i - k_i) = \sum k_i c_i - \sum k_i^2$
$= \sum c_i \frac{X_i - \bar{X}}{\sum (X_i - \bar{X})^2} - \frac{1}{\sum (X_i - \bar{X})^2}$
$= \frac{\sum c_i X_i - \bar{X} \sum c_i}{\sum (X_i - \bar{X})^2} - \frac{1}{\sum (X_i - \bar{X})^2}$
$= \frac{1 - 0}{\sum (X_i - \bar{X})^2} - \frac{1}{\sum (X_i - \bar{X})^2} = 0$

using $\sum c_i = 0$ and $\sum c_i X_i = 1$.

26 Proof end

So we are left with

$Var(\hat{\beta}_1) = \sigma^2 \left( \sum k_i^2 + \sum d_i^2 \right) = Var(b_1) + \sigma^2 \sum d_i^2$

which is minimized when all the $d_i = 0$. This means that the least squares estimator $b_1$ has minimum variance among all unbiased linear estimators.

27 Sampling Distribution of $(b_1 - \beta_1)/S(b_1)$

$b_1$ is normally distributed, so $(b_1 - \beta_1)/\sqrt{Var(b_1)}$ is a standard normal variable. We don't know $Var(b_1)$, so it must be estimated from data; we have already denoted its estimate $\hat{V}(b_1)$. Using this estimate, it can be shown that

$\frac{b_1 - \beta_1}{\hat{S}(b_1)} \sim t(n-2), \quad \hat{S}(b_1) = \sqrt{\hat{V}(b_1)}$

28 Where does this come from?

For now we need to rely upon the following theorem: for the normal error regression model,

$\frac{SSE}{\sigma^2} = \frac{\sum (Y_i - \hat{Y}_i)^2}{\sigma^2} \sim \chi^2(n-2)$

and is independent of $b_0$ and $b_1$.

Intuitively this follows the standard result for the sum of squared normal random variables; here there are two linear constraints imposed by the regression parameter estimation that each reduce the number of degrees of freedom by one. We will revisit this subject soon.

29 Another useful fact: t-distributed random variables

Let $z$ and $\chi^2(\nu)$ be independent random variables (standard normal and $\chi^2$ respectively). The following random variable is a t-distributed random variable:

$t(\nu) = \frac{z}{\sqrt{\chi^2(\nu)/\nu}}$

This version of the t distribution has one parameter, the degrees of freedom $\nu$.
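
This construction can be checked by simulation; a minimal sketch (degrees of freedom and replicate count chosen for illustration) comparing quantiles of the simulated ratio with scipy's t distribution:

```python
import numpy as np
from scipy import stats

# Simulate t(nu) = Z / sqrt(chi2(nu)/nu) and compare sample quantiles
# with scipy's t distribution; nu and reps are illustrative.
rng = np.random.default_rng(3)
nu, reps = 5, 100_000

z = rng.standard_normal(reps)
chi2 = rng.chisquare(nu, reps)
t_samples = z / np.sqrt(chi2 / nu)

for q in (0.90, 0.95, 0.99):
    print(np.quantile(t_samples, q), stats.t.ppf(q, df=nu))
```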

30 Distribution of the studentized statistic

To derive the distribution of this statistic, first we do the following rewrite:

$\frac{b_1 - \beta_1}{\hat{S}(b_1)} = \frac{(b_1 - \beta_1)/S(b_1)}{\hat{S}(b_1)/S(b_1)}, \quad \frac{\hat{S}(b_1)}{S(b_1)} = \sqrt{\frac{\hat{V}(b_1)}{Var(b_1)}}$

31 Studentized statistic cont.

And note the following:

$\frac{\hat{V}(b_1)}{Var(b_1)} = \frac{MSE / \sum (X_i - \bar{X})^2}{\sigma^2 / \sum (X_i - \bar{X})^2} = \frac{MSE}{\sigma^2} = \frac{SSE}{\sigma^2 (n-2)}$

where we know (by the given theorem) that the distribution of the last term is $\chi^2$ and independent of $b_1$ and $b_0$:

$\frac{SSE}{\sigma^2 (n-2)} \sim \frac{\chi^2(n-2)}{n-2}$

32 Studentized statistic, final

By the given definition of the t distribution we have our result:

$\frac{b_1 - \beta_1}{\hat{S}(b_1)} \sim t(n-2)$

because, putting everything together, we can see that

$\frac{b_1 - \beta_1}{\hat{S}(b_1)} \sim \frac{z}{\sqrt{\chi^2(n-2)/(n-2)}}$

33 Confidence Intervals and Hypothesis Tests

Now that we know the sampling distribution of $b_1$ (t with $n-2$ degrees of freedom) we can construct confidence intervals and hypothesis tests easily.

Things to think about:
- What does the t-distribution look like?
- Why is the estimator distributed according to a t-distribution rather than a normal distribution?
- When performing tests, why does this matter?
- When is it safe to cheat and use a normal approximation?

34 Quick Review: Hypothesis Testing

Elements of a statistical test:
- Null hypothesis, $H_0$
- Alternative hypothesis, $H_a$
- Test statistic
- Rejection region

35 Quick Review: Hypothesis Testing - Errors

- A type I error is made if $H_0$ is rejected when $H_0$ is true. The probability of a type I error is denoted by $\alpha$. The value of $\alpha$ is called the level of the test.
- A type II error is made if $H_0$ is accepted when $H_a$ is true. The probability of a type II error is denoted by $\beta$.

36 P-value

The p-value, or attained significance level, is the smallest level of significance $\alpha$ for which the observed data indicate that the null hypothesis should be rejected.

37 Hypothesis Testing Example: Courtroom Trial

A statistical test procedure is comparable to a trial: a defendant is considered innocent as long as his guilt is not proven. The prosecutor tries to prove the guilt of the defendant, and only when there is enough incriminating evidence is the defendant convicted. At the start of the procedure there are two hypotheses:

$H_0$ (null hypothesis): the defendant is innocent
$H_a$ (alternative hypothesis): the defendant is guilty

             | $H_0$ is true  | $H_a$ is true
Accept $H_0$ | Right decision | Type II Error
Reject $H_0$ | Type I Error   | Right decision

38 Hypothesis Testing Example: Courtroom Trial, cont.

The hypothesis of innocence is rejected only when an error is very unlikely, because one doesn't want to condemn an innocent defendant. Such an error is called an error of the first kind (i.e. the condemnation of an innocent person), and its occurrence is controlled to be rare. As a consequence of this asymmetric behaviour, the error of the second kind (setting free a guilty person) is often rather large.

39 Null Hypothesis

If the null hypothesis is that $\beta_1 = 0$, then $b_1$ should fall in a range around zero. The further it is from 0, the less likely the null hypothesis is to hold.

40 Alternative Hypothesis: Least Squares Fit

If we find that our estimated value of $b_1$ deviates from 0, then we have to determine whether or not that deviation would be surprising given the model and the sampling distribution of the estimator. If it is sufficiently different (where "sufficient" is defined by a confidence level) then we reject the null hypothesis.

41 Testing This Hypothesis

- We only have a finite sample.
- Different finite sets of samples (from the same population / source) will (almost always) produce different point estimates $b_0, b_1$ of $\beta_0$ and $\beta_1$, given the same estimation procedure.
- Key point: $b_0$ and $b_1$ are random variables whose sampling distributions can be statistically characterized.
- Hypothesis tests about $\beta_0$ and $\beta_1$ can be constructed using these distributions.
- The same techniques for deriving the sampling distribution of $b = [b_0, b_1]$ are used in multiple regression.

42 Confidence Interval Example

A machine fills cups with margarine and is supposed to be adjusted so that the content of the cups is $\mu = 250$g of margarine. The observed random variable is $X \sim Normal(\mu = 250, \sigma = 2.5)$.

43 Confidence Interval Example, cont.

Take $X_1, \ldots, X_{25}$, a random sample from $X$. The natural estimator is the sample mean:

$\hat{\mu} = \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$

Suppose the sample shows actual weights $X_1, \ldots, X_{25}$ with mean

$\bar{X} = \frac{1}{25} \sum_{i=1}^{25} X_i = 250.2 \text{ grams}$

44 Say we want to get a confidence interval for $\mu$. By standardizing, we get a random variable

$Z = \frac{\bar{X} - \mu}{\sigma / \sqrt{n}} = \frac{\bar{X} - \mu}{0.5}$

with

$P(-z \leq Z \leq z) = 1 - \alpha = 0.95$

The number $z$ follows from the cumulative distribution function:

$\Phi(z) = P(Z \leq z) = 1 - \frac{\alpha}{2} = 0.975$ (2)
$z = \Phi^{-1}(\Phi(z)) = \Phi^{-1}(0.975) = 1.96$ (3)

45 Now we get:

$0.95 = 1 - \alpha = P(-z \leq Z \leq z)$
$= P\left(-1.96 \leq \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \leq 1.96\right)$ (4)
$= P\left(\bar{X} - 1.96 \frac{\sigma}{\sqrt{n}} \leq \mu \leq \bar{X} + 1.96 \frac{\sigma}{\sqrt{n}}\right)$ (5)
$= P(\bar{X} - 1.96 \times 0.5 \leq \mu \leq \bar{X} + 1.96 \times 0.5)$ (6)
$= P(\bar{X} - 0.98 \leq \mu \leq \bar{X} + 0.98)$ (7)

This might be interpreted as: with probability 0.95 we will find a confidence interval in which we will meet the parameter $\mu$ between the stochastic endpoints $\bar{X} - 0.98$ and $\bar{X} + 0.98$.

46 Therefore, our 0.95 confidence interval becomes:

$(\bar{X} - 0.98, \, \bar{X} + 0.98) = (250.2 - 0.98, \, 250.2 + 0.98) = (249.22, \, 251.18)$
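
The same interval, recomputed with the slide's numbers (a sketch using scipy's normal quantile in place of the table lookup):

```python
import numpy as np
from scipy import stats

# The margarine interval with the slides' numbers: sigma = 2.5, n = 25,
# observed mean 250.2, alpha = 0.05.
sigma, n, xbar, alpha = 2.5, 25, 250.2, 0.05

z = stats.norm.ppf(1 - alpha / 2)            # 1.96
half_width = z * sigma / np.sqrt(n)          # 1.96 * 0.5 = 0.98
print(xbar - half_width, xbar + half_width)  # (249.22, 251.18)
```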

47 Recap

We know that the point estimator of $\beta_1$ is

$b_1 = \frac{\sum (X_i - \bar{X})(Y_i - \bar{Y})}{\sum (X_i - \bar{X})^2}$

Last class we derived the sampling distribution of $b_1$: it is $N(\beta_1, Var(b_1))$ (when $\sigma^2$ is known), with

$Var(b_1) = \sigma^2\{b_1\} = \frac{\sigma^2}{\sum (X_i - \bar{X})^2}$

And we suggested that an estimate of $Var(b_1)$ could be arrived at by substituting the MSE for $\sigma^2$ when $\sigma^2$ is unknown:

$s^2\{b_1\} = \frac{MSE}{\sum (X_i - \bar{X})^2} = \frac{SSE/(n-2)}{\sum (X_i - \bar{X})^2}$

48 Sampling Distribution of $(b_1 - \beta_1)/s\{b_1\}$

Since $b_1$ is normally distributed, $(b_1 - \beta_1)/\sigma\{b_1\}$ is a standard normal variable $N(0, 1)$. We don't know $Var(b_1)$, so it must be estimated from data; we have already denoted its estimate $s^2\{b_1\}$. Using this estimate, it can be shown that

$\frac{b_1 - \beta_1}{s\{b_1\}} \sim t(n-2)$

where $s\{b_1\} = \sqrt{s^2\{b_1\}}$. It is from this fact that our confidence intervals and tests will derive.

49 Where does this come from?

We need to rely upon (but will not derive) the following theorem: for the normal error regression model,

$\frac{SSE}{\sigma^2} = \frac{\sum (Y_i - \hat{Y}_i)^2}{\sigma^2} \sim \chi^2(n-2)$

and is independent of $b_0$ and $b_1$. Here

$b_1 = \frac{\sum (X_i - \bar{X})(Y_i - \bar{Y})}{\sum (X_i - \bar{X})^2} = \sum_i k_i Y_i, \quad k_i = \frac{X_i - \bar{X}}{\sum_i (X_i - \bar{X})^2}$
$b_0 = \bar{Y} - b_1 \bar{X}$

are two linear constraints imposed by the regression parameter estimation that each reduce the number of degrees of freedom by one (two in total).

50 Reminder: normal (non-regression) estimation

Intuitively, the regression result from the previous slide follows the standard result for the sum of squared standard normal random variables. First, with $\sigma$ and $\mu$ known,

$\sum_{i=1}^{n} Z_i^2 = \sum_{i=1}^{n} \left( \frac{Y_i - \mu}{\sigma} \right)^2 \sim \chi^2(n)$

and then with $\mu$ unknown,

$S^2 = \frac{1}{n-1} \sum_{i=1}^{n} (Y_i - \bar{Y})^2, \quad \frac{(n-1) S^2}{\sigma^2} = \sum_{i=1}^{n} \left( \frac{Y_i - \bar{Y}}{\sigma} \right)^2 \sim \chi^2(n-1)$

and $\bar{Y}$ and $S^2$ are independent.

51 Reminder: normal (non-regression) estimation cont.

With both $\mu$ and $\sigma$ unknown,

$\sqrt{n} \left( \frac{\bar{Y} - \mu}{S} \right) \sim t(n-1)$

because

$\sqrt{n} \left( \frac{\bar{Y} - \mu}{S} \right) = \frac{\sqrt{n} (\bar{Y} - \mu)/\sigma}{\sqrt{[(n-1) S^2 / \sigma^2] / (n-1)}} = \frac{N(0,1)}{\sqrt{\chi^2(n-1)/(n-1)}}$

52 Another useful fact: Student-t distribution

Let $Z$ and $\chi^2(\nu)$ be independent random variables (standard normal and $\chi^2$ respectively). We then define a t random variable as follows:

$t(\nu) = \frac{Z}{\sqrt{\chi^2(\nu)/\nu}}$

This version of the t distribution has one parameter, the degrees of freedom $\nu$.

53 Studentized statistic

By the given definition of the t distribution we have our result:

$\frac{b_1 - \beta_1}{s\{b_1\}} \sim t(n-2)$

because, putting everything together, we can see that

$\frac{b_1 - \beta_1}{s\{b_1\}} \sim \frac{z}{\sqrt{\chi^2(n-2)/(n-2)}}$

54 Confidence Intervals and Hypothesis Tests

Now that we know the sampling distribution of $b_1$ (t with $n-2$ degrees of freedom) we can construct confidence intervals and hypothesis tests easily.

55 Confidence Interval for $\beta_1$

Since the studentized statistic follows a t distribution we can make the following probability statement:

$P\left( t(\alpha/2; n-2) \leq \frac{b_1 - \beta_1}{s\{b_1\}} \leq t(1 - \alpha/2; n-2) \right) = 1 - \alpha$

56 Remember

Density: $f(y) = \frac{dF(y)}{dy}$
Distribution (CDF): $F(y) = P(Y \leq y) = \int_{-\infty}^{y} f(t) \, dt$
Inverse CDF: $F^{-1}(p) = y$ s.t. $\int_{-\infty}^{y} f(t) \, dt = p$

57 Interval arising from picking $\alpha$

Note that by symmetry

$t(\alpha/2; n-2) = -t(1 - \alpha/2; n-2)$

Rearranging terms and using this fact, we have

$P(b_1 - t(1 - \alpha/2; n-2) s\{b_1\} \leq \beta_1 \leq b_1 + t(1 - \alpha/2; n-2) s\{b_1\}) = 1 - \alpha$

And now we can use a table to look up $t(1 - \alpha/2; n-2)$ and produce confidence intervals.

58 Using tables for Computing Intervals

The tables in the book (Table B.2 in the appendix) give $t(1 - \alpha/2; \nu)$ where

$P\{t(\nu) \leq t(1 - \alpha/2; \nu)\} = A$

(with $A = 1 - \alpha/2$). This provides the inverse CDF of the t-distribution.

59 $1 - \alpha$ confidence limits for $\beta_1$

The $1 - \alpha$ confidence limits for $\beta_1$ are

$b_1 \pm t(1 - \alpha/2; n-2) s\{b_1\}$

Note that this quantity can be used to calculate confidence intervals given $n$ and $\alpha$.
- Fixing $\alpha$ can guide the choice of sample size if a particular confidence interval width is desired, and vice versa given a sample size.
- Also useful for hypothesis testing.
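
A minimal sketch of these confidence limits on synthetic data (true parameter values are illustrative assumptions); scipy's `t.ppf` plays the role of the table lookup:

```python
import numpy as np
from scipy import stats

# Sketch: 95% confidence limits b1 +/- t(1-alpha/2; n-2) s{b1} on
# synthetic data with true slope 2 (all parameter values illustrative).
rng = np.random.default_rng(4)
n, alpha = 50, 0.05
X = rng.uniform(0, 10, n)
Y = 1.0 + 2.0 * X + rng.normal(0, 0.5, n)

Sxx = np.sum((X - X.mean()) ** 2)
b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / Sxx
b0 = Y.mean() - b1 * X.mean()
mse = np.sum((Y - b0 - b1 * X) ** 2) / (n - 2)
s_b1 = np.sqrt(mse / Sxx)

t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)  # replaces the table lookup
print(b1 - t_crit * s_b1, b1 + t_crit * s_b1)  # should cover the true slope
```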

60 Tests Concerning $\beta_1$

Example 1: two-sided test

$H_0: \beta_1 = 0$
$H_a: \beta_1 \neq 0$

Test statistic:

$t^* = \frac{b_1 - 0}{s\{b_1\}}$

61 Tests Concerning $\beta_1$

We have an estimate of the sampling distribution of $b_1$ from the data. If the null hypothesis holds, then the $b_1$ estimate coming from the data should be within the 95% confidence interval of the sampling distribution centered at 0 (in this case):

$t^* = \frac{b_1 - 0}{s\{b_1\}}$

62 Decision rules

If $|t^*| \leq t(1 - \alpha/2; n-2)$, accept $H_0$
If $|t^*| > t(1 - \alpha/2; n-2)$, reject $H_0$

The absolute values make the test two-sided.

63 Calculating the p-value

The p-value, or attained significance level, is the smallest level of significance $\alpha$ for which the observed data indicate that the null hypothesis should be rejected. It can be looked up using the CDF of the test statistic.

64 p-value Example

An experiment is performed to determine whether a coin flip is fair (50% chance, each, of landing heads or tails) or unfairly biased toward one of the outcomes.

Outcome: suppose that the experimental results show the coin turning up heads 14 times out of 20 total flips.

p-value: the p-value of this result would be the chance of a fair coin landing on heads at least 14 times out of 20 flips.

Calculation:

$P(14 \text{ heads}) + P(15 \text{ heads}) + \cdots + P(20 \text{ heads})$ (8)
$= \frac{1}{2^{20}} \left[ \binom{20}{14} + \binom{20}{15} + \cdots + \binom{20}{20} \right] = \frac{60{,}460}{1{,}048{,}576} \approx 0.058$ (9)
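
A one-line check of this calculation with scipy's binomial distribution (the survival function `sf(k, n, p)` gives $P(X > k)$, hence $k = 13$ for "at least 14"):

```python
from scipy import stats

# P(at least 14 heads in 20 fair flips): binom.sf(k, n, p) is P(X > k),
# so k = 13 gives P(X >= 14).
p_value = stats.binom.sf(13, 20, 0.5)
print(p_value)                             # ~0.0577 = 60460/1048576
```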

65 Inferences Concerning $\beta_0$

Largely, inference procedures regarding $\beta_0$ can be performed in the same way as those for $\beta_1$. Remember the point estimator $b_0$ for $\beta_0$:

$b_0 = \bar{Y} - b_1 \bar{X}$

66 Sampling distribution of $b_0$

The sampling distribution of $b_0$ refers to the different values of $b_0$ that would be obtained with repeated sampling when the levels of the predictor variable $X$ are held constant from sample to sample. For the normal regression model the sampling distribution of $b_0$ is normal.

67 Sampling distribution of $b_0$

When the error variance is known,

$E(b_0) = \beta_0$
$\sigma^2\{b_0\} = \sigma^2 \left( \frac{1}{n} + \frac{\bar{X}^2}{\sum (X_i - \bar{X})^2} \right)$

When the error variance is unknown,

$s^2\{b_0\} = MSE \left( \frac{1}{n} + \frac{\bar{X}^2}{\sum (X_i - \bar{X})^2} \right)$

68 Confidence interval for $\beta_0$

The $1 - \alpha$ confidence limits for $\beta_0$ are obtained in the same manner as those for $\beta_1$:

$b_0 \pm t(1 - \alpha/2; n-2) s\{b_0\}$

69 Considerations on Inferences on $\beta_0$ and $\beta_1$

- Effects of departures from normality: the estimators of $\beta_0$ and $\beta_1$ have the property of asymptotic normality, i.e. their distributions approach normality as the sample size increases (under general conditions).
- Spacing of the $X$ levels: the variances of $b_0$ and $b_1$ (for a given $n$ and $\sigma^2$) depend strongly on the spacing of $X$.

70 Sampling distribution of the point estimator of the mean response

- Let $X_h$ be the level of $X$ for which we would like an estimate of the mean response; it may be one of the observed $X$'s or another value of the predictor within the scope of the model.
- The mean response when $X = X_h$ is denoted by $E(Y_h)$.
- The point estimator of $E(Y_h)$ is

$\hat{Y}_h = b_0 + b_1 X_h$

We are interested in the sampling distribution of this quantity.

71 Sampling Distribution of $\hat{Y}_h$

We have

$\hat{Y}_h = b_0 + b_1 X_h$

Since this quantity is itself a linear combination of the $Y_i$'s, its sampling distribution is itself normal. The mean of the sampling distribution is

$E\{\hat{Y}_h\} = E\{b_0\} + E\{b_1\} X_h = \beta_0 + \beta_1 X_h$

Biased or unbiased?

72 Sampling Distribution of $\hat{Y}_h$

To derive the variance of the sampling distribution of the mean response, we first show that $b_1$ and $(1/n) \sum Y_i$ are uncorrelated and, hence, for the normal error regression model, independent. We start with the definitions

$\bar{Y} = \sum \left( \frac{1}{n} \right) Y_i$
$b_1 = \sum k_i Y_i, \quad k_i = \frac{X_i - \bar{X}}{\sum (X_i - \bar{X})^2}$

73 Sampling Distribution of $\hat{Y}_h$

We want to show that the mean response and the estimate $b_1$ are uncorrelated:

$Cov(\bar{Y}, b_1) = \sigma^2\{\bar{Y}, b_1\} = 0$

To do this we need the following result (A.32):

$\sigma^2 \left\{ \sum_{i=1}^{n} a_i Y_i, \sum_{i=1}^{n} c_i Y_i \right\} = \sum_{i=1}^{n} a_i c_i \sigma^2\{Y_i\}$

when the $Y_i$ are independent.

74 Sampling Distribution of $\hat{Y}_h$

Using this fact we have

$\sigma^2 \left\{ \sum_{i=1}^{n} \frac{1}{n} Y_i, \sum_{i=1}^{n} k_i Y_i \right\} = \sum_{i=1}^{n} \frac{1}{n} k_i \sigma^2\{Y_i\} = \sum_{i=1}^{n} \frac{1}{n} k_i \sigma^2 = \frac{\sigma^2}{n} \sum_{i=1}^{n} k_i = 0$

So $\bar{Y}$ and $b_1$ are uncorrelated.

75 Sampling Distribution of $\hat{Y}_h$

This means that we can write down the variance

$\sigma^2\{\hat{Y}_h\} = \sigma^2\{\bar{Y} + b_1 (X_h - \bar{X})\}$

(an alternative and equivalent form of the regression function). But we know that the mean of $Y$ and $b_1$ are uncorrelated, so

$\sigma^2\{\hat{Y}_h\} = \sigma^2\{\bar{Y}\} + \sigma^2\{b_1\} (X_h - \bar{X})^2$

76 Sampling Distribution of $\hat{Y}_h$

We know (from the last lecture)

$\sigma^2\{b_1\} = \frac{\sigma^2}{\sum (X_i - \bar{X})^2}, \quad s^2\{b_1\} = \frac{MSE}{\sum (X_i - \bar{X})^2}$

And we can find

$\sigma^2\{\bar{Y}\} = \frac{1}{n^2} \sum \sigma^2\{Y_i\} = \frac{n \sigma^2}{n^2} = \frac{\sigma^2}{n}$

77 Sampling Distribution of $\hat{Y}_h$

So, plugging in, we get

$\sigma^2\{\hat{Y}_h\} = \frac{\sigma^2}{n} + \frac{\sigma^2}{\sum (X_i - \bar{X})^2} (X_h - \bar{X})^2$

or

$\sigma^2\{\hat{Y}_h\} = \sigma^2 \left( \frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum (X_i - \bar{X})^2} \right)$

78 Sampling Distribution of $\hat{Y}_h$

Since we often won't know $\sigma^2$ we can, as usual, plug in $S^2 = SSE/(n-2)$, our estimate of it, to get our estimate of this sampling distribution variance:

$s^2\{\hat{Y}_h\} = S^2 \left( \frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum (X_i - \bar{X})^2} \right)$

79 No surprise...

The studentized point estimator of the mean response follows a t-distribution with $n-2$ degrees of freedom:

$\frac{\hat{Y}_h - E\{Y_h\}}{s\{\hat{Y}_h\}} \sim t(n-2)$

This means that we can construct confidence intervals in the same manner as before.

80 Confidence Intervals for $E(Y_h)$

The $1 - \alpha$ confidence limits for $E(Y_h)$ are

$\hat{Y}_h \pm t(1 - \alpha/2; n-2) s\{\hat{Y}_h\}$

From this, hypothesis tests can be constructed as usual.
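
A minimal sketch of this interval at a hypothetical level $X_h = 5$ on synthetic data (the level and all parameter values are illustrative):

```python
import numpy as np
from scipy import stats

# Sketch: confidence interval for the mean response E(Y_h) at X_h = 5
# on synthetic data (X_h and all parameter values are illustrative).
rng = np.random.default_rng(5)
n, alpha, Xh = 50, 0.05, 5.0
X = rng.uniform(0, 10, n)
Y = 1.0 + 2.0 * X + rng.normal(0, 0.5, n)

Sxx = np.sum((X - X.mean()) ** 2)
b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / Sxx
b0 = Y.mean() - b1 * X.mean()
mse = np.sum((Y - b0 - b1 * X) ** 2) / (n - 2)

Yh_hat = b0 + b1 * Xh
s_Yh = np.sqrt(mse * (1 / n + (Xh - X.mean()) ** 2 / Sxx))
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
print(Yh_hat - t_crit * s_Yh, Yh_hat + t_crit * s_Yh)
```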

81 Comments

- The variance of the estimator for $E(Y_h)$ is smallest near the mean of $X$. Designing studies such that the mean of $X$ is near $X_h$ will improve inference precision.
- When $X_h$ is zero, the variance of the estimator for $E(Y_h)$ reduces to the variance of the estimator $b_0$ for $\beta_0$.

82 Prediction interval for a single new observation

This essentially follows the sampling distribution arguments for $E(Y_h)$. If all the regression parameters are known, then the $1 - \alpha$ prediction interval for a new observation $Y_h$ is

$E\{Y_h\} \pm z(1 - \alpha/2) \sigma$

83 Prediction interval for a single new observation

If the regression parameters are unknown, the $1 - \alpha$ prediction interval for a new observation $Y_h$ is given by the following theorem:

$\hat{Y}_h \pm t(1 - \alpha/2; n-2) s\{pred\}$

This is very nearly the same as the interval for the mean response, but it includes a correction for the additional variability arising from the fact that the new input location was not used in the original estimates of $b_1$, $b_0$, and $s^2$.

84 Prediction interval for a single new observation

We have

$\sigma^2\{pred\} = \sigma^2\{Y_h - \hat{Y}_h\} = \sigma^2\{Y_h\} + \sigma^2\{\hat{Y}_h\} = \sigma^2 + \sigma^2\{\hat{Y}_h\}$

An unbiased estimator of $\sigma^2\{pred\}$ is $s^2\{pred\} = MSE + s^2\{\hat{Y}_h\}$, which is given by

$s^2\{pred\} = MSE \left[ 1 + \frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum (X_i - \bar{X})^2} \right]$
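
And the corresponding prediction interval; the only change from the mean-response sketch above is the extra "1 +" inside $s^2\{pred\}$ (again synthetic data with illustrative values):

```python
import numpy as np
from scipy import stats

# Sketch: prediction interval for a new observation at X_h = 5; note
# the extra "1 +" in s^2{pred} versus the mean-response interval.
rng = np.random.default_rng(6)
n, alpha, Xh = 50, 0.05, 5.0
X = rng.uniform(0, 10, n)
Y = 1.0 + 2.0 * X + rng.normal(0, 0.5, n)

Sxx = np.sum((X - X.mean()) ** 2)
b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / Sxx
b0 = Y.mean() - b1 * X.mean()
mse = np.sum((Y - b0 - b1 * X) ** 2) / (n - 2)

Yh_hat = b0 + b1 * Xh
s_pred = np.sqrt(mse * (1 + 1 / n + (Xh - X.mean()) ** 2 / Sxx))
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
print(Yh_hat - t_crit * s_pred, Yh_hat + t_crit * s_pred)
```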
