Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter. Notes taken by Dexter Chua. Lent 2015
These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures. They are nowhere near accurate representations of what was actually lectured, and in particular, all errors are almost surely mine.

Estimation
Review of distribution and density functions, parametric families. Examples: binomial, Poisson, gamma. Sufficiency, minimal sufficiency, the Rao-Blackwell theorem. Maximum likelihood estimation. Confidence intervals. Use of prior distributions and Bayesian inference. [5]

Hypothesis testing
Simple examples of hypothesis testing, null and alternative hypothesis, critical region, size, power, type I and type II errors, Neyman-Pearson lemma. Significance level of outcome. Uniformly most powerful tests. Likelihood ratio, and use of generalised likelihood ratio to construct test statistics for composite hypotheses. Examples, including t-tests and F-tests. Relationship with confidence intervals. Goodness-of-fit tests and contingency tables. [4]

Linear models
Derivation and joint distribution of maximum likelihood estimators, least squares, Gauss-Markov theorem. Testing hypotheses, geometric interpretation. Examples, including simple linear regression and one-way analysis of variance. Use of software. [7]
Contents

0 Introduction
1 Estimation
  1.1 Estimators
  1.2 Mean squared error
  1.3 Sufficiency
  1.4 Likelihood
  1.5 Confidence intervals
  1.6 Bayesian estimation
2 Hypothesis testing
  2.1 Simple hypotheses
  2.2 Composite hypotheses
  2.3 Tests of goodness-of-fit and independence
      Goodness-of-fit of a fully-specified null distribution
      Pearson's chi-squared test
      Testing independence in contingency tables
  2.4 Tests of homogeneity, and connections to confidence intervals
      Tests of homogeneity
      Confidence intervals and hypothesis tests
  2.5 Multivariate normal theory
      Multivariate normal distribution
      Normal random samples
  2.6 Student's t-distribution
3 Linear models
  3.1 Linear models
  3.2 Simple linear regression
  3.3 Linear models with normal assumptions
  3.4 The F distribution
  3.5 Inference for β
  3.6 Simple linear regression
  3.7 Expected response at x
  3.8 Hypothesis testing
      Hypothesis testing
      Simple linear regression
      One-way analysis of variance with equal numbers in each group
0 Introduction
1 Estimation

1.1 Estimators

1.2 Mean squared error

1.3 Sufficiency

Theorem (The factorization criterion). T is sufficient for θ if and only if

    f_X(x | θ) = g(T(x), θ) h(x)

for some functions g and h.

Proof. We first prove the discrete case. Suppose f_X(x | θ) = g(T(x), θ) h(x). If T(x) = t, then

    f_{X | T=t}(x) = P_θ(X = x, T(X) = t) / P_θ(T = t)
                   = g(T(x), θ) h(x) / Σ_{y : T(y)=t} g(T(y), θ) h(y)
                   = g(t, θ) h(x) / (g(t, θ) Σ_{y : T(y)=t} h(y))
                   = h(x) / Σ_{y : T(y)=t} h(y),

which doesn't depend on θ. So T is sufficient.

The continuous case is similar. If f_X(x | θ) = g(T(x), θ) h(x), and T(x) = t, then

    f_{X | T=t}(x) = g(T(x), θ) h(x) / ∫_{y : T(y)=t} g(T(y), θ) h(y) dy
                   = g(t, θ) h(x) / (g(t, θ) ∫_{y : T(y)=t} h(y) dy)
                   = h(x) / ∫_{y : T(y)=t} h(y) dy,

which does not depend on θ.

Now suppose T is sufficient, so that the conditional distribution of X given T = t does not depend on θ. Then

    P_θ(X = x) = P_θ(X = x, T = T(x)) = P_θ(X = x | T = T(x)) P_θ(T = T(x)).

The first factor does not depend on θ by assumption; call it h(x). Let the second factor be g(t, θ), and so we have the required factorization.

Theorem. Suppose T = T(X) is a statistic such that f_X(x; θ)/f_X(y; θ) does not depend on θ if and only if T(x) = T(y). Then T is minimal sufficient for θ.
Proof. First we have to show sufficiency. We will use the factorization criterion to do so.

Firstly, for each possible t, pick a favorite x_t such that T(x_t) = t. Now let x ∈ X^n and let T(x) = t, so T(x) = T(x_t). By the hypothesis, f_X(x; θ)/f_X(x_t; θ) does not depend on θ. Let this be h(x). Let g(t, θ) = f_X(x_t; θ). Then

    f_X(x; θ) = f_X(x_t; θ) · f_X(x; θ)/f_X(x_t; θ) = g(t, θ) h(x).

So T is sufficient for θ.

To show that this is minimal, suppose that S(X) is also sufficient. By the factorization criterion, there exist functions g_S and h_S such that

    f_X(x; θ) = g_S(S(x), θ) h_S(x).

Now suppose that S(x) = S(y). Then

    f_X(x; θ)/f_X(y; θ) = g_S(S(x), θ) h_S(x) / (g_S(S(y), θ) h_S(y)) = h_S(x)/h_S(y).

This means that the ratio f_X(x; θ)/f_X(y; θ) does not depend on θ. By the hypothesis, this implies that T(x) = T(y). So we know that S(x) = S(y) implies T(x) = T(y). So T is a function of S. So T is minimal sufficient.

Theorem (Rao-Blackwell theorem). Let T be a sufficient statistic for θ and let θ̃ be an estimator for θ with E(θ̃²) < ∞ for all θ. Let θ̂(x) = E[θ̃(X) | T(X) = T(x)]. Then for all θ,

    E[(θ̂ - θ)²] ≤ E[(θ̃ - θ)²].

The inequality is strict unless θ̃ is a function of T.

Proof. By the conditional expectation formula, we have E(θ̂) = E[E(θ̃ | T)] = E(θ̃). So they have the same bias. By the conditional variance formula,

    var(θ̃) = E[var(θ̃ | T)] + var[E(θ̃ | T)] = E[var(θ̃ | T)] + var(θ̂).

Hence var(θ̃) ≥ var(θ̂). So mse(θ̃) ≥ mse(θ̂), with equality only if var(θ̃ | T) = 0.

1.4 Likelihood

1.5 Confidence intervals

1.6 Bayesian estimation
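The Rao-Blackwell theorem can be illustrated numerically. In the sketch below (the setup is ours, not from the notes), X_1, …, X_n are iid Bernoulli(p), T = Σ X_i is sufficient, the crude unbiased estimator is θ̃ = X_1, and conditioning on T gives E[X_1 | T] = T/n = X̄. The Rao-Blackwellized estimator should have no larger mean squared error.

```python
# Monte Carlo comparison of mse(X_1) and mse(X_bar) for Bernoulli(p) data.
# All names and parameter values here are illustrative choices.
import random

random.seed(0)
p, n, reps = 0.3, 10, 20000
mse_crude = mse_rb = 0.0
for _ in range(reps):
    x = [1 if random.random() < p else 0 for _ in range(n)]
    mse_crude += (x[0] - p) ** 2        # theta-tilde = X_1
    mse_rb += (sum(x) / n - p) ** 2     # theta-hat = E[X_1 | T] = X_bar
mse_crude /= reps
mse_rb /= reps

# Theory: mse(X_1) = p(1-p) = 0.21, mse(X_bar) = p(1-p)/n = 0.021.
assert mse_rb < mse_crude
```

Here the improvement is by a factor of n, matching var(X̄) = p(1 - p)/n against var(X_1) = p(1 - p).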
2 Hypothesis testing

2.1 Simple hypotheses

Lemma (Neyman-Pearson lemma). Suppose H_0: f = f_0, H_1: f = f_1, where f_0 and f_1 are continuous densities that are nonzero on the same regions. Then among all tests of size less than or equal to α, the test with the largest power is the likelihood ratio test of size α.

Proof. Under the likelihood ratio test, our critical region is

    C = {x : f_1(x)/f_0(x) > k},

where k is chosen such that α = P(reject H_0 | H_0) = P(X ∈ C | H_0) = ∫_C f_0(x) dx. The probability of a Type II error is given by

    β = P(X ∉ C | f_1) = ∫_{C̄} f_1(x) dx.

Let C* be the critical region of any other test with size less than or equal to α. Let α* = P(X ∈ C* | f_0) and β* = P(X ∉ C* | f_1). We want to show β ≤ β*.

We know α* ≤ α, i.e.

    ∫_{C*} f_0(x) dx ≤ ∫_C f_0(x) dx.

Also, on C we have f_1(x) > k f_0(x), while on C̄ we have f_1(x) ≤ k f_0(x). So

    ∫_{C ∩ C̄*} f_1(x) dx ≥ k ∫_{C ∩ C̄*} f_0(x) dx,
    ∫_{C̄ ∩ C*} f_1(x) dx ≤ k ∫_{C̄ ∩ C*} f_0(x) dx.

Hence

    β* - β = ∫_{C̄*} f_1(x) dx - ∫_{C̄} f_1(x) dx
           = ∫_{C ∩ C̄*} f_1(x) dx + ∫_{C̄ ∩ C̄*} f_1(x) dx - ∫_{C̄ ∩ C*} f_1(x) dx - ∫_{C̄ ∩ C̄*} f_1(x) dx
           = ∫_{C ∩ C̄*} f_1(x) dx - ∫_{C̄ ∩ C*} f_1(x) dx
           ≥ k ∫_{C ∩ C̄*} f_0(x) dx - k ∫_{C̄ ∩ C*} f_0(x) dx
           = k { ∫_{C ∩ C̄*} f_0(x) dx + ∫_{C ∩ C*} f_0(x) dx } - k { ∫_{C̄ ∩ C*} f_0(x) dx + ∫_{C ∩ C*} f_0(x) dx }
           = k ( ∫_C f_0(x) dx - ∫_{C*} f_0(x) dx )
           = k(α - α*) ≥ 0.

So β ≤ β*, as required.
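A concrete instance of the Neyman-Pearson construction (our example, not one from the notes): test H_0: X ~ N(0, 1) against H_1: X ~ N(1, 1) on a single observation. The likelihood ratio f_1(x)/f_0(x) = exp(x - 1/2) is increasing in x, so the likelihood ratio test rejects for large x, and k translates into a normal quantile.

```python
# Size and power of the likelihood ratio test for N(0,1) vs N(1,1),
# using only the standard library.
from statistics import NormalDist

alpha = 0.05
null = NormalDist(0, 1)
# Reject when X > c, with c chosen so that P(X > c | H0) = alpha.
c = null.inv_cdf(1 - alpha)
size = 1 - null.cdf(c)
# Power is computed under H1: X ~ N(1, 1).
power = 1 - NormalDist(1, 1).cdf(c)

assert abs(size - alpha) < 1e-9
assert 0.25 < power < 0.27   # approximately 0.26
```

By the lemma, no other test of size at most 0.05 can beat this power.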
2.2 Composite hypotheses

Theorem (Generalized likelihood ratio theorem). Suppose Θ_0 ⊆ Θ_1 and dim Θ_1 - dim Θ_0 = p. Let X = (X_1, …, X_n) with all X_i iid. Then if H_0 is true, as n → ∞,

    2 log Λ_X(H_0 : H_1) ~ χ²_p.

If H_0 is not true, then 2 log Λ tends to be larger. We reject H_0 if 2 log Λ > c, where c = χ²_p(α) for a test of approximately size α.

2.3 Tests of goodness-of-fit and independence

Goodness-of-fit of a fully-specified null distribution

Pearson's chi-squared test

Testing independence in contingency tables

2.4 Tests of homogeneity, and connections to confidence intervals

Tests of homogeneity

Confidence intervals and hypothesis tests

Theorem.
(i) Suppose that for every θ_0 ∈ Θ there is a size α test of H_0: θ = θ_0. Denote the acceptance region by A(θ_0). Then the set I(X) = {θ : X ∈ A(θ)} is a 100(1 - α)% confidence set for θ.
(ii) Suppose I(X) is a 100(1 - α)% confidence set for θ. Then A(θ_0) = {X : θ_0 ∈ I(X)} is an acceptance region for a size α test of H_0: θ = θ_0.

Proof. First note that θ_0 ∈ I(X) iff X ∈ A(θ_0).

For (i), since the test is size α, we have

    P(accept H_0 | H_0 is true) = P(X ∈ A(θ_0) | θ = θ_0) = 1 - α.
And so

    P(θ_0 ∈ I(X) | θ = θ_0) = P(X ∈ A(θ_0) | θ = θ_0) = 1 - α.

For (ii), since I(X) is a 100(1 - α)% confidence set, we have P(θ_0 ∈ I(X) | θ = θ_0) = 1 - α. So

    P(X ∈ A(θ_0) | θ = θ_0) = P(θ_0 ∈ I(X) | θ = θ_0) = 1 - α.

2.5 Multivariate normal theory

Multivariate normal distribution

Proposition.
(i) If X ~ N_n(μ, Σ), and A is an m × n matrix, then AX ~ N_m(Aμ, AΣAᵀ).
(ii) If X ~ N_n(0, σ²I), then

    |X|²/σ² = XᵀX/σ² = Σ X_i²/σ² ~ χ²_n.

Instead of writing |X|²/σ² ~ χ²_n, we often just say |X|² ~ σ²χ²_n.

Proof.
(i) See example sheet 3.
(ii) Immediate from the definition of χ²_n.

Proposition. Let X ~ N_n(μ, Σ). We split X up into two parts, X = (X_1, X_2)ᵀ, where X_i is an n_i × 1 column vector and n_1 + n_2 = n. Similarly write

    μ = (μ_1, μ_2)ᵀ,   Σ = [Σ_11 Σ_12; Σ_21 Σ_22],

where Σ_ij is an n_i × n_j matrix. Then
(i) X_i ~ N_{n_i}(μ_i, Σ_ii);
(ii) X_1 and X_2 are independent iff Σ_12 = 0.

Proof.
(i) See example sheet 3.
(ii) Note that by symmetry of Σ, Σ_12 = 0 if and only if Σ_21 = 0. The mgf of X is M_X(t) = exp(tᵀμ + ½tᵀΣt) for each t ∈ R^n. We write t = (t_1, t_2)ᵀ. Then the mgf is equal to

    M_X(t) = exp(t_1ᵀμ_1 + t_2ᵀμ_2 + ½t_1ᵀΣ_11 t_1 + ½t_2ᵀΣ_22 t_2 + ½t_1ᵀΣ_12 t_2 + ½t_2ᵀΣ_21 t_1).

From (i), we know that M_{X_i}(t_i) = exp(t_iᵀμ_i + ½t_iᵀΣ_ii t_i). So M_X(t) = M_{X_1}(t_1) M_{X_2}(t_2) for all t if and only if Σ_12 = 0.
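The "uncorrelated implies independent" property is special to the joint normal. A quick empirical sketch (our construction, with made-up ρ): build (X_1, X_2) jointly normal with cov(X_1, X_2) = ρ by mixing two independent standard normals, and check that the sample covariance recovers the prescribed Σ_12, including Σ_12 = 0.

```python
# Sample covariance of a constructed bivariate normal should match the
# prescribed off-diagonal entry Sigma_12 = rho.
import random

random.seed(2)

def bivariate_normal(rho, n):
    """n draws of (X1, X2), each N(0,1) marginally, with cov = rho."""
    out = []
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        out.append((z1, rho * z1 + (1 - rho**2) ** 0.5 * z2))
    return out

def sample_cov(pairs):
    n = len(pairs)
    m1 = sum(a for a, _ in pairs) / n
    m2 = sum(b for _, b in pairs) / n
    return sum((a - m1) * (b - m2) for a, b in pairs) / (n - 1)

cov_corr = sample_cov(bivariate_normal(0.6, 50000))
cov_zero = sample_cov(bivariate_normal(0.0, 50000))
assert abs(cov_corr - 0.6) < 0.03
assert abs(cov_zero) < 0.03
```

With ρ = 0 the two coordinates are built from disjoint sets of independent normals, which is exactly the independence the proposition asserts.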
Proposition. When Σ is positive definite, X has pdf

    f_X(x; μ, Σ) = (2π)^{-n/2} |Σ|^{-1/2} exp[-½ (x - μ)ᵀ Σ⁻¹ (x - μ)].

Normal random samples

Theorem (Joint distribution of X̄ and S_XX). Suppose X_1, …, X_n are iid N(μ, σ²), and let X̄ = (1/n) Σ X_i and S_XX = Σ (X_i - X̄)². Then
(i) X̄ ~ N(μ, σ²/n);
(ii) S_XX/σ² ~ χ²_{n-1};
(iii) X̄ and S_XX are independent.

Proof. We can write the joint distribution as X ~ N_n(μ, σ²I), where μ = (μ, μ, …, μ)ᵀ. Let A be an n × n orthogonal matrix whose first row is all 1/√n (the other rows are not important). One possible such matrix is the Helmert matrix, whose j-th row, for j = 2, …, n, is

    (1/√(j(j-1)), …, 1/√(j(j-1)), -(j-1)/√(j(j-1)), 0, …, 0),

with j - 1 copies of 1/√(j(j-1)).

Now define Y = AX. Then

    Y ~ N_n(Aμ, Aσ²IAᵀ) = N_n(Aμ, σ²I).

We have Aμ = (√n μ, 0, …, 0)ᵀ. So Y_1 ~ N(√n μ, σ²) and Y_i ~ N(0, σ²) for i = 2, …, n. Also, Y_1, …, Y_n are independent, since every non-diagonal term of the covariance matrix is 0.

But from the definition of A, we have Y_1 = (1/√n) Σ_{i=1}^n X_i = √n X̄. So √n X̄ ~ N(√n μ, σ²), or X̄ ~ N(μ, σ²/n). Also,

    Y_2² + ⋯ + Y_n² = YᵀY - Y_1²
                    = XᵀAᵀAX - Y_1²
                    = XᵀX - n X̄²
                    = Σ_{i=1}^n X_i² - n X̄²
                    = Σ_{i=1}^n (X_i - X̄)²
                    = S_XX.

So S_XX = Y_2² + ⋯ + Y_n² ~ σ²χ²_{n-1}. Finally, since Y_1 and Y_2, …, Y_n are independent, so are X̄ and S_XX.
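The theorem above can be checked by simulation. In this sketch (parameter values μ = 2, σ = 1.5, n = 8 are ours), E[S_XX/σ²] should be the χ²_{n-1} mean n - 1, and the sample correlation between X̄ and S_XX should be near zero, consistent with independence.

```python
# Simulation check of the joint distribution of X_bar and S_XX for
# iid N(mu, sigma^2) samples.
import random

random.seed(3)
mu, sigma, n, reps = 2.0, 1.5, 8, 20000
xbars, sxxs = [], []
for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(x) / n
    xbars.append(xbar)
    sxxs.append(sum((xi - xbar) ** 2 for xi in x) / sigma**2)

mean_sxx = sum(sxxs) / reps
assert abs(mean_sxx - (n - 1)) < 0.2   # E[chi2_{n-1}] = n - 1

# Sample correlation between X_bar and S_XX / sigma^2.
mx, ms = sum(xbars) / reps, mean_sxx
cov = sum((a - mx) * (b - ms) for a, b in zip(xbars, sxxs)) / reps
vx = sum((a - mx) ** 2 for a in xbars) / reps
vs = sum((b - ms) ** 2 for b in sxxs) / reps
corr = cov / (vx * vs) ** 0.5
assert abs(corr) < 0.03
```

Zero correlation alone would not prove independence in general, but it is the observable consequence that a simulation can confirm.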
2.6 Student's t-distribution

Proposition. If k > 1, then E_k(T) = 0. If k > 2, then var_k(T) = k/(k - 2). If 1 < k ≤ 2, then var_k(T) = ∞. In all other cases, the moments are undefined. In particular, in the k = 1 case, the distribution is known as the Cauchy distribution, and has undefined mean and variance.
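These moments can be checked by simulating T directly from its definition, T = Z/√(V/k) with Z ~ N(0, 1) and V ~ χ²_k independent (a sketch; the choice k = 10 is ours, giving var = 10/8 = 1.25).

```python
# Simulate t_k draws as Z / sqrt(V/k) and compare sample moments with
# E(T) = 0 and var(T) = k/(k-2) for k = 10.
import random

random.seed(4)
k, reps = 10, 50000
draws = []
for _ in range(reps):
    z = random.gauss(0, 1)
    v = sum(random.gauss(0, 1) ** 2 for _ in range(k))  # chi-squared_k
    draws.append(z / (v / k) ** 0.5)

t_mean = sum(draws) / reps
t_var = sum((t - t_mean) ** 2 for t in draws) / reps
assert abs(t_mean) < 0.02
assert abs(t_var - k / (k - 2)) < 0.05
```

For k = 1 the same construction produces Cauchy draws, whose running sample mean fails to settle down, reflecting the undefined mean.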
3 Linear models

3.1 Linear models

Proposition. The least squares estimator satisfies

    XᵀX β̂ = XᵀY.    (3)

3.2 Simple linear regression

Theorem (Gauss-Markov theorem). In a full rank linear model, let β̂ be the least squares estimator of β and let β* be any other unbiased estimator for β which is linear in the Y_i's. Then

    var(tᵀβ̂) ≤ var(tᵀβ*)

for all t ∈ R^p. We say that β̂ is the best linear unbiased estimator (BLUE) of β.

Proof. Since β* is linear in the Y_i's, β* = AY for some p × n matrix A. Since β* is an unbiased estimator, we must have E[β*] = β. However, since β* = AY, E[β*] = A E[Y] = AXβ. So we must have β = AXβ. Since this holds for any β, we must have AX = I_p. Now

    cov(β*) = E[(β* - β)(β* - β)ᵀ]
            = E[(AY - β)(AY - β)ᵀ]
            = E[(AXβ + Aε - β)(AXβ + Aε - β)ᵀ].

Since AXβ = β, this is equal to

    E[Aε(Aε)ᵀ] = A(σ²I)Aᵀ = σ²AAᵀ.

Now let β* - β̂ = (A - (XᵀX)⁻¹Xᵀ)Y = BY, for some B. Then

    BX = AX - (XᵀX)⁻¹XᵀX = I_p - I_p = 0.

By definition, we have AY = BY + (XᵀX)⁻¹XᵀY, and this is true for all Y. So A = B + (XᵀX)⁻¹Xᵀ. Hence

    cov(β*) = σ²AAᵀ
            = σ²(B + (XᵀX)⁻¹Xᵀ)(B + (XᵀX)⁻¹Xᵀ)ᵀ
            = σ²(BBᵀ + (XᵀX)⁻¹)
            = σ²BBᵀ + cov(β̂).

Note that in the second line, the cross-terms disappear since BX = 0. So for any t ∈ R^p, we have

    var(tᵀβ*) = tᵀcov(β*)t = tᵀcov(β̂)t + σ² tᵀBBᵀt = var(tᵀβ̂) + σ²|Bᵀt|² ≥ var(tᵀβ̂).
Taking t = (0, …, 0, 1, 0, …, 0)ᵀ with a 1 in the i-th position, we have var(β̂_i) ≤ var(β*_i).

3.3 Linear models with normal assumptions

Proposition. Under normal assumptions the maximum likelihood estimator for a linear model is

    β̂ = (XᵀX)⁻¹XᵀY,

which is the same as the least squares estimator.

Lemma.
(i) If Z ~ N_n(0, σ²I) and A is an n × n symmetric, idempotent matrix with rank r, then ZᵀAZ ~ σ²χ²_r.
(ii) For a symmetric idempotent matrix A, rank(A) = tr(A).

Proof.
(i) Since A is idempotent, A² = A by definition. So the eigenvalues of A are either 0 or 1 (since λx = Ax = A²x = λ²x). Since A is also symmetric, it is diagonalizable. So there exists an orthogonal Q such that

    Λ = QᵀAQ = diag(λ_1, …, λ_n) = diag(1, …, 1, 0, …, 0),

with r copies of 1 and n - r copies of 0. Let W = QᵀZ, so Z = QW. Then W ~ N_n(0, σ²I), since cov(W) = Qᵀσ²IQ = σ²I. Then

    ZᵀAZ = WᵀQᵀAQW = WᵀΛW = Σ_{i=1}^r W_i² ~ σ²χ²_r.

(ii) rank(A) = rank(Λ) = tr(Λ) = tr(QᵀAQ) = tr(AQQᵀ) = tr(A).

Theorem. For the normal linear model Y ~ N_n(Xβ, σ²I),
(i) β̂ ~ N_p(β, σ²(XᵀX)⁻¹);
(ii) RSS/σ² ~ χ²_{n-p}, and so σ̂² ~ (σ²/n)χ²_{n-p};
(iii) β̂ and σ̂² are independent.

Proof.
We have β̂ = (XᵀX)⁻¹XᵀY; call this CY for later use. Then β̂ has a normal distribution with mean

    (XᵀX)⁻¹Xᵀ(Xβ) = β

and covariance

    (XᵀX)⁻¹Xᵀ(σ²I)[(XᵀX)⁻¹Xᵀ]ᵀ = σ²(XᵀX)⁻¹.

So β̂ ~ N_p(β, σ²(XᵀX)⁻¹).

Our previous lemma says that ZᵀAZ ~ σ²χ²_r. So we pick our Z and A so that ZᵀAZ = RSS, and r, the rank of A, is n - p. Let Z = Y - Xβ and A = I_n - P, where P = X(XᵀX)⁻¹Xᵀ. We first check that the conditions of the lemma hold: since Y ~ N_n(Xβ, σ²I), we have Z = Y - Xβ ~ N_n(0, σ²I); since P is idempotent, I_n - P is as well (check!); and rank(I_n - P) = tr(I_n - P) = n - p. Therefore the conditions of the lemma hold.

To get the final useful result, we want to show that the RSS is indeed ZᵀAZ. We simplify the expressions of RSS and ZᵀAZ and show that they are equal:

    ZᵀAZ = (Y - Xβ)ᵀ(I_n - P)(Y - Xβ) = Yᵀ(I_n - P)Y,

using the fact that (I_n - P)X = 0. Writing R = Y - Ŷ = (I_n - P)Y, we have

    RSS = RᵀR = Yᵀ(I_n - P)Y,

using the symmetry and idempotence of I_n - P. Hence

    RSS = ZᵀAZ ~ σ²χ²_{n-p}.

Then

    σ̂² = RSS/n ~ (σ²/n)χ²_{n-p}.

Let V = (β̂, R)ᵀ = DY, where D = (C; I_n - P) is a (p + n) × n matrix. Since Y is multivariate normal, V is multivariate normal with

    cov(V) = Dσ²IDᵀ = σ² [ CCᵀ         C(I_n - P)ᵀ
                           (I_n - P)Cᵀ (I_n - P)(I_n - P)ᵀ ]
            = σ² [ CCᵀ  0
                   0    I_n - P ],

using C(I_n - P) = 0 (since (XᵀX)⁻¹Xᵀ(I_n - P) = 0, which follows from (I_n - P)X = 0; check!). Hence β̂ and R are independent, since the off-diagonal covariance terms are 0. So β̂ and RSS = RᵀR are independent. So β̂ and σ̂² are independent.
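Part (ii) of the theorem can be checked by simulation in the simple linear regression case p = 2, where the normal equations have the familiar closed form β̂_1 = S_xy/S_xx, β̂_0 = ȳ - β̂_1 x̄. In this sketch (design, true coefficients, and σ are made-up values), the average of RSS/σ² over many replications should be the χ²_{n-p} mean n - p.

```python
# Fit y = b0 + b1 x by least squares on a fixed design and check that
# E[RSS / sigma^2] = n - p with p = 2.
import random

random.seed(5)
xs = [float(i) for i in range(10)]     # fixed design, n = 10
n, p, sigma, reps = len(xs), 2, 0.7, 5000
xbar = sum(xs) / n
sxx = sum((x - xbar) ** 2 for x in xs)

total = 0.0
for _ in range(reps):
    ys = [1.0 + 2.0 * x + random.gauss(0, sigma) for x in xs]
    ybar = sum(ys) / n
    b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    b0 = ybar - b1 * xbar
    rss = sum((y - b0 - b1 * x) ** 2 for x, y in zip(xs, ys))
    total += rss / sigma**2
mean_rss = total / reps
assert abs(mean_rss - (n - p)) < 0.3   # E[chi2_{n-p}] = n - p = 8
```

This is also why σ̂² = RSS/n is biased downward, while RSS/(n - p) is unbiased for σ².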
3.4 The F distribution

Proposition. If X ~ F_{m,n}, then 1/X ~ F_{n,m}.

3.5 Inference for β

3.6 Simple linear regression

3.7 Expected response at x

3.8 Hypothesis testing

Hypothesis testing

Lemma. Suppose Z ~ N_n(0, σ²I_n), and A_1 and A_2 are symmetric, idempotent n × n matrices with A_1A_2 = 0 (i.e. they are orthogonal). Then ZᵀA_1Z and ZᵀA_2Z are independent.

Proof. Let W_i = A_iZ for i = 1, 2, and

    W = (W_1, W_2)ᵀ = (A_1, A_2)ᵀ Z.

Then

    W ~ N_{2n}( (0, 0)ᵀ, σ² [ A_1  0
                              0    A_2 ] ),

since the off-diagonal blocks are σ²A_1ᵀA_2 = σ²A_1A_2 = 0. So W_1 and W_2 are independent, which implies

    W_1ᵀW_1 = ZᵀA_1ᵀA_1Z = ZᵀA_1A_1Z = ZᵀA_1Z

and

    W_2ᵀW_2 = ZᵀA_2ᵀA_2Z = ZᵀA_2A_2Z = ZᵀA_2Z

are independent.

Simple linear regression

One-way analysis of variance with equal numbers in each group
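The reflection property of the F distribution has the observable consequence P(F_{m,n} > x) = P(F_{n,m} < 1/x), which a simulation can check by drawing each F variable directly from its definition as a ratio of scaled chi-squareds (a sketch; the degrees of freedom and threshold are arbitrary choices of ours).

```python
# Empirical check that P(F_{m,n} > x) equals P(F_{n,m} < 1/x), using
# F_{m,n} = (chi2_m / m) / (chi2_n / n).
import random

random.seed(6)

def chi2(k):
    return sum(random.gauss(0, 1) ** 2 for _ in range(k))

def f_draw(m, n):
    return (chi2(m) / m) / (chi2(n) / n)

m, n, reps, x = 5, 7, 40000, 2.0
p_left = sum(f_draw(m, n) > x for _ in range(reps)) / reps
p_right = sum(f_draw(n, m) < 1 / x for _ in range(reps)) / reps
assert abs(p_left - p_right) < 0.015
```

The same identity is what lets statistical tables print only upper quantiles of the F distribution.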
More information18.S096 Problem Set 3 Fall 2013 Regression Analysis Due Date: 10/8/2013
18.S096 Problem Set 3 Fall 013 Regression Analysis Due Date: 10/8/013 he Projection( Hat ) Matrix and Case Influence/Leverage Recall the setup for a linear regression model y = Xβ + ɛ where y and ɛ are
More informationFoundations of Statistical Inference
Foundations of Statistical Inference Jonathan Marchini Department of Statistics University of Oxford MT 2013 Jonathan Marchini (University of Oxford) BS2a MT 2013 1 / 27 Course arrangements Lectures M.2
More informationTesting Statistical Hypotheses
E.L. Lehmann Joseph P. Romano Testing Statistical Hypotheses Third Edition 4y Springer Preface vii I Small-Sample Theory 1 1 The General Decision Problem 3 1.1 Statistical Inference and Statistical Decisions
More informationPeter Hoff Linear and multilinear models April 3, GLS for multivariate regression 5. 3 Covariance estimation for the GLM 8
Contents 1 Linear model 1 2 GLS for multivariate regression 5 3 Covariance estimation for the GLM 8 4 Testing the GLH 11 A reference for some of this material can be found somewhere. 1 Linear model Recall
More informationMaster s Written Examination
Master s Written Examination Option: Statistics and Probability Spring 016 Full points may be obtained for correct answers to eight questions. Each numbered question which may have several parts is worth
More informationDistributions of Quadratic Forms. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 31
Distributions of Quadratic Forms Copyright c 2012 Dan Nettleton (Iowa State University) Statistics 611 1 / 31 Under the Normal Theory GMM (NTGMM), y = Xβ + ε, where ε N(0, σ 2 I). By Result 5.3, the NTGMM
More informationECE 275B Homework # 1 Solutions Winter 2018
ECE 275B Homework # 1 Solutions Winter 2018 1. (a) Because x i are assumed to be independent realizations of a continuous random variable, it is almost surely (a.s.) 1 the case that x 1 < x 2 < < x n Thus,
More informationSTAT 512 sp 2018 Summary Sheet
STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}
More informationMultiple Linear Regression
Multiple Linear Regression Simple linear regression tries to fit a simple line between two variables Y and X. If X is linearly related to Y this explains some of the variability in Y. In most cases, there
More informationMathematical statistics
October 1 st, 2018 Lecture 11: Sufficient statistic Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation
More informationSo far our focus has been on estimation of the parameter vector β in the. y = Xβ + u
Interval estimation and hypothesis tests So far our focus has been on estimation of the parameter vector β in the linear model y i = β 1 x 1i + β 2 x 2i +... + β K x Ki + u i = x iβ + u i for i = 1, 2,...,
More informationRegression Review. Statistics 149. Spring Copyright c 2006 by Mark E. Irwin
Regression Review Statistics 149 Spring 2006 Copyright c 2006 by Mark E. Irwin Matrix Approach to Regression Linear Model: Y i = β 0 + β 1 X i1 +... + β p X ip + ɛ i ; ɛ i iid N(0, σ 2 ), i = 1,..., n
More informationStat 5102 Final Exam May 14, 2015
Stat 5102 Final Exam May 14, 2015 Name Student ID The exam is closed book and closed notes. You may use three 8 1 11 2 sheets of paper with formulas, etc. You may also use the handouts on brand name distributions
More informationUnbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.
Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it
More informationsimple if it completely specifies the density of x
3. Hypothesis Testing Pure significance tests Data x = (x 1,..., x n ) from f(x, θ) Hypothesis H 0 : restricts f(x, θ) Are the data consistent with H 0? H 0 is called the null hypothesis simple if it completely
More informationMaster s Written Examination
Master s Written Examination Option: Statistics and Probability Spring 05 Full points may be obtained for correct answers to eight questions Each numbered question (which may have several parts) is worth
More information3. Probability and Statistics
FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important
More informationThis paper is not to be removed from the Examination Halls
~~ST104B ZA d0 This paper is not to be removed from the Examination Halls UNIVERSITY OF LONDON ST104B ZB BSc degrees and Diplomas for Graduates in Economics, Management, Finance and the Social Sciences,
More informationThe purpose of this section is to derive the asymptotic distribution of the Pearson chi-square statistic. k (n j np j ) 2. np j.
Chapter 9 Pearson s chi-square test 9. Null hypothesis asymptotics Let X, X 2, be independent from a multinomial(, p) distribution, where p is a k-vector with nonnegative entries that sum to one. That
More informationProblem Selected Scores
Statistics Ph.D. Qualifying Exam: Part II November 20, 2010 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. Problem 1 2 3 4 5 6 7 8 9 10 11 12 Selected
More informationMultivariate Distributions
IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate
More informationSummary of Chapter 7 (Sections ) and Chapter 8 (Section 8.1)
Summary of Chapter 7 (Sections 7.2-7.5) and Chapter 8 (Section 8.1) Chapter 7. Tests of Statistical Hypotheses 7.2. Tests about One Mean (1) Test about One Mean Case 1: σ is known. Assume that X N(µ, σ
More information1 Appendix A: Matrix Algebra
Appendix A: Matrix Algebra. Definitions Matrix A =[ ]=[A] Symmetric matrix: = for all and Diagonal matrix: 6=0if = but =0if 6= Scalar matrix: the diagonal matrix of = Identity matrix: the scalar matrix
More informationLarge Sample Properties of Estimators in the Classical Linear Regression Model
Large Sample Properties of Estimators in the Classical Linear Regression Model 7 October 004 A. Statement of the classical linear regression model The classical linear regression model can be written in
More informationBTRY 4090: Spring 2009 Theory of Statistics
BTRY 4090: Spring 2009 Theory of Statistics Guozhang Wang September 25, 2010 1 Review of Probability We begin with a real example of using probability to solve computationally intensive (or infeasible)
More informationProblem 1 (20) Log-normal. f(x) Cauchy
ORF 245. Rigollet Date: 11/21/2008 Problem 1 (20) f(x) f(x) 0.0 0.1 0.2 0.3 0.4 0.0 0.2 0.4 0.6 0.8 4 2 0 2 4 Normal (with mean -1) 4 2 0 2 4 Negative-exponential x x f(x) f(x) 0.0 0.1 0.2 0.3 0.4 0.5
More informationLecture 13: Simple Linear Regression in Matrix Format. 1 Expectations and Variances with Vectors and Matrices
Lecture 3: Simple Linear Regression in Matrix Format To move beyond simple regression we need to use matrix algebra We ll start by re-expressing simple linear regression in matrix form Linear algebra is
More informationEconometrics I KS. Module 2: Multivariate Linear Regression. Alexander Ahammer. This version: April 16, 2018
Econometrics I KS Module 2: Multivariate Linear Regression Alexander Ahammer Department of Economics Johannes Kepler University of Linz This version: April 16, 2018 Alexander Ahammer (JKU) Module 2: Multivariate
More informationMath 423/533: The Main Theoretical Topics
Math 423/533: The Main Theoretical Topics Notation sample size n, data index i number of predictors, p (p = 2 for simple linear regression) y i : response for individual i x i = (x i1,..., x ip ) (1 p)
More informationHANDBOOK OF APPLICABLE MATHEMATICS
HANDBOOK OF APPLICABLE MATHEMATICS Chief Editor: Walter Ledermann Volume VI: Statistics PART A Edited by Emlyn Lloyd University of Lancaster A Wiley-Interscience Publication JOHN WILEY & SONS Chichester
More informationStatistical Inference: Estimation and Confidence Intervals Hypothesis Testing
Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing 1 In most statistics problems, we assume that the data have been generated from some unknown probability distribution. We desire
More informationStat 5101 Lecture Notes
Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random
More informationTABLE OF CONTENTS CHAPTER 1 COMBINATORIAL PROBABILITY 1
TABLE OF CONTENTS CHAPTER 1 COMBINATORIAL PROBABILITY 1 1.1 The Probability Model...1 1.2 Finite Discrete Models with Equally Likely Outcomes...5 1.2.1 Tree Diagrams...6 1.2.2 The Multiplication Principle...8
More informationEXAMINERS REPORT & SOLUTIONS STATISTICS 1 (MATH 11400) May-June 2009
EAMINERS REPORT & SOLUTIONS STATISTICS (MATH 400) May-June 2009 Examiners Report A. Most plots were well done. Some candidates muddled hinges and quartiles and gave the wrong one. Generally candidates
More information