
MATHEMATICAL STATISTICS

Homework assignment

Instructions: Please turn in the homework with this cover page. You do not need to type up the solutions; just make sure the handwriting is legible. You may discuss the problems with your peers, but the final solutions should be your own work. There is no specific deadline, but you need to complete everything to get the grade.

Statement: With my signature I confirm that the solutions are the product of my own work.

Name:                Signature:

Random variables and random vectors

1. Suppose $Z_1, Z_2, \dots, Z_n$ are independent, identically distributed random variables having the Beta$(1, q)$ distribution.

a. Define
   $X_1 = Z_1$, $X_2 = Z_2(1 - Z_1)$, $X_3 = Z_3(1 - Z_2)(1 - Z_1)$.
   Find the joint distribution of $X_1$, $X_2$ and $X_3$. Hint: Consider taking logarithms.

b. More generally, define
   $X_i = Z_i \prod_{j=1}^{i-1} (1 - Z_j)$ for $i = 1, 2, \dots, n$.
   Find the joint distribution of $(X_1, X_2, \dots, X_n)$.

2. Let $X_1, X_2, \dots, X_n, X_{n+1}$ be random variables such that $E(X_k) = 0$ for $k = 1, \dots, n+1$ and with covariance matrix $\Sigma$ (an $(n+1) \times (n+1)$ matrix). We would like to find the best linear predictor of $X_{n+1}$ based on the variables $X_1, X_2, \dots, X_n$, i.e. the linear combination
   $\hat X_{n+1} = b_0 + b_1 X_1 + \dots + b_n X_n$
   for which the expected square error
   $E(X_{n+1} - \hat X_{n+1})^2$
   is as small as possible. Find the coefficients $b_0, b_1, \dots, b_n$. Hint: Write the square error as a function of $b_0, b_1, \dots, b_n$ and use partial derivatives.

3. Let $X$ and $Y$ be random variables with density
   $f_{X,Y}(x, y) = x e^{-x(y+1)}$ for $x, y \ge 0$.

a. Find the conditional densities $f_{X|Y=y}(x)$ and $f_{Y|X=x}(y)$.

b. Find $E(X|Y)$ and $E(Y|X)$ and check that $E(Xg(Y)) = E(E(X|Y)g(Y))$ and $E(Yg(X)) = E(E(Y|X)g(X))$ for an arbitrary bounded function $g$.
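A quick Monte Carlo sanity check for problem 1 (an illustration, not part of the assignment, with made-up values of $q$ and $n$): by independence, $E(X_i) = E(Z)\,E(1-Z)^{i-1} = \frac{1}{1+q}\left(\frac{q}{1+q}\right)^{i-1}$, which a simulation of the stick-breaking construction should reproduce.

```python
import numpy as np

# Check E(X_i) = (1/(1+q)) * (q/(1+q))^(i-1) for the stick-breaking
# construction X_i = Z_i * prod_{j<i} (1 - Z_j) with Z_i ~ Beta(1, q).
rng = np.random.default_rng(0)
q, n, reps = 3.0, 4, 200_000   # arbitrary illustrative values

Z = rng.beta(1.0, q, size=(reps, n))
# cumulative product of (1 - Z_j) for j < i, with a leading column of ones
stick = np.cumprod(np.hstack([np.ones((reps, 1)), 1.0 - Z[:, :-1]]), axis=1)
X = Z * stick

empirical = X.mean(axis=0)
theory = (1.0 / (1.0 + q)) * (q / (1.0 + q)) ** np.arange(n)
print(empirical, theory)
```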

4. The conditional variance of $Y$ given $(X_1, X_2, \dots, X_n)$ is defined as
   $v(x_1, \dots, x_n) = \mathrm{var}(Y \mid X_1 = x_1, X_2 = x_2, \dots, X_n = x_n) = \int (y - \psi(x_1, \dots, x_n))^2 f_{Y|X=x}(y)\, dy$,
   where $\psi(x_1, \dots, x_n) = E(Y \mid X_1 = x_1, \dots, X_n = x_n)$. We can interpret the conditional variance as a random variable given as
   $\mathrm{var}(Y \mid X_1, \dots, X_n) = v(X_1, X_2, \dots, X_n)$.

a. Show that
   $v(X_1, X_2, \dots, X_n) = E\left[ (Y - E(Y \mid X_1, \dots, X_n))^2 \mid X_1, X_2, \dots, X_n \right]$.

b. Convince yourself that
   $\mathrm{var}(Y) = E(\mathrm{var}(Y \mid X_1, \dots, X_n)) + \mathrm{var}(E(Y \mid X_1, \dots, X_n))$.
   Can you explain this formula in words?

Multivariate normal distribution

5. Suppose $Z$ is a random vector whose components are independent standard normal random variables and let $A$ be a rectangular matrix such that $AA^T$ is invertible. Prove that the density of $X = AZ + \mu$ is still given by the formula
   $f_X(x) = \frac{1}{(2\pi)^{n/2} \sqrt{\det(AA^T)}} \exp\left( -\tfrac12 (x-\mu)^T (AA^T)^{-1} (x-\mu) \right)$.

6. Suppose $X$ is a multivariate normal vector with expectation $0$ and variance $\Sigma$. Write
   $X = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix}$ and $\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}$.
   Assume $\Sigma$ is invertible. Compute the conditional density of $X_2$ given $X_1 = x_1$ by using the usual formula
   $f_{X_2|X_1=x_1}(x_2) = \frac{f_X(x)}{f_{X_1}(x_1)}$.
   Hint: Use the inversion lemma
   $\Sigma^{-1} = \begin{pmatrix} (\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21})^{-1} & -(\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21})^{-1}\Sigma_{12}\Sigma_{22}^{-1} \\ -\Sigma_{22}^{-1}\Sigma_{21}(\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21})^{-1} & (\Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12})^{-1} \end{pmatrix}$.
   Compare this proof to the slicker one using independence of linear transformations of multivariate normal vectors. Comment.
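A numerical illustration related to problem 6 (a sketch with a made-up covariance matrix, not a proof): the conditional law of the remaining components given the first one is normal with mean $\Sigma_{21}\Sigma_{11}^{-1}x_1$ and covariance $\Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}$, which a crude conditioning-by-windowing simulation should reproduce.

```python
import numpy as np

# Compare the block formulas for the conditional mean and covariance of a
# centered multivariate normal against a windowed Monte Carlo estimate.
rng = np.random.default_rng(1)
Sigma = np.array([[2.0, 0.8, 0.3],    # arbitrary positive definite example
                  [0.8, 1.5, 0.5],
                  [0.3, 0.5, 1.0]])
S11, S12 = Sigma[:1, :1], Sigma[:1, 1:]
S21, S22 = Sigma[1:, :1], Sigma[1:, 1:]

x1 = 0.7
cond_mean = (S21 @ np.linalg.solve(S11, np.array([x1]))).ravel()
cond_cov = S22 - S21 @ np.linalg.solve(S11, S12)

# crude Monte Carlo: keep draws whose first coordinate is close to x1
L = np.linalg.cholesky(Sigma)
draws = rng.standard_normal((2_000_000, 3)) @ L.T
near = draws[np.abs(draws[:, 0] - x1) < 0.02][:, 1:]
print(near.mean(axis=0), cond_mean)
```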

7. Suppose $X \sim N_p(\mu, \Sigma)$ and that the matrix $Q\Sigma Q^T$ is an invertible $q \times q$ matrix. Show that the conditional distribution of $X$ given $QX = q$ is normal with conditional mean
   $\mu + \Sigma Q^T (Q\Sigma Q^T)^{-1}(q - Q\mu)$
   and conditional (singular) covariance matrix
   $\Sigma - \Sigma Q^T (Q\Sigma Q^T)^{-1} Q\Sigma$.

8. Suppose $X$ and $Y$ are $p$-dimensional random vectors such that
   $\begin{pmatrix} X \\ Y \end{pmatrix} \sim N_{2p}(0, \Sigma)$
   where the covariance matrix is of the form
   $\Sigma = \begin{pmatrix} I & \rho 1 1^T \\ \rho 1 1^T & I \end{pmatrix}$.
   The matrix $I$ represents the $p \times p$ identity matrix, $1 = (1, 1, \dots, 1)^T$ and $\rho$ is a scalar constant such that $|\rho| \le 1/\sqrt{p(p-1)}$.

a. Compute $E(X^T X)$.

b. Compute $E(X^T X \mid Y)$.

9. Let $A$ and $B$ be $p \times p$ symmetric idempotent matrices of rank $r$ and $s$ with $AB = 0$, and let $X \sim N_p(0, \sigma^2 I)$.

a. Show that
   $\dfrac{X^T A X / r}{X^T B X / s} \sim F_{r,s}$.

b. Show that
   $\dfrac{X^T A X}{X^T (A + B) X} \sim B(r/2, s/2)$.

c. Show that
   $\dfrac{(p-r)\, X^T A X}{r\, X^T (I - A) X} \sim F_{r, p-r}$.

Central limit theorem
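A simulation sketch for problem 9a (an illustration with hypothetical matrices, not a proof): projections onto orthogonal subspaces of dimensions $r$ and $s$ are symmetric, idempotent, and satisfy $AB = 0$, and the ratio of the normalized quadratic forms should behave like an $F(r, s)$ variable.

```python
import numpy as np

# Build orthogonal projections A (rank r) and B (rank s) from a random
# orthonormal basis, then check the mean of (X'AX/r)/(X'BX/s) against the
# F(r, s) mean s/(s-2).
rng = np.random.default_rng(2)
p, r, s = 12, 3, 6   # arbitrary illustrative dimensions
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
A = Q[:, :r] @ Q[:, :r].T
B = Q[:, r:r + s] @ Q[:, r:r + s].T

X = rng.standard_normal((200_000, p))
num = np.einsum('ij,jk,ik->i', X, A, X) / r
den = np.einsum('ij,jk,ik->i', X, B, X) / s
F = num / den
print(F.mean())   # F(r, s) has mean s/(s-2) for s > 2
```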

10. The central limit theorem is also valid for vectors. For roulette we can code the outcomes in vectors $\xi^1, \xi^2, \dots$ such that for $i = 1, 2, \dots, 37$
   $\xi^n_i = \begin{cases} 1 & \text{if in spin } n \text{ the outcome is } i-1, \\ 0 & \text{else.} \end{cases}$
   By the central limit theorem one has
   $\dfrac{\xi^1 + \dots + \xi^n - nE(\xi^1)}{\sqrt n} \xrightarrow{d} N(0, \Sigma)$.

a. How can one interpret the expressions $\dfrac{\xi^1 + \dots + \xi^n - nE(\xi^1)}{\sqrt n}$?

b. Find $E(\xi^1)$ and $\mathrm{var}(\xi^1) = \Sigma$.

c. Usually one would use the $\chi^2$-statistic to test whether the wheel is biased. One defines
   $\chi^2 = \sum_{j=0}^{36} \dfrac{(n_j - np)^2}{np}$
   where $p = 1/37$, $n_j$ is the number of occurrences of outcome $j$ and $n$ is the number of spins. Use b. to prove that the distribution of $\chi^2$ is approximately $\chi^2(36)$.

Parameter estimation

11. The log-normal distribution has the density
   $f_X(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma x}\, e^{-(\log x - \mu)^2/(2\sigma^2)}$ for $x > 0$.

a. Assume that $\sigma$ is known and you have i.i.d. observations $X_1, X_2, \dots, X_n$. Find the maximum likelihood estimate for $\mu$.

b. Find the approximate standard error of your estimator.

12. The Pareto distribution with parameters $\alpha$ and $\lambda$ has density
   $f(x, \alpha, \lambda) = \dfrac{\alpha \lambda^\alpha}{(\lambda + x)^{\alpha+1}}$ for $x > 0$, where $\alpha, \lambda > 0$.
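A simulation sketch for problem 10c (an illustration, not a proof, with made-up spin counts): for a fair wheel the Pearson statistic over the 37 outcomes is approximately $\chi^2(36)$, so in particular its mean should be close to 36.

```python
import numpy as np

# Simulate many blocks of fair roulette spins and compute the Pearson
# chi-square statistic for each block; its average should be near 36.
rng = np.random.default_rng(3)
n_spins, reps = 5_000, 2_000   # arbitrary illustrative sizes
p = np.full(37, 1 / 37)

counts = rng.multinomial(n_spins, p, size=reps)          # n_j for each block
chi2 = ((counts - n_spins * p) ** 2 / (n_spins * p)).sum(axis=1)
print(chi2.mean())   # should be close to 36
```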

a. Write down the equations for the MLE of the parameters given i.i.d. observations $x_1, x_2, \dots, x_n$.

b. Compute the approximate standard error for the MLE of $\alpha$.

13. Let $X_1, X_2, \dots, X_n$ be an i.i.d. sample from the inverse Gaussian distribution $I(\mu, \tau)$ with density
   $\sqrt{\dfrac{\tau}{2\pi x^3}} \exp\left\{ -\dfrac{\tau (x - \mu)^2}{2 x \mu^2} \right\}$, $x > 0$, $\tau > 0$, $\mu > 0$.
   The expectation of the inverse Gaussian distribution is $E(X_1) = \mu$. Assume that all densities are smooth enough to apply the asymptotic theorems.

a. (10) Find the MLE for $(\mu, \tau)$ based on observations $x_1, \dots, x_n$.

b. (10) Compute the Fisher information matrix $I(\mu, \tau)$.

c. (5) Give a formula for the approximate 95% confidence interval for $\mu$ based on $x_1, x_2, \dots, x_n$.

14. Suppose $X = (X_1, X_2, \dots, X_n) \sim N(\mu, \sigma^2 \Sigma)$ where $\sigma^2$ is an unknown parameter and $\Sigma$ is a known invertible matrix.

a. Suppose the expectation $\mu$ is known and you have one observation $X_1$. How would you estimate $\sigma^2$? Is your estimate unbiased? What is the variance of the estimate you found? Hint: What is the distribution of $\Sigma^{-1/2} X$?

b. How would you go about the questions in a. if $\mu$ was not known but you knew that all components of $\mu$ were the same?

Hypothesis testing

15. We have observations $X_1, X_2, \dots, X_n$ from the normal distribution $N(\mu, \sigma^2)$. We would like to test $H_0 : \mu = 0$ versus $H_1 : \mu \ne 0$.

a. One can test $H_0$ at significance level $\alpha$ in two ways:
   - $H_0$ is rejected if $|\bar X| > c$ for a suitable $c$.
   - One estimates $\mu$ and $\sigma^2$ and sets up a confidence interval. If the confidence interval does not cover 0 the null hypothesis is rejected.
   Are the above tests the same? Comment. What is the answer if we assume that the parameter $\sigma$ is known?
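A numerical sanity check for problem 13a (this quotes the textbook closed form for the inverse Gaussian MLE without proof, so it is a check, not a derivation): $\hat\mu$ is the sample mean and $1/\hat\tau = \frac{1}{n}\sum_i (1/x_i - 1/\hat\mu)$.

```python
import numpy as np

# Sample from the inverse Gaussian (numpy's Wald distribution with mean mu
# and scale tau) and evaluate the closed-form MLEs on made-up parameters.
rng = np.random.default_rng(4)
mu, tau, n = 2.0, 3.0, 200_000   # arbitrary illustrative values

x = rng.wald(mu, tau, size=n)
mu_hat = x.mean()
tau_hat = 1.0 / np.mean(1.0 / x - 1.0 / mu_hat)
print(mu_hat, tau_hat)
```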

b. Find the likelihood ratio statistic for the above testing problem in both cases, when $\sigma$ is known and when $\sigma$ is unknown.

16. Bartlett's test is a commonly used test for equal variances. The setting assumes that the observations $\{x_{ij}\}$ for $i = 1, 2, \dots, k$ and $j = 1, 2, \dots, n_i$ are realizations of independent random variables $X_{ij} \sim N(\mu_i, \sigma_i^2)$. One tests
   $H_0 : \sigma_1^2 = \sigma_2^2 = \dots = \sigma_k^2$
   versus
   $H_1 : \text{the } \sigma_i^2 \text{ are not all equal.}$
   Assume we have samples of size $n_i$ from the $i$-th population, $i = 1, 2, \dots, k$, and the usual variance estimates $s_1^2, s_2^2, \dots, s_k^2$ from each sample, where
   $s_i^2 = \dfrac{1}{n_i - 1} \sum_{j=1}^{n_i} (x_{ij} - \bar x_i)^2$.
   Introduce the following notation:
   $\nu_i = n_i - 1$, $\nu = \sum_{i=1}^k \nu_i$ and $s^2 = \dfrac{1}{\nu} \sum_{i=1}^k \nu_i s_i^2$.
   Bartlett's test statistic $M$ is defined by
   $M = \nu \log s^2 - \sum_{i=1}^k \nu_i \log s_i^2$.

a. The approximate distribution of Bartlett's $M$ is $\chi^2(r)$. What, in your opinion, is $r$? Explain why.

b. Assume that the maximum likelihood estimates for the parameters $\mu_i$ and $\sigma_i^2$ are
   $\hat\mu_i = \bar x_i = \dfrac{1}{n_i} \sum_{j=1}^{n_i} x_{ij}$ and $\hat\sigma_i^2 = \dfrac{1}{n_i} \sum_{j=1}^{n_i} (x_{ij} - \bar x_i)^2$

for $i = 1, 2, \dots, k$. Write down the likelihood ratio statistic for the testing problem in question. What is its approximate distribution? Any similarity to Bartlett's test? Comment. Hint: If you assume $\sigma_1^2 = \sigma_2^2 = \dots = \sigma_k^2$, the MLE estimates for $\mu_i$ are still the means $\bar x_i$ for $i = 1, 2, \dots, k$.

17. The one-sample Wilcoxon test is used to test whether a continuous distribution is symmetric. On the basis of $n$ i.i.d. observations $X_1, X_2, \dots, X_n$ from an unknown continuous distribution $F$ one tests the hypothesis
   $H_0 : F(x) = 1 - F(-x)$ for all $x$
   versus
   $H_1 : F(x) < 1 - F(-x)$ for some $x$.
   Let $R_i$ be the rank of $|X_i|$ among $|X_1|, |X_2|, \dots, |X_n|$. The test is based on the statistic
   $W = \sum_{i=1}^n 1(X_i > 0) R_i$,
   i.e. the sum of the ranks of the positive $X_i$'s.

a. Show that if $H_0$ is true, $W$ has a distribution that does not depend on $F$.

b. Show that
   $W = \sum_{1 \le i \le j \le n} 1(X_i + X_j > 0)$.

c. Show that if $H_0$ is true then $E(W) = n(n+1)/4$.

d. Compute the variance of $W$.

e. How would you find critical values for testing $H_0$?
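A simulation sketch for problem 17c (an illustration with one made-up symmetric distribution, not a proof): under symmetry the signed-rank statistic $W$ should have expectation $n(n+1)/4$ whatever $F$ is.

```python
import numpy as np

# Simulate W = sum of ranks of |X_i| over positive X_i for a symmetric F
# (here a t distribution) and compare its mean to n(n+1)/4.
rng = np.random.default_rng(5)
n, reps = 20, 100_000

x = rng.standard_t(df=5, size=(reps, n))                 # symmetric about 0
ranks = np.abs(x).argsort(axis=1).argsort(axis=1) + 1    # ranks of |X_i|
W = ((x > 0) * ranks).sum(axis=1)
print(W.mean())   # should be close to n*(n+1)/4 = 105
```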

On the following pages you will find the take-home finals from previous years. Do two of the three as part of your homework.

MATHEMATICAL STATISTICS

Final take-home examination, April 8th - April 15th, 2013

Instructions: You do not need to type up the solutions. Just make sure the handwriting is legible. The final solutions should be your own work. The deadline for completion is April 15th, 2013 by 4pm. Turn in your solutions to Petra Vranješ. For any questions contact me by e-mail.

Statement: With my signature I confirm that the solutions are the product of my own work.

Name:                Signature:

1. (25) Suppose a population of size $N$ is divided into $K = N/M$ groups of size $M$. We select a sample of size $km$ in the following way: first we select $k$ groups out of the $K$ groups by simple random sampling with replacement; we then select $m$ units in each group selected in the first step by simple random sampling with replacement. The estimate of the population mean is the average $\bar Y$ of the sample. Let $\mu_i$ be the population average in the $i$-th group for $i = 1, 2, \dots, K$. Let
   $\sigma_u^2 = \dfrac{1}{K} \sum_{i=1}^K (\mu_i - \mu)^2$,
   where $\mu = \sum_{i=1}^K \mu_i / K$. Let
   $\sigma_w^2 = \dfrac{1}{N} \sum_{i=1}^K \sum_{j=1}^M (y_{ij} - \mu_i)^2$,
   where $y_{ij}$ denotes the value of the variable for the $j$-th unit in the $i$-th group.

a. Let $k = 1$. Show that we can write the estimator as
   $\bar Y = \sum_{i=1}^K I_i Y_i$,
   where
   $I_i = \begin{cases} 1 & \text{if the } i\text{-th group is selected,} \\ 0 & \text{otherwise,} \end{cases}$
   and $\mathrm{var}(Y_i) = \sigma_i^2 / m$, where $\sigma_i^2$ is the population variance of the $i$-th subgroup. Argue that it is reasonable to assume that the $Y_i$ and $I_i$ are all independent. Compute $\mathrm{var}(\bar Y)$.

b. If we repeat the procedure we get independent estimators $\bar Y_1, \bar Y_2, \dots, \bar Y_k$, and estimate the population average by
   $\bar Y = \dfrac{1}{k} \sum_{j=1}^k \bar Y_j$.
   Show that
   $\mathrm{var}(\bar Y) = \dfrac{\sigma_u^2}{k} + \dfrac{\sigma_w^2}{km}$.
   Argue that this expression is the variance of the estimator described in the introduction.

c. The assumption that we sample with replacement is unrealistic. Let $k = 1$ and assume that the sample of size $m$ is selected by simple random sampling without replacement. Argue that
   $\bar Y = \sum_{i=1}^K I_i Y_i$,
   where
   $I_i = \begin{cases} 1 & \text{if we select the } i\text{-th subgroup,} \\ 0 & \text{otherwise.} \end{cases}$
   Compute the variance of the estimator in this case.

d. Assume that the $k$ groups are selected by simple random sampling without replacement. In this case the estimator is
   $\bar Y = \dfrac{1}{k} \sum_{i=1}^K I_i Y_i$,
   with $I_i$ as in c. Argue that it is reasonable to assume that $I_1, \dots, I_K$ and $Y_1, \dots, Y_K$ are independent. Compute the standard error of the estimator.

e. Explain why the sampling distribution in d. is approximately normal.

2. (25) Suppose $\{p(x, \theta),\ \theta \in \Theta \subseteq \mathbb{R}^k\}$ is a (regular) family of distributions. Define the vector-valued score function $s$ as the column vector with components
   $s(x, \theta) = \dfrac{\partial}{\partial \theta} \log(p(x, \theta)) = \mathrm{grad}(\log(p(x, \theta)))$
   and the Fisher information matrix as $I(\theta) = \mathrm{var}(s)$. Remark: If $p(x, \theta) = 0$ define $\log(p(x, \theta)) = 0$.

a. Let $t(x)$ be an unbiased estimator of $\theta$ based on the likelihood function, i.e.
   $E_\theta(t(x)) = \theta$.
   Prove that
   $E(s) = 0$ and $E(s t^T) = I$.
   Deduce that $\mathrm{cov}(s, t) = I$.
   Remark: Make liberal assumptions about interchanging integration and differentiation.

b. Let $a, c$ be two arbitrary $k$-dimensional vectors. Prove that
   $\mathrm{corr}^2(a^T t, c^T s) = \dfrac{(a^T c)^2}{a^T \mathrm{var}(t) a \cdot c^T I(\theta) c}$.
   The squared correlation coefficient is always less than or equal to 1. Maximize the expression for the correlation coefficient over $c$ and deduce the Rao-Cramér inequality.

3. (25) Suppose $X_1, X_2, \dots, X_n$ are i.i.d. observations from a multivariate normal distribution $N(\mu, \Sigma)$ where $\Sigma$ is known. Further assume that $R$ is a given matrix and $r$ a given vector. Use the likelihood ratio procedure to produce a test statistic for
   $H_0 : R\mu = r$ vs. $H_1 : R\mu \ne r$.
   Give explicit formulae for the test statistic and the critical values.

4. (25) Let $Y = X\beta + \epsilon$ be a linear model where we assume $E(\epsilon) = 0$ and $\mathrm{var}(\epsilon) = \sigma^2 \Sigma$ for a known invertible matrix $\Sigma$.

a. Show that the BLUE for $\beta$ is given by
   $\hat\beta = (X^T \Sigma^{-1} X)^{-1} X^T \Sigma^{-1} Y$.
   Assume that $X^T \Sigma^{-1} X$ is invertible and use the Gauss-Markov theorem.

b. Assume that the linear model is of the form
   $Y_{kl} = \alpha + \beta x_{kl} + u_k + \epsilon_{kl}$, for $k = 1, 2, \dots, K$ and $l = 1, 2, \dots, L_k$,
   where the $\epsilon_{kl}$ are $N(0, \sigma^2)$, the $u_k$ are $N(0, \tau^2)$ and all random quantities are independent. Assume that the ratio $\tau^2/\sigma^2$ is known. Show that the BLUE is given by
   $\begin{pmatrix} \hat\alpha \\ \hat\beta \end{pmatrix} = \begin{pmatrix} \sum w_k & \sum w_k \bar x_k \\ \sum w_k \bar x_k & S_{xx} + \sum w_k \bar x_k^2 \end{pmatrix}^{-1} \begin{pmatrix} \sum w_k \bar y_k \\ S_{xy} + \sum w_k \bar x_k \bar y_k \end{pmatrix}$,
   where
   $w_k = L_k \sigma^2 / (\sigma^2 + L_k \tau^2)$,
   $S_{xx} = \sum_k \sum_l (x_{kl} - \bar x_k)^2$,
   $S_{xy} = \sum_k \sum_l (x_{kl} - \bar x_k)(y_{kl} - \bar y_k)$.
   Hint: For $c \ne -1/n$ one has $(I + c 1 1^T)^{-1} = I - c(1 + nc)^{-1} 1 1^T$ where $1^T = (1, 1, \dots, 1)$.

c. What would you do if the ratio $\tau^2/\sigma^2$ were unknown?

d. How would you test the hypothesis $H_0 : \beta = 0$ versus $H_1 : \beta \ne 0$? What is the distribution of the test statistic under the null hypothesis?
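A numerical illustration for problem 4a (a sketch with made-up design, covariance, and coefficients, not a proof): the GLS estimator $(X^T\Sigma^{-1}X)^{-1}X^T\Sigma^{-1}Y$ is unbiased, and by Gauss-Markov its variance is no larger than that of ordinary least squares.

```python
import numpy as np

# Compare GLS and OLS under a known AR(1)-style error covariance: check
# unbiasedness by simulation and variance dominance via the exact formulas.
rng = np.random.default_rng(6)
n = 50
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
beta = np.array([1.0, 2.0])   # arbitrary true coefficients

rho = 0.6
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Si = np.linalg.inv(Sigma)

# exact covariance matrices of the two estimators
V_gls = np.linalg.inv(X.T @ Si @ X)
XtX_inv = np.linalg.inv(X.T @ X)
V_ols = XtX_inv @ X.T @ Sigma @ X @ XtX_inv

# Monte Carlo check of unbiasedness of GLS
L = np.linalg.cholesky(Sigma)
eps = rng.standard_normal((100_000, n)) @ L.T
Y = X @ beta + eps
b_gls = Y @ (Si @ X @ V_gls)   # each row equals (X' Si X)^-1 X' Si y
print(b_gls.mean(axis=0))
```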

MATHEMATICAL STATISTICS

Final take-home examination, May 7th - May 16th, 2014

Instructions: You do not need to type up the solutions. Just make sure the handwriting is legible. The final solutions should be your own work. The deadline for completion is May 16th, 2014 by 4pm. Turn in your solutions to Petra Vranješ. For any questions contact me by e-mail.

Statement: With my signature I confirm that the solutions are the product of my own work.

Name:                Signature:

1. (25) Suppose a population of size $N$ is divided into $K = N/M$ groups of size $M$. We select a sample of size $n = km$ in the following way: first we select $k$ groups out of the $K$ groups by simple random sampling; we then select $m$ units in each group selected in the first step by simple random sampling. The estimate of the population mean is the average $\bar Y$ of the sample. Let $\mu_i$ be the population average in the $i$-th group for $i = 1, 2, \dots, K$, and let $\sigma_i^2$ be the population variance in the $i$-th group for $i = 1, 2, \dots, K$.

a. (10) Show that we can write the estimator as
   $\bar Y = \dfrac{1}{k} \sum_{i=1}^K \bar Y_i I_i$,
   where
   $I_i = \begin{cases} 1 & \text{if the } i\text{-th group is selected,} \\ 0 & \text{otherwise,} \end{cases}$
   and $\bar Y_i$ is the sample average in the $i$-th group for $i = 1, 2, \dots, K$. Argue that it is reasonable to assume that the random variables $\bar Y_1, \dots, \bar Y_K$ are independent and independent of $I_1, \dots, I_K$. Show that $\bar Y$ is an unbiased estimator of the population mean $\mu$ and show that the variance of $\bar Y$ is
   $\mathrm{var}(\bar Y) = \dfrac{M - m}{k(M-1)m} \cdot \dfrac{1}{K} \sum_{i=1}^K \sigma_i^2 + \dfrac{K - k}{k(K-1)} \cdot \dfrac{1}{K} \sum_{i=1}^K (\mu_i - \mu)^2$.

b. (15) Suggest an estimate for the quantity
   $\sigma_b^2 = \dfrac{1}{K} \sum_{i=1}^K (\mu_i - \mu)^2 = \dfrac{1}{K} \sum_{i=1}^K \mu_i^2 - \mu^2$.
   Is your estimate unbiased? Can you modify it to be an unbiased estimate?
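A simulation sketch for problem 1a (an illustration on a small made-up population, not a proof): the two-stage estimator $\frac{1}{k}\sum_i \bar Y_i I_i$ should be unbiased for the population mean when groups and units are drawn by simple random sampling without replacement.

```python
import numpy as np

# Build a toy population with group effects, repeatedly run the two-stage
# sampling scheme, and compare the average estimate to the population mean.
rng = np.random.default_rng(7)
K, M, k, m, reps = 8, 30, 3, 5, 20_000   # arbitrary illustrative sizes
pop = rng.normal(0.0, 1.0, size=(K, M)) + rng.normal(0.0, 2.0, size=(K, 1))
mu = pop.mean()

est = np.empty(reps)
for t in range(reps):
    groups = rng.choice(K, size=k, replace=False)        # first stage
    means = [rng.choice(pop[g], size=m, replace=False).mean() for g in groups]
    est[t] = np.mean(means)                              # second stage
print(est.mean(), mu)
```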

2. (25) Suppose $\Theta_1, \Theta_2, \dots, \Theta_n$ are i.i.d. random variables with values in $[0, 2\pi)$, each having the von Mises density
   $f(\theta; \mu, k) = \dfrac{1}{2\pi I_0(k)} \exp(k \cos(\theta - \mu))$
   for $0 \le \theta < 2\pi$, where $k \ge 0$ and $\mu \in [0, 2\pi]$ are the unknown parameters. $I_0$ is the modified Bessel function of the first kind and order 0. Suppose you have an i.i.d. sample $\theta_1, \theta_2, \dots, \theta_n$.

a. (10) Let $\nu = (\cos(\mu), \sin(\mu))$. Derive the MLE for $\nu$.

b. (5) Describe how you would find the MLE for $k$.

c. (10) Let $a = k\cos(\mu)$ and $b = k\sin(\mu)$ and let $\hat a_n$ and $\hat b_n$ be their respective MLEs based on $n$ i.i.d. observations. Show that the asymptotic distribution of $\sqrt n\,(\hat a_n - a, \hat b_n - b)$ is bivariate normal $N(0, \Sigma^{-1})$, where $\Sigma$ is the covariance matrix of the random vector $(\cos(\Theta_1), \sin(\Theta_1))$.
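A numerical sanity check related to problem 2a (this quotes the standard result for the von Mises mean direction without proof): the MLE of $\mu$ is the direction of the resultant vector, $\hat\mu = \operatorname{atan2}\big(\frac{1}{n}\sum \sin\theta_i,\ \frac{1}{n}\sum \cos\theta_i\big)$.

```python
import numpy as np

# Sample from a von Mises distribution with made-up parameters and recover
# the mean direction from the resultant vector.
rng = np.random.default_rng(8)
mu, kappa, n = 1.0, 2.0, 100_000   # arbitrary illustrative values

theta = rng.vonmises(mu, kappa, size=n)
mu_hat = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
print(mu_hat)
```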

3. (25) Suppose $X_1, X_2, \dots, X_n$ are i.i.d. observations from a multivariate normal distribution $N(\mu, \Sigma)$ where $\Sigma$ is known. Further assume that $a$ is a given vector. Use the likelihood ratio procedure to produce a test statistic for
   $H_0 : a^T \mu = 0$ vs. $H_1 : a^T \mu \ne 0$.

a. (15) Give explicit formulae for the test statistic and the critical values.

b. (10) What changes if the covariance matrix of the $X_i$ is of the form $\sigma^2 \Sigma$ with unknown $\sigma^2$ and known $\Sigma$?

4. (25) Let $Y = X\beta + \epsilon$ be a linear model where we assume $E(\epsilon) = 0$ and $\mathrm{var}(\epsilon) = \sigma^2 \Sigma$ for a known invertible matrix $\Sigma$.

a. (10) Show that the BLUE for $\beta$ is given by
   $\hat\beta = (X^T \Sigma^{-1} X)^{-1} X^T \Sigma^{-1} Y$.
   Assume that $X^T \Sigma^{-1} X$ is invertible and use the Gauss-Markov theorem.

b. (15) Assume that the linear model is of the form
   $Y_{kl} = \alpha + \beta x_{kl} + u_k + \epsilon_{kl}$, for $k = 1, 2, \dots, K$ and $l = 1, 2, \dots, L_k$,
   where the $\epsilon_{kl}$ are $N(0, \sigma^2)$, the $u_k$ are $N(0, \tau^2)$ and all random quantities are independent. Assume that the ratio $\tau^2/\sigma^2$ is known. Show that the BLUE is given by
   $\begin{pmatrix} \hat\alpha \\ \hat\beta \end{pmatrix} = \begin{pmatrix} \sum w_k & \sum w_k \bar x_k \\ \sum w_k \bar x_k & S_{xx} + \sum w_k \bar x_k^2 \end{pmatrix}^{-1} \begin{pmatrix} \sum w_k \bar y_k \\ S_{xy} + \sum w_k \bar x_k \bar y_k \end{pmatrix}$,
   where
   $w_k = L_k \sigma^2 / (\sigma^2 + L_k \tau^2)$,
   $S_{xx} = \sum_k \sum_l (x_{kl} - \bar x_k)^2$,
   $S_{xy} = \sum_k \sum_l (x_{kl} - \bar x_k)(y_{kl} - \bar y_k)$.
   Hint: For $c \ne -1/n$ one has $(I + c 1 1^T)^{-1} = I - c(1 + nc)^{-1} 1 1^T$ where $1^T = (1, 1, \dots, 1)$.

MATHEMATICAL STATISTICS

Final take-home examination, May 4th - May 12th, 2015

Instructions: You do not need to type up the solutions. Just make sure the handwriting is legible. The final solutions should be your own work. The deadline for completion is May 12th, 2015 by 4pm. Turn in your solutions to Petra Vranješ. For any questions contact me by e-mail or by phone.

Statement: With my signature I confirm that the solutions are the product of my own work.

Name:                Signature:

1. (25) Suppose a population of size $N$ is divided into $K = N/M$ groups of size $M$. We select a sample of size $km$ in the following way: first we select $k$ groups out of the $K$ groups by simple random sampling; we then select $m$ units in each group selected in the first step by simple random sampling. Samples in selected groups are assumed to be independent. Denote by $\mu$ the population average and by $\mu_i$ the population average in the $i$-th group. Similarly denote by $\sigma_i^2$ the population variance in the $i$-th group.

a. Suggest an estimate for the population average $\mu$. Is the estimate unbiased?

b. Derive the formula for the standard error of the estimate from a.

c. How would you estimate the quantity $\gamma = \sum_{i=1}^K (\mu_i - \mu)^2$? Is the estimate you suggest unbiased?

d. Give an estimate of the standard error based on the sample.

2. (25) Assume the data pairs $(y_1, z_1), \dots, (y_n, z_n)$ are an i.i.d. sample from the distribution with density
   $f(y, z, \theta, \sigma) = e^{-y} \cdot \dfrac{1}{\sqrt{2\pi y}\,\sigma}\, e^{-(z - \theta y)^2/(2 y \sigma^2)}$
   for $y > 0$ and $\sigma > 0$.

a. Find the maximum likelihood estimators of $\theta$ and $\sigma^2$. Are the estimators unbiased?

b. Find the exact standard errors of $\hat\theta$ and $\hat\sigma^2$.

c. Compute the Fisher information matrix.

d. Find the standard errors of the maximum likelihood estimators using the Fisher information matrix. Comment on your findings.

3. (25) Assume that the data $x_1, x_2, \dots, x_n$ are an i.i.d. sample from a multivariate normal distribution of the form
   $X_1 \sim N\left( \begin{pmatrix} \mu^{(1)} \\ \mu^{(2)} \end{pmatrix}, \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \right)$.
   Assume that the parameters $\mu$ and $\Sigma$ are unknown. Assume the following theorem: if $A$ ($p \times p$) is a given symmetric positive definite matrix, then the positive definite matrix $\Sigma$ that maximizes the expression
   $\dfrac{1}{\det(\Sigma)^{n/2}} \exp\left( -\tfrac12 \mathrm{Tr}\left( \Sigma^{-1} A \right) \right)$
   is the matrix $\Sigma = \frac{1}{n} A$. The testing problem is
   $H_0 : \Sigma_{12} = 0$ versus $H_1 : \Sigma_{12} \ne 0$.

a. Find the maximum likelihood estimates of $\mu$ and $\Sigma$ in the unconstrained case.

b. Find the maximum likelihood estimates of $\mu$ and $\Sigma$ in the constrained case.

c. Write the likelihood ratio statistic for the testing problem as explicitly as possible.

d. What can you say about the distribution of the likelihood ratio statistic?

4. (25) Assume the regression model
   $Y_{i1} = \alpha + \beta x_{i1} + \epsilon_i$,
   $Y_{i2} = \alpha + \beta x_{i2} + \eta_i$
   for $i = 1, 2, \dots, n$; in other words, the observations come in pairs. Assume that $E(\epsilon_i) = E(\eta_i) = 0$, $\mathrm{var}(\epsilon_i) = \mathrm{var}(\eta_i) = \sigma^2$ and $\mathrm{corr}(\epsilon_i, \eta_i) = \rho \in (-1, 1)$. Assume that the pairs $(\epsilon_1, \eta_1), \dots, (\epsilon_n, \eta_n)$ are uncorrelated. Furthermore assume that $\sum_{i=1}^n x_{i1} x_{i2} = 0$.

a. Assume that $\rho$ is known. Find the best linear unbiased estimates of the regression parameters $\alpha$ and $\beta$. Find an unbiased estimator of $\sigma^2$.

b. Assume that $\rho$ is unknown and let $\hat\alpha$ and $\hat\beta$ be the ordinary least squares estimators of the regression parameters. Compute the standard errors of the two estimators.

c. Let $\hat\epsilon_i$ and $\hat\eta_i$ be the residuals from ordinary least squares. Express
   $E\left[ \sum_{i=1}^n (\hat\epsilon_i + \hat\eta_i)^2 \right]$ and $E\left[ \sum_{i=1}^n \hat\epsilon_i \hat\eta_i \right]$
   in terms of the elements of the hat matrix $H$.

d. Give estimates of $\mathrm{var}(\hat\alpha)$ and $\mathrm{var}(\hat\beta)$. Are the estimators unbiased?
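A simulation sketch related to problems 4c-d (an illustration with made-up data, not a solution): for OLS residuals $e = (I - H)\epsilon$ one has $E\big(\sum e_i^2\big) = \sigma^2 (n - \mathrm{tr}(H))$, and $\mathrm{tr}(H)$ equals the number of regression parameters.

```python
import numpy as np

# Simulate a simple linear model, form the hat matrix, and check that the
# expected residual sum of squares matches sigma^2 * (n - tr(H)).
rng = np.random.default_rng(9)
n, sigma = 40, 1.3   # arbitrary illustrative values
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix

reps = 50_000
eps = sigma * rng.standard_normal((reps, n))
Y = X @ np.array([0.5, -1.0]) + eps
resid = Y - Y @ H.T                            # (I - H) applied to each row
rss = (resid ** 2).sum(axis=1)
print(np.trace(H), rss.mean())   # tr(H) = 2, E(rss) = sigma^2 * (n - 2)
```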


More information

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1 Inverse of a Square Matrix For an N N square matrix A, the inverse of A, 1 A, exists if and only if A is of full rank, i.e., if and only if no column of A is a linear combination 1 of the others. A is

More information

Lecture 13: Simple Linear Regression in Matrix Format. 1 Expectations and Variances with Vectors and Matrices

Lecture 13: Simple Linear Regression in Matrix Format. 1 Expectations and Variances with Vectors and Matrices Lecture 3: Simple Linear Regression in Matrix Format To move beyond simple regression we need to use matrix algebra We ll start by re-expressing simple linear regression in matrix form Linear algebra is

More information

where x and ȳ are the sample means of x 1,, x n

where x and ȳ are the sample means of x 1,, x n y y Animal Studies of Side Effects Simple Linear Regression Basic Ideas In simple linear regression there is an approximately linear relation between two variables say y = pressure in the pancreas x =

More information

Matrix Approach to Simple Linear Regression: An Overview

Matrix Approach to Simple Linear Regression: An Overview Matrix Approach to Simple Linear Regression: An Overview Aspects of matrices that you should know: Definition of a matrix Addition/subtraction/multiplication of matrices Symmetric/diagonal/identity matrix

More information

THE UNIVERSITY OF CHICAGO Graduate School of Business Business 41912, Spring Quarter 2008, Mr. Ruey S. Tsay. Solutions to Final Exam

THE UNIVERSITY OF CHICAGO Graduate School of Business Business 41912, Spring Quarter 2008, Mr. Ruey S. Tsay. Solutions to Final Exam THE UNIVERSITY OF CHICAGO Graduate School of Business Business 41912, Spring Quarter 2008, Mr. Ruey S. Tsay Solutions to Final Exam 1. (13 pts) Consider the monthly log returns, in percentages, of five

More information

Estimation theory. Parametric estimation. Properties of estimators. Minimum variance estimator. Cramer-Rao bound. Maximum likelihood estimators

Estimation theory. Parametric estimation. Properties of estimators. Minimum variance estimator. Cramer-Rao bound. Maximum likelihood estimators Estimation theory Parametric estimation Properties of estimators Minimum variance estimator Cramer-Rao bound Maximum likelihood estimators Confidence intervals Bayesian estimation 1 Random Variables Let

More information

Statistics & Data Sciences: First Year Prelim Exam May 2018

Statistics & Data Sciences: First Year Prelim Exam May 2018 Statistics & Data Sciences: First Year Prelim Exam May 2018 Instructions: 1. Do not turn this page until instructed to do so. 2. Start each new question on a new sheet of paper. 3. This is a closed book

More information

Multivariate Regression

Multivariate Regression Multivariate Regression The so-called supervised learning problem is the following: we want to approximate the random variable Y with an appropriate function of the random variables X 1,..., X p with the

More information

Master s Written Examination - Solution

Master s Written Examination - Solution Master s Written Examination - Solution Spring 204 Problem Stat 40 Suppose X and X 2 have the joint pdf f X,X 2 (x, x 2 ) = 2e (x +x 2 ), 0 < x < x 2

More information

ECE 275A Homework 6 Solutions

ECE 275A Homework 6 Solutions ECE 275A Homework 6 Solutions. The notation used in the solutions for the concentration (hyper) ellipsoid problems is defined in the lecture supplement on concentration ellipsoids. Note that θ T Σ θ =

More information

HT Introduction. P(X i = x i ) = e λ λ x i

HT Introduction. P(X i = x i ) = e λ λ x i MODS STATISTICS Introduction. HT 2012 Simon Myers, Department of Statistics (and The Wellcome Trust Centre for Human Genetics) myers@stats.ox.ac.uk We will be concerned with the mathematical framework

More information

Ma 3/103: Lecture 24 Linear Regression I: Estimation

Ma 3/103: Lecture 24 Linear Regression I: Estimation Ma 3/103: Lecture 24 Linear Regression I: Estimation March 3, 2017 KC Border Linear Regression I March 3, 2017 1 / 32 Regression analysis Regression analysis Estimate and test E(Y X) = f (X). f is the

More information

Regression and Statistical Inference

Regression and Statistical Inference Regression and Statistical Inference Walid Mnif wmnif@uwo.ca Department of Applied Mathematics The University of Western Ontario, London, Canada 1 Elements of Probability 2 Elements of Probability CDF&PDF

More information

18.S096 Problem Set 3 Fall 2013 Regression Analysis Due Date: 10/8/2013

18.S096 Problem Set 3 Fall 2013 Regression Analysis Due Date: 10/8/2013 18.S096 Problem Set 3 Fall 013 Regression Analysis Due Date: 10/8/013 he Projection( Hat ) Matrix and Case Influence/Leverage Recall the setup for a linear regression model y = Xβ + ɛ where y and ɛ are

More information

Central Limit Theorem ( 5.3)

Central Limit Theorem ( 5.3) Central Limit Theorem ( 5.3) Let X 1, X 2,... be a sequence of independent random variables, each having n mean µ and variance σ 2. Then the distribution of the partial sum S n = X i i=1 becomes approximately

More information

Lecture 3. Inference about multivariate normal distribution

Lecture 3. Inference about multivariate normal distribution Lecture 3. Inference about multivariate normal distribution 3.1 Point and Interval Estimation Let X 1,..., X n be i.i.d. N p (µ, Σ). We are interested in evaluation of the maximum likelihood estimates

More information

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it

More information

The purpose of this section is to derive the asymptotic distribution of the Pearson chi-square statistic. k (n j np j ) 2. np j.

The purpose of this section is to derive the asymptotic distribution of the Pearson chi-square statistic. k (n j np j ) 2. np j. Chapter 9 Pearson s chi-square test 9. Null hypothesis asymptotics Let X, X 2, be independent from a multinomial(, p) distribution, where p is a k-vector with nonnegative entries that sum to one. That

More information

1 Exercises for lecture 1

1 Exercises for lecture 1 1 Exercises for lecture 1 Exercise 1 a) Show that if F is symmetric with respect to µ, and E( X )

More information

UNIVERSITY OF TORONTO Faculty of Arts and Science

UNIVERSITY OF TORONTO Faculty of Arts and Science UNIVERSITY OF TORONTO Faculty of Arts and Science December 2013 Final Examination STA442H1F/2101HF Methods of Applied Statistics Jerry Brunner Duration - 3 hours Aids: Calculator Model(s): Any calculator

More information

5.1 Consistency of least squares estimates. We begin with a few consistency results that stand on their own and do not depend on normality.

5.1 Consistency of least squares estimates. We begin with a few consistency results that stand on their own and do not depend on normality. 88 Chapter 5 Distribution Theory In this chapter, we summarize the distributions related to the normal distribution that occur in linear models. Before turning to this general problem that assumes normal

More information

Asymptotic Statistics-III. Changliang Zou

Asymptotic Statistics-III. Changliang Zou Asymptotic Statistics-III Changliang Zou The multivariate central limit theorem Theorem (Multivariate CLT for iid case) Let X i be iid random p-vectors with mean µ and and covariance matrix Σ. Then n (

More information

Linear Methods for Prediction

Linear Methods for Prediction Chapter 5 Linear Methods for Prediction 5.1 Introduction We now revisit the classification problem and focus on linear methods. Since our prediction Ĝ(x) will always take values in the discrete set G we

More information

WLS and BLUE (prelude to BLUP) Prediction

WLS and BLUE (prelude to BLUP) Prediction WLS and BLUE (prelude to BLUP) Prediction Rasmus Waagepetersen Department of Mathematics Aalborg University Denmark April 21, 2018 Suppose that Y has mean X β and known covariance matrix V (but Y need

More information

Asymptotic Statistics-VI. Changliang Zou

Asymptotic Statistics-VI. Changliang Zou Asymptotic Statistics-VI Changliang Zou Kolmogorov-Smirnov distance Example (Kolmogorov-Smirnov confidence intervals) We know given α (0, 1), there is a well-defined d = d α,n such that, for any continuous

More information

Masters Comprehensive Examination Department of Statistics, University of Florida

Masters Comprehensive Examination Department of Statistics, University of Florida Masters Comprehensive Examination Department of Statistics, University of Florida May 6, 003, 8:00 am - :00 noon Instructions: You have four hours to answer questions in this examination You must show

More information

REGRESSION WITH SPATIALLY MISALIGNED DATA. Lisa Madsen Oregon State University David Ruppert Cornell University

REGRESSION WITH SPATIALLY MISALIGNED DATA. Lisa Madsen Oregon State University David Ruppert Cornell University REGRESSION ITH SPATIALL MISALIGNED DATA Lisa Madsen Oregon State University David Ruppert Cornell University SPATIALL MISALIGNED DATA 10 X X X X X X X X 5 X X X X X 0 X 0 5 10 OUTLINE 1. Introduction 2.

More information

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics. Jiti Gao

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics. Jiti Gao Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics Jiti Gao Department of Statistics School of Mathematics and Statistics The University of Western Australia Crawley

More information

First Year Examination Department of Statistics, University of Florida

First Year Examination Department of Statistics, University of Florida First Year Examination Department of Statistics, University of Florida August 19, 010, 8:00 am - 1:00 noon Instructions: 1. You have four hours to answer questions in this examination.. You must show your

More information

Chapter 4: Asymptotic Properties of the MLE (Part 2)

Chapter 4: Asymptotic Properties of the MLE (Part 2) Chapter 4: Asymptotic Properties of the MLE (Part 2) Daniel O. Scharfstein 09/24/13 1 / 1 Example Let {(R i, X i ) : i = 1,..., n} be an i.i.d. sample of n random vectors (R, X ). Here R is a response

More information

For more information about how to cite these materials visit

For more information about how to cite these materials visit Author(s): Kerby Shedden, Ph.D., 2010 License: Unless otherwise noted, this material is made available under the terms of the Creative Commons Attribution Share Alike 3.0 License: http://creativecommons.org/licenses/by-sa/3.0/

More information

Probability Theory and Statistics. Peter Jochumzen

Probability Theory and Statistics. Peter Jochumzen Probability Theory and Statistics Peter Jochumzen April 18, 2016 Contents 1 Probability Theory And Statistics 3 1.1 Experiment, Outcome and Event................................ 3 1.2 Probability............................................

More information

Probability and Statistics Notes

Probability and Statistics Notes Probability and Statistics Notes Chapter Seven Jesse Crawford Department of Mathematics Tarleton State University Spring 2011 (Tarleton State University) Chapter Seven Notes Spring 2011 1 / 42 Outline

More information

STAT 512 sp 2018 Summary Sheet

STAT 512 sp 2018 Summary Sheet STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}

More information

Mathematics Ph.D. Qualifying Examination Stat Probability, January 2018

Mathematics Ph.D. Qualifying Examination Stat Probability, January 2018 Mathematics Ph.D. Qualifying Examination Stat 52800 Probability, January 2018 NOTE: Answers all questions completely. Justify every step. Time allowed: 3 hours. 1. Let X 1,..., X n be a random sample from

More information

Chapter 7. Hypothesis Testing

Chapter 7. Hypothesis Testing Chapter 7. Hypothesis Testing Joonpyo Kim June 24, 2017 Joonpyo Kim Ch7 June 24, 2017 1 / 63 Basic Concepts of Testing Suppose that our interest centers on a random variable X which has density function

More information

BIO5312 Biostatistics Lecture 13: Maximum Likelihood Estimation

BIO5312 Biostatistics Lecture 13: Maximum Likelihood Estimation BIO5312 Biostatistics Lecture 13: Maximum Likelihood Estimation Yujin Chung November 29th, 2016 Fall 2016 Yujin Chung Lec13: MLE Fall 2016 1/24 Previous Parametric tests Mean comparisons (normality assumption)

More information

Economics 573 Problem Set 5 Fall 2002 Due: 4 October b. The sample mean converges in probability to the population mean.

Economics 573 Problem Set 5 Fall 2002 Due: 4 October b. The sample mean converges in probability to the population mean. Economics 573 Problem Set 5 Fall 00 Due: 4 October 00 1. In random sampling from any population with E(X) = and Var(X) =, show (using Chebyshev's inequality) that sample mean converges in probability to..

More information

2.1 Linear regression with matrices

2.1 Linear regression with matrices 21 Linear regression with matrices The values of the independent variables are united into the matrix X (design matrix), the values of the outcome and the coefficient are represented by the vectors Y and

More information

Qualifying Exam in Probability and Statistics. https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf

Qualifying Exam in Probability and Statistics. https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part 1: Sample Problems for the Elementary Section of Qualifying Exam in Probability and Statistics https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part 2: Sample Problems for the Advanced Section

More information

UNIVERSITY OF MASSACHUSETTS. Department of Mathematics and Statistics. Basic Exam - Applied Statistics. Tuesday, January 17, 2017

UNIVERSITY OF MASSACHUSETTS. Department of Mathematics and Statistics. Basic Exam - Applied Statistics. Tuesday, January 17, 2017 UNIVERSITY OF MASSACHUSETTS Department of Mathematics and Statistics Basic Exam - Applied Statistics Tuesday, January 17, 2017 Work all problems 60 points are needed to pass at the Masters Level and 75

More information

Advanced Econometrics I

Advanced Econometrics I Lecture Notes Autumn 2010 Dr. Getinet Haile, University of Mannheim 1. Introduction Introduction & CLRM, Autumn Term 2010 1 What is econometrics? Econometrics = economic statistics economic theory mathematics

More information

DA Freedman Notes on the MLE Fall 2003

DA Freedman Notes on the MLE Fall 2003 DA Freedman Notes on the MLE Fall 2003 The object here is to provide a sketch of the theory of the MLE. Rigorous presentations can be found in the references cited below. Calculus. Let f be a smooth, scalar

More information

Linear Methods for Prediction

Linear Methods for Prediction This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike License. Your use of this material constitutes acceptance of that license and the conditions of use of materials on this

More information

Next is material on matrix rank. Please see the handout

Next is material on matrix rank. Please see the handout B90.330 / C.005 NOTES for Wednesday 0.APR.7 Suppose that the model is β + ε, but ε does not have the desired variance matrix. Say that ε is normal, but Var(ε) σ W. The form of W is W w 0 0 0 0 0 0 w 0

More information

Hypothesis Testing. Robert L. Wolpert Department of Statistical Science Duke University, Durham, NC, USA

Hypothesis Testing. Robert L. Wolpert Department of Statistical Science Duke University, Durham, NC, USA Hypothesis Testing Robert L. Wolpert Department of Statistical Science Duke University, Durham, NC, USA An Example Mardia et al. (979, p. ) reprint data from Frets (9) giving the length and breadth (in

More information

CHAPTER 2 SIMPLE LINEAR REGRESSION

CHAPTER 2 SIMPLE LINEAR REGRESSION CHAPTER 2 SIMPLE LINEAR REGRESSION 1 Examples: 1. Amherst, MA, annual mean temperatures, 1836 1997 2. Summer mean temperatures in Mount Airy (NC) and Charleston (SC), 1948 1996 Scatterplots outliers? influential

More information

Simple and Multiple Linear Regression

Simple and Multiple Linear Regression Sta. 113 Chapter 12 and 13 of Devore March 12, 2010 Table of contents 1 Simple Linear Regression 2 Model Simple Linear Regression A simple linear regression model is given by Y = β 0 + β 1 x + ɛ where

More information

Introduction to Estimation Methods for Time Series models Lecture 2

Introduction to Estimation Methods for Time Series models Lecture 2 Introduction to Estimation Methods for Time Series models Lecture 2 Fulvio Corsi SNS Pisa Fulvio Corsi Introduction to Estimation () Methods for Time Series models Lecture 2 SNS Pisa 1 / 21 Estimators:

More information

Brief Review on Estimation Theory

Brief Review on Estimation Theory Brief Review on Estimation Theory K. Abed-Meraim ENST PARIS, Signal and Image Processing Dept. abed@tsi.enst.fr This presentation is essentially based on the course BASTA by E. Moulines Brief review on

More information

Spring 2012 Math 541A Exam 1. X i, S 2 = 1 n. n 1. X i I(X i < c), T n =

Spring 2012 Math 541A Exam 1. X i, S 2 = 1 n. n 1. X i I(X i < c), T n = Spring 2012 Math 541A Exam 1 1. (a) Let Z i be independent N(0, 1), i = 1, 2,, n. Are Z = 1 n n Z i and S 2 Z = 1 n 1 n (Z i Z) 2 independent? Prove your claim. (b) Let X 1, X 2,, X n be independent identically

More information

The Statistical Property of Ordinary Least Squares

The Statistical Property of Ordinary Least Squares The Statistical Property of Ordinary Least Squares The linear equation, on which we apply the OLS is y t = X t β + u t Then, as we have derived, the OLS estimator is ˆβ = [ X T X] 1 X T y Then, substituting

More information

Basic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = The errors are uncorrelated with common variance:

Basic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = The errors are uncorrelated with common variance: 8. PROPERTIES OF LEAST SQUARES ESTIMATES 1 Basic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = 0. 2. The errors are uncorrelated with common variance: These assumptions

More information

LIST OF FORMULAS FOR STK1100 AND STK1110

LIST OF FORMULAS FOR STK1100 AND STK1110 LIST OF FORMULAS FOR STK1100 AND STK1110 (Version of 11. November 2015) 1. Probability Let A, B, A 1, A 2,..., B 1, B 2,... be events, that is, subsets of a sample space Ω. a) Axioms: A probability function

More information

Chapter 1. Linear Regression with One Predictor Variable

Chapter 1. Linear Regression with One Predictor Variable Chapter 1. Linear Regression with One Predictor Variable 1.1 Statistical Relation Between Two Variables To motivate statistical relationships, let us consider a mathematical relation between two mathematical

More information

Mathematical statistics

Mathematical statistics October 4 th, 2018 Lecture 12: Information Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation Chapter

More information

EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix)

EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) 1 EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) Taisuke Otsu London School of Economics Summer 2018 A.1. Summation operator (Wooldridge, App. A.1) 2 3 Summation operator For

More information

Simple Linear Regression

Simple Linear Regression Simple Linear Regression ST 430/514 Recall: A regression model describes how a dependent variable (or response) Y is affected, on average, by one or more independent variables (or factors, or covariates)

More information

ECE 275B Homework # 1 Solutions Winter 2018

ECE 275B Homework # 1 Solutions Winter 2018 ECE 275B Homework # 1 Solutions Winter 2018 1. (a) Because x i are assumed to be independent realizations of a continuous random variable, it is almost surely (a.s.) 1 the case that x 1 < x 2 < < x n Thus,

More information

Spring 2012 Math 541B Exam 1

Spring 2012 Math 541B Exam 1 Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote

More information

Estimation of the Response Mean. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 27

Estimation of the Response Mean. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 27 Estimation of the Response Mean Copyright c 202 Dan Nettleton (Iowa State University) Statistics 5 / 27 The Gauss-Markov Linear Model y = Xβ + ɛ y is an n random vector of responses. X is an n p matrix

More information

STA 2101/442 Assignment 3 1

STA 2101/442 Assignment 3 1 STA 2101/442 Assignment 3 1 These questions are practice for the midterm and final exam, and are not to be handed in. 1. Suppose X 1,..., X n are a random sample from a distribution with mean µ and variance

More information