1 18.650 Statistics for Applications Chapter 5: Parametric hypothesis testing 1/37

2 Cherry Blossom run (1) The Credit Union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009 there were participants, and the average running time was 103.5 minutes. Were runners faster in 2012? To answer this question, select n runners from the 2012 race at random and denote by X_1, ..., X_n their running times. 2/37

3 Cherry Blossom run (2) We can see from past data that the running time has a Gaussian distribution. The variance was 373. 3/37

4 Cherry Blossom run (3) We are given i.i.d. r.v. X_1, ..., X_n and we want to know if X_1 ~ N(103.5, 373). This is a hypothesis testing problem. There are many ways this could be false: 1. E[X_1] ≠ 103.5; 2. Var(X_1) ≠ 373; 3. X_1 may not even be Gaussian. We are interested in a very specific question: is E[X_1] < 103.5? 4/37

5 Cherry Blossom run (4) We make the following assumptions: 1. Var(X_1) = 373 (the variance is the same in 2009 and 2012); 2. X_1 is Gaussian. The only thing that we did not fix is E[X_1] = µ. Now we want to test (only): is µ = 103.5 or is µ < 103.5? By making modeling assumptions, we have reduced the number of ways the hypothesis X_1 ~ N(103.5, 373) may be rejected. The only way it can be rejected is if X_1 ~ N(µ, 373) for some µ < 103.5. We compare an expected value to a fixed reference number (103.5). 5/37

6 Cherry Blossom run (5) Simple heuristic: if X̄_n < 103.5, then conclude that µ < 103.5. This could go wrong if I randomly pick only fast runners in my sample X_1, ..., X_n. Better heuristic: if X̄_n < 103.5 − (something that → 0 as n → ∞), then conclude that µ < 103.5. To make this intuition more precise, we need to take the size of the random fluctuations of X̄_n into account! 6/37
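To make the size of these fluctuations concrete, here is a minimal sketch in Python (not from the slides): it standardizes the sample mean of a simulated, purely hypothetical 2012 sample under H_0: µ = 103.5 with the known variance 373, and compares it with a Gaussian quantile.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 2012 sample: n runners with Gaussian times, variance 373 (assumed known).
n = 50
times = rng.normal(loc=100.0, scale=np.sqrt(373), size=n)  # true mean 100 < 103.5 here

x_bar = times.mean()
# Standardized fluctuation of the sample mean under H0: mu = 103.5.
z = np.sqrt(n) * (x_bar - 103.5) / np.sqrt(373)

# One-sided test of H0: mu = 103.5 vs H1: mu < 103.5 at level 5%:
# reject when z falls below the 5% Gaussian quantile.
reject = z < stats.norm.ppf(0.05)
print(f"x_bar = {x_bar:.2f}, z = {z:.2f}, reject H0: {reject}")
```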

7 Clinical trials (1) Pharmaceutical companies use hypothesis testing to test whether a new drug is effective. To do so, they administer the drug to a group of patients (test group) and a placebo to another group (control group). Assume that the drug is a cough syrup. Let µ_control denote the expected number of expectorations per hour after a patient has used the placebo. Let µ_drug denote the expected number of expectorations per hour after a patient has used the syrup. We want to know if µ_drug < µ_control. We compare two expected values. No reference number. 7/37

8 Clinical trials (2) Let X_1, ..., X_{n_drug} denote n_drug i.i.d. r.v. with distribution Poiss(µ_drug). Let Y_1, ..., Y_{n_control} denote n_control i.i.d. r.v. with distribution Poiss(µ_control). We want to test if µ_drug < µ_control. Heuristic: if X̄_{n_drug} < Ȳ_{n_control} − (something that → 0), then conclude that µ_drug < µ_control. 8/37
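A small sketch of this heuristic in Python (not from the slides): the data are simulated, and standardizing the difference of sample means by an estimated standard error is one natural concrete choice for the "something that → 0".

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical trial: expectoration counts per hour, Poisson-distributed.
n_drug, n_control = 120, 110
x = rng.poisson(lam=4.0, size=n_drug)     # drug group (true rate assumed 4.0)
y = rng.poisson(lam=5.0, size=n_control)  # control group (true rate assumed 5.0)

# CLT-based standardization (for Poisson, mean = variance, so the sample
# means can serve as variance estimates).
diff = x.mean() - y.mean()
se = np.sqrt(x.mean() / n_drug + y.mean() / n_control)
z = diff / se

# One-sided test of H0: mu_drug = mu_control vs H1: mu_drug < mu_control.
p_value = stats.norm.cdf(z)
print(f"diff = {diff:.3f}, z = {z:.2f}, one-sided p-value = {p_value:.4f}")
```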

9 Heuristics (1) Example 1: A coin is tossed 80 times, and Heads are obtained 54 times. Can we conclude that the coin is significantly unfair? n = 80, X_1, ..., X_n i.i.d. ~ Ber(p); X̄_n = 54/80 ≈ .68. If it were true that p = .5: by the CLT + Slutsky's theorem, √n (X̄_n − .5)/√(.5(1 − .5)) ≈ N(0, 1). Our data gives √n (X̄_n − .5)/√(.5(1 − .5)) ≈ 3.22. Conclusion: it seems quite reasonable to reject the hypothesis p = .5. 9/37

10 Heuristics (2) Example 2: A coin is tossed 30 times, and Heads are obtained 13 times. Can we conclude that the coin is significantly unfair? n = 30, X_1, ..., X_n i.i.d. ~ Ber(p); X̄_n = 13/30 ≈ .43. If it were true that p = .5: by the CLT + Slutsky's theorem, √n (X̄_n − .5)/√(.5(1 − .5)) ≈ N(0, 1). Our data gives √n (X̄_n − .5)/√(.5(1 − .5)) ≈ −.77. The number −.77 is a plausible realization of a random variable Z ~ N(0, 1). Conclusion: our data does not suggest that the coin is unfair. 10/37
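Both standardized values can be checked in a few lines of Python (the slides round X̄_n to .68 and .43 before standardizing, which gives 3.22 and −.77; the unrounded means give ≈3.13 and ≈−.73).

```python
import numpy as np

def standardized_mean(heads: int, n: int, p0: float = 0.5) -> float:
    """CLT/Slutsky standardization of the sample proportion under H0: p = p0."""
    p_hat = heads / n
    return np.sqrt(n) * (p_hat - p0) / np.sqrt(p0 * (1 - p0))

# Example 1: 80 tosses, 54 heads; Example 2: 30 tosses, 13 heads.
for heads, n in [(54, 80), (13, 30)]:
    z = standardized_mean(heads, n)
    print(f"n = {n:2d}, heads = {heads}: z = {z:+.2f}")
```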

11 Statistical formulation (1) Consider a sample X_1, ..., X_n of i.i.d. random variables and a statistical model (E, (P_θ)_{θ∈Θ}). Let Θ_0 and Θ_1 be disjoint subsets of Θ. Consider the two hypotheses: H_0: θ ∈ Θ_0 vs. H_1: θ ∈ Θ_1. H_0 is the null hypothesis, H_1 is the alternative hypothesis. If we believe that the true θ is either in Θ_0 or in Θ_1, we may want to test H_0 against H_1. We want to decide whether to reject H_0 (look for evidence against H_0 in the data). 11/37

12 Statistical formulation (2) H_0 and H_1 do not play a symmetric role: the data is only used to try to disprove H_0. In particular, lack of evidence does not mean that H_0 is true ("innocent until proven guilty"). A test is a statistic ψ ∈ {0, 1} such that: if ψ = 0, H_0 is not rejected; if ψ = 1, H_0 is rejected. Coin example: H_0: p = 1/2 vs. H_1: p ≠ 1/2. ψ = 1{ √n |X̄_n − .5|/√(.5(1 − .5)) > C }, for some C > 0. How to choose the threshold C? 12/37

13 Statistical formulation (3) Rejection region of a test ψ: R_ψ = {x ∈ Eⁿ : ψ(x) = 1}. Type 1 error of a test ψ (rejecting H_0 when it is actually true): α_ψ : Θ_0 → ℝ, θ ↦ P_θ[ψ = 1]. Type 2 error of a test ψ (not rejecting H_0 although H_1 is actually true): β_ψ : Θ_1 → ℝ, θ ↦ P_θ[ψ = 0]. Power of a test ψ: π_ψ = inf_{θ∈Θ_1} (1 − β_ψ(θ)). 13/37
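These quantities can be approximated by Monte Carlo. A sketch for the coin test ψ = 1{√n |X̄_n − .5|/.5 > C} with C = 1.96 (the sample size and number of replications are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def psi(sample: np.ndarray, C: float = 1.96) -> int:
    """Coin test: reject H0: p = 1/2 when the standardized mean exceeds C in absolute value."""
    n = sample.size
    z = np.sqrt(n) * abs(sample.mean() - 0.5) / 0.5
    return int(z > C)

def rejection_prob(p: float, n: int = 80, reps: int = 20_000) -> float:
    """Monte Carlo estimate of P_p[psi = 1]."""
    return float(np.mean([psi(rng.binomial(1, p, size=n)) for _ in range(reps)]))

print("type 1 error at p = 0.5:", rejection_prob(0.5))   # alpha_psi(0.5), close to 5%
print("power at p = 0.6:       ", rejection_prob(0.6))   # 1 - beta_psi(0.6)
```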

14 Statistical formulation (4) A test ψ has level α if α_ψ(θ) ≤ α for all θ ∈ Θ_0. A test ψ has asymptotic level α if lim_{n→∞} α_ψ(θ) ≤ α for all θ ∈ Θ_0. In general, a test has the form ψ = 1{T_n > c}, for some statistic T_n and threshold c ∈ ℝ. T_n is called the test statistic. The rejection region is R_ψ = {T_n > c}. 14/37

15 Example (1) Let X_1, ..., X_n be i.i.d. Ber(p), for some unknown p ∈ (0, 1). We want to test: H_0: p = 1/2 vs. H_1: p ≠ 1/2 with asymptotic level α ∈ (0, 1). Let T_n = √n |p̂_n − 0.5|/√(.5(1 − .5)), where p̂_n is the MLE. If H_0 is true, then by the CLT and Slutsky's theorem, P[T_n > q_{α/2}] → α as n → ∞. Let ψ_α = 1{T_n > q_{α/2}}. 15/37

16 Example (2) Coming back to the two previous coin examples: for α = 5%, q_{α/2} = 1.96, so: In Example 1, H_0 is rejected at the asymptotic level 5% by the test ψ_{5%}; In Example 2, H_0 is not rejected at the asymptotic level 5% by the test ψ_{5%}. Question: In Example 1, for what level α would ψ_α not reject H_0? And in Example 2, at which level α would ψ_α reject H_0? 16/37

17 p-value Definition: The (asymptotic) p-value of a test ψ_α is the smallest (asymptotic) level α at which ψ_α rejects H_0. It is random; it depends on the sample. Golden rule: p-value ≤ α ⇔ H_0 is rejected by ψ_α at the (asymptotic) level α. The smaller the p-value, the more confidently one can reject H_0. Example 1: p-value = P[|Z| > 3.21] ≈ .0013. Example 2: p-value = P[|Z| > .77] ≈ 0.44. 17/37
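Both p-values quoted above can be reproduced from the Gaussian tail; a short check in Python using the z-values from the heuristics slides:

```python
from scipy import stats

# Two-sided asymptotic p-values for the two coin examples:
# p-value = P[|Z| > |z_obs|] with Z ~ N(0, 1).
for label, z_obs in [("Example 1", 3.21), ("Example 2", -0.77)]:
    p_value = 2 * stats.norm.sf(abs(z_obs))
    print(f"{label}: p-value = {p_value:.4f}")
```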

18 Neyman-Pearson's paradigm Idea: For given hypotheses, among all tests of level/asymptotic level α, is it possible to find one that has maximal power? Example: The trivial test ψ = 0 that never rejects H_0 has a perfect level (α = 0) but poor power (π_ψ = 0). Neyman-Pearson's theory provides (the most) powerful tests with given level. In 18.650, we only study several cases. 18/37

19 The χ² distributions Definition: For a positive integer d, the χ² (pronounced "kai-squared") distribution with d degrees of freedom is the law of the random variable Z_1² + Z_2² + ... + Z_d², where Z_1, ..., Z_d are i.i.d. N(0, 1). Examples: If Z ~ N_d(0, I_d), then ||Z||_2² ~ χ²_d. χ²_2 = Exp(1/2). Recall that the sample variance is given by S_n = (1/n) Σ_{i=1}^n (X_i − X̄_n)² = (1/n) Σ_{i=1}^n X_i² − (X̄_n)². Cochran's theorem implies that for X_1, ..., X_n i.i.d. ~ N(µ, σ²), if S_n is the sample variance, then n S_n/σ² ~ χ²_{n−1}. 19/37
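A quick Monte Carlo sanity check of both facts (a sketch; the parameters and sample sizes are arbitrary, and S_n uses the 1/n normalization from the slide):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Check 1: a sum of d squared standard Gaussians behaves like chi^2_d (mean d).
d, reps = 5, 100_000
sums = (rng.standard_normal((reps, d)) ** 2).sum(axis=1)
print("mean of Z_1^2 + ... + Z_d^2:", round(float(sums.mean()), 3), "(theoretical mean:", d, ")")

# Check 2 (Cochran): n * S_n / sigma^2 ~ chi^2_{n-1} for a Gaussian sample,
# with S_n the 1/n-normalized sample variance.
n, mu, sigma = 10, 103.5, np.sqrt(373)
x = rng.normal(mu, sigma, size=(reps, n))
stat = n * x.var(axis=1) / sigma**2          # ddof=0, i.e. the 1/n convention
for q in (0.25, 0.5, 0.75):
    print(f"q={q}: empirical {np.quantile(stat, q):.2f}"
          f" vs chi^2(n-1) quantile {stats.chi2.ppf(q, n - 1):.2f}")
```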

20 Student's T distributions Definition: For a positive integer d, the Student's T distribution with d degrees of freedom (denoted by t_d) is the law of the random variable Z/√(V/d), where Z ~ N(0, 1), V ~ χ²_d and Z ⊥ V (Z is independent of V). Example: Cochran's theorem implies that for X_1, ..., X_n i.i.d. ~ N(µ, σ²), if S_n is the sample variance, then √(n − 1) (X̄_n − µ)/√(S_n) ~ t_{n−1}. 20/37
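A Monte Carlo check of this consequence of Cochran's theorem (a sketch with arbitrary parameters; S_n again uses the 1/n normalization):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Monte Carlo check: sqrt(n-1) * (X_bar - mu) / sqrt(S_n) ~ t_{n-1},
# with S_n the 1/n-normalized sample variance (parameters are arbitrary).
n, mu, sigma, reps = 8, 2.0, 3.0, 200_000
x = rng.normal(mu, sigma, size=(reps, n))
t_stat = np.sqrt(n - 1) * (x.mean(axis=1) - mu) / np.sqrt(x.var(axis=1))

for q in (0.90, 0.95, 0.99):
    print(f"quantile {q}: empirical {np.quantile(t_stat, q):.3f}"
          f" vs t(n-1) quantile {stats.t.ppf(q, n - 1):.3f}")
```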

21 Wald's test (1) Consider an i.i.d. sample X_1, ..., X_n with statistical model (E, (P_θ)_{θ∈Θ}), where Θ ⊆ ℝ^d (d ≥ 1), and let θ_0 ∈ Θ be fixed and given. Consider the following hypotheses: H_0: θ = θ_0 vs. H_1: θ ≠ θ_0. Let θ̂_n^MLE be the MLE. Assume the MLE technical conditions are satisfied. If H_0 is true, then √n I(θ̂_n^MLE)^{1/2} (θ̂_n^MLE − θ_0) →(d) N_d(0, I_d) w.r.t. P_{θ_0}, as n → ∞. 21/37

22 Wald's test (2) Hence, T_n := n (θ̂_n^MLE − θ_0)ᵀ I(θ̂_n^MLE) (θ̂_n^MLE − θ_0) →(d) χ²_d w.r.t. P_{θ_0}, as n → ∞. Wald's test with asymptotic level α ∈ (0, 1): ψ = 1{T_n > q_α}, where q_α is the (1 − α)-quantile of χ²_d (see tables). Remark: Wald's test is also valid if H_1 has the form θ > θ_0 or θ < θ_0 or θ ≠ θ_0. 22/37
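As a one-dimensional illustration (a sketch, not taken from the slides): for X_1, ..., X_n ~ Ber(p) and H_0: p = p_0, the Fisher information is I(p) = 1/(p(1 − p)), so T_n = n (p̂_n − p_0)²/(p̂_n(1 − p̂_n)).

```python
import numpy as np
from scipy import stats

def wald_test_bernoulli(x: np.ndarray, p0: float, alpha: float = 0.05):
    """Wald test of H0: p = p0 for an i.i.d. Bernoulli sample x."""
    n, p_hat = x.size, x.mean()
    # Fisher information at the MLE: I(p) = 1 / (p(1-p)).
    T_n = n * (p_hat - p0) ** 2 / (p_hat * (1 - p_hat))
    q_alpha = stats.chi2.ppf(1 - alpha, df=1)   # (1-alpha)-quantile of chi^2_1
    return T_n, T_n > q_alpha

# Coin Example 1 again: 54 heads in 80 tosses, H0: p = 1/2.
x = np.array([1] * 54 + [0] * 26)
T_n, reject = wald_test_bernoulli(x, p0=0.5)
print(f"T_n = {T_n:.2f}, reject H0 at level 5%: {reject}")
```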

23 Likelihood ratio test (1) Consider an i.i.d. sample X_1, ..., X_n with statistical model (E, (P_θ)_{θ∈Θ}), where Θ ⊆ ℝ^d (d ≥ 1). Suppose the null hypothesis has the form H_0: (θ_{r+1}, ..., θ_d) = (θ_{r+1}^{(0)}, ..., θ_d^{(0)}), for some fixed and given numbers θ_{r+1}^{(0)}, ..., θ_d^{(0)}. Let θ̂_n = argmax_{θ∈Θ} ℓ_n(θ) (MLE) and θ̂_n^c = argmax_{θ∈Θ_0} ℓ_n(θ) ("constrained MLE"). 23/37

24 Likelihood ratio test (2) Test statistic: T_n = 2 (ℓ_n(θ̂_n) − ℓ_n(θ̂_n^c)). Theorem: Assume H_0 is true and the MLE technical conditions are satisfied. Then, T_n →(d) χ²_{d−r} w.r.t. P_θ, as n → ∞. Likelihood ratio test with asymptotic level α ∈ (0, 1): ψ = 1{T_n > q_α}, where q_α is the (1 − α)-quantile of χ²_{d−r} (see tables). 24/37
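A minimal one-parameter illustration (a sketch; with d = 1 and r = 0 the whole parameter is fixed under H_0 and the limit is χ²_1), reusing the coin data from Example 1:

```python
import numpy as np
from scipy import stats

def bernoulli_loglik(p: float, heads: int, n: int) -> float:
    """Log-likelihood of an i.i.d. Ber(p) sample with `heads` ones out of n."""
    return heads * np.log(p) + (n - heads) * np.log(1 - p)

heads, n, p0 = 54, 80, 0.5
p_hat = heads / n                      # unconstrained MLE
# The constrained MLE under H0 is just p0 (the whole parameter is fixed).
T_n = 2 * (bernoulli_loglik(p_hat, heads, n) - bernoulli_loglik(p0, heads, n))

q_alpha = stats.chi2.ppf(0.95, df=1)   # d - r = 1 degree of freedom
print(f"T_n = {T_n:.2f}, chi^2_1 critical value = {q_alpha:.2f}, reject: {T_n > q_alpha}")
```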

25 Testing implicit hypotheses (1) Let X_1, ..., X_n be i.i.d. random variables and let θ ∈ ℝ^d be a parameter associated with the distribution of X_1 (e.g. a moment, the parameter of a statistical model, etc.). Let g : ℝ^d → ℝ^k be continuously differentiable (with k < d). Consider the following hypotheses: H_0: g(θ) = 0 vs. H_1: g(θ) ≠ 0. E.g. g(θ) = (θ_1, θ_2) (k = 2), or g(θ) = θ_1 − θ_2 (k = 1), or ... 25/37

26 Testing implicit hypotheses (2) Suppose an asymptotically normal estimator θ̂_n is available: √n (θ̂_n − θ) →(d) N_d(0, Σ(θ)), as n → ∞. Delta method: √n (g(θ̂_n) − g(θ)) →(d) N_k(0, Γ(θ)), where Γ(θ) = ∇g(θ)ᵀ Σ(θ) ∇g(θ) ∈ ℝ^{k×k}. Assume Σ(θ) is invertible and ∇g(θ) has rank k. Then Γ(θ) is invertible and √n Γ(θ)^{−1/2} (g(θ̂_n) − g(θ)) →(d) N_k(0, I_k), as n → ∞. 26/37

27 Testing implicit hypotheses (3) Then, by Slutsky's theorem, if Γ(θ) is continuous in θ, √n Γ(θ̂_n)^{−1/2} (g(θ̂_n) − g(θ)) →(d) N_k(0, I_k). Hence, if H_0 is true, i.e., g(θ) = 0, T_n := n g(θ̂_n)ᵀ Γ(θ̂_n)^{−1} g(θ̂_n) →(d) χ²_k, as n → ∞. Test with asymptotic level α: ψ = 1{T_n > q_α}, where q_α is the (1 − α)-quantile of χ²_k (see tables). 27/37
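A small worked instance (a sketch, under the simplifying assumption that both coordinates of θ = (p_1, p_2) are estimated from samples of the same size n, so a single √n scaling applies): with g(θ) = θ_1 − θ_2 and Σ(θ) = diag(p_1(1 − p_1), p_2(1 − p_2)), the delta method gives Γ(θ) = p_1(1 − p_1) + p_2(1 − p_2) and T_n = n g(θ̂_n)²/Γ(θ̂_n).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Two Bernoulli samples of the same size n (assumed so that one sqrt(n) scaling applies).
n = 200
x1 = rng.binomial(1, 0.55, size=n)
x2 = rng.binomial(1, 0.45, size=n)

p1_hat, p2_hat = x1.mean(), x2.mean()
g_hat = p1_hat - p2_hat                                    # g(theta_hat)
gamma_hat = p1_hat * (1 - p1_hat) + p2_hat * (1 - p2_hat)  # Gamma(theta_hat), here a scalar

T_n = n * g_hat**2 / gamma_hat
q_alpha = stats.chi2.ppf(0.95, df=1)                       # k = 1
print(f"T_n = {T_n:.2f}, reject H0: p_1 = p_2 at level 5%: {T_n > q_alpha}")
```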

28 The multinomial case: χ² test (1) Let E = {a_1, ..., a_K} be a finite space and (P_p)_{p∈Δ_K} be the family of all probability distributions on E, where Δ_K = { p = (p_1, ..., p_K) ∈ (0, 1)^K : Σ_{j=1}^K p_j = 1 }. For p ∈ Δ_K and X ~ P_p, P_p[X = a_j] = p_j, j = 1, ..., K. 28/37

29 The multinomial case: χ² test (2) Let X_1, ..., X_n be i.i.d. ~ P_p, for some unknown p ∈ Δ_K, and let p^0 ∈ Δ_K be fixed. We want to test: H_0: p = p^0 vs. H_1: p ≠ p^0 with asymptotic level α ∈ (0, 1). Example: If p^0 = (1/K, 1/K, ..., 1/K), we are testing whether P_p is the uniform distribution on E. 29/37

30 The multinomial case: χ² test (3) Likelihood of the model: L_n(X_1, ..., X_n, p) = p_1^{N_1} p_2^{N_2} ... p_K^{N_K}, where N_j = #{i = 1, ..., n : X_i = a_j}. Let p̂ be the MLE: p̂_j = N_j/n, j = 1, ..., K. p̂ maximizes log L_n(X_1, ..., X_n, p) under the constraint Σ_{j=1}^K p_j = 1. 30/37

31 The multinomial case: χ² test (4) If H_0 is true, then √n (p̂ − p^0) is asymptotically normal, and the following holds. Theorem: T_n := n Σ_{j=1}^K (p̂_j − p^0_j)²/p^0_j →(d) χ²_{K−1}, as n → ∞. χ² test with asymptotic level α: ψ_α = 1{T_n > q_α}, where q_α is the (1 − α)-quantile of χ²_{K−1}. Asymptotic p-value of this test: p-value = P[Z > T_n | T_n], where Z ~ χ²_{K−1} and Z ⊥ T_n. 31/37
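A short sketch of this test on simulated die rolls (the die probabilities are hypothetical); scipy.stats.chisquare computes the same Pearson statistic from observed and expected counts:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Simulated rolls of a (possibly loaded) six-sided die; H0: uniform, p0 = (1/6, ..., 1/6).
K, n = 6, 600
p_true = np.array([0.20, 0.17, 0.17, 0.16, 0.15, 0.15])   # hypothetical true probabilities
rolls = rng.choice(K, size=n, p=p_true)
N = np.bincount(rolls, minlength=K)                        # counts N_1, ..., N_K

p0 = np.full(K, 1 / K)
p_hat = N / n
T_n = n * np.sum((p_hat - p0) ** 2 / p0)
p_value = stats.chi2.sf(T_n, df=K - 1)
print(f"T_n = {T_n:.2f}, p-value = {p_value:.4f}")

# Same statistic via SciPy's built-in Pearson chi-square test:
print(stats.chisquare(f_obs=N, f_exp=n * p0))
```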

32 The Gaussian case: Student's test (1) Let X_1, ..., X_n be i.i.d. ~ N(µ, σ²), for some unknown µ ∈ ℝ, σ² > 0, and let µ_0 ∈ ℝ be fixed and given. We want to test: H_0: µ = µ_0 vs. H_1: µ ≠ µ_0 with asymptotic level α ∈ (0, 1). If σ² is known: let T_n = √n (X̄_n − µ_0)/σ. Then T_n ~ N(0, 1) and ψ_α = 1{|T_n| > q_{α/2}} is a test with (non-asymptotic) level α. 32/37

33 The Gaussian case: Student's test (2) If σ² is unknown: let T̃_n = √(n − 1) (X̄_n − µ_0)/√(S_n), where S_n is the sample variance. Cochran's theorem: X̄_n ⊥ S_n; n S_n/σ² ~ χ²_{n−1}. Hence, T̃_n ~ t_{n−1}: Student's distribution with n − 1 degrees of freedom. 33/37

34 The Gaussian case: Student's test (3) Student's test with (non-asymptotic) level α ∈ (0, 1): ψ_α = 1{|T̃_n| > q_{α/2}}, where q_{α/2} is the (1 − α/2)-quantile of t_{n−1}. If H_1 is µ > µ_0, Student's test with level α ∈ (0, 1) is: ψ'_α = 1{T̃_n > q_α}, where q_α is the (1 − α)-quantile of t_{n−1}. Advantages of Student's test: non-asymptotic; can be run on small samples. Drawback of Student's test: it relies on the assumption that the sample is Gaussian. 34/37
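A sketch of the one-sample Student's test on simulated Gaussian running times. Note that √(n − 1)(X̄_n − µ_0)/√S_n with the 1/n variance equals the more common √n(X̄_n − µ_0)/s with the 1/(n − 1) variance, which is what scipy.stats.ttest_1samp reports (the one-sided alternative keyword needs SciPy ≥ 1.6):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical small 2012 sample of running times (Gaussian assumption).
x = rng.normal(loc=98.0, scale=np.sqrt(373), size=12)
mu0 = 103.5

n = x.size
T_tilde = np.sqrt(n - 1) * (x.mean() - mu0) / np.sqrt(x.var())   # slide's statistic (1/n variance)

# One-sided test H1: mu < mu0 at level 5%, using the t_{n-1} quantile.
reject = T_tilde < stats.t.ppf(0.05, df=n - 1)
print(f"T~_n = {T_tilde:.2f}, reject H0 at level 5%: {reject}")

# Equivalent statistic and p-value from SciPy (requires SciPy >= 1.6 for `alternative`):
print(stats.ttest_1samp(x, popmean=mu0, alternative='less'))
```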

35 Two-sample test: large sample case (1) Consider two samples X_1, ..., X_n and Y_1, ..., Y_m of independent random variables such that E[X_1] = ... = E[X_n] = µ_X and E[Y_1] = ... = E[Y_m] = µ_Y. Assume that the variances are known, so assume (without loss of generality) that Var(X_1) = ... = Var(X_n) = Var(Y_1) = ... = Var(Y_m) = 1. We want to test: H_0: µ_X = µ_Y vs. H_1: µ_X ≠ µ_Y with asymptotic level α ∈ (0, 1). 35/37

36 Two-sample test: large sample case (2) From the CLT: √n (X̄_n − µ_X) →(d) N(0, 1) as n → ∞, and √m (Ȳ_m − µ_Y) →(d) N(0, 1) as m → ∞, hence √n (Ȳ_m − µ_Y) →(d) N(0, γ) as n, m → ∞ with n/m → γ. Moreover, the two samples are independent, so √n (X̄_n − Ȳ_m) − √n (µ_X − µ_Y) →(d) N(0, 1 + γ). Under H_0: µ_X = µ_Y: √n (X̄_n − Ȳ_m)/√(1 + n/m) →(d) N(0, 1). Test: ψ_α = 1{ √n |X̄_n − Ȳ_m|/√(1 + n/m) > q_{α/2} }. 36/37
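A sketch of the large-sample two-sample test on simulated data with unit variances, using the normalization √(1 + n/m) derived above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Two independent samples with (known) unit variance, as on the slide.
n, m = 500, 400
x = rng.normal(loc=0.00, scale=1.0, size=n)
y = rng.normal(loc=0.15, scale=1.0, size=m)

# Under H0: mu_X = mu_Y, sqrt(n) * (X_bar - Y_bar) / sqrt(1 + n/m) is approximately N(0, 1).
z = np.sqrt(n) * (x.mean() - y.mean()) / np.sqrt(1 + n / m)
reject = abs(z) > stats.norm.ppf(0.975)     # q_{alpha/2} for alpha = 5%
print(f"z = {z:.2f}, reject H0 at asymptotic level 5%: {reject}")
```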

37 Two-sample T-test If the variances are unknown but we know that X_i ~ N(µ_X, σ_X²) and Y_i ~ N(µ_Y, σ_Y²), then X̄_n − Ȳ_m ~ N(µ_X − µ_Y, σ_X²/n + σ_Y²/m). Under H_0: (X̄_n − Ȳ_m)/√(σ_X²/n + σ_Y²/m) ~ N(0, 1). For unknown variances: (X̄_n − Ȳ_m)/√(S_X²/n + S_Y²/m) ≈ t_N, where N = (S_X²/n + S_Y²/m)² / ( S_X⁴/(n²(n − 1)) + S_Y⁴/(m²(m − 1)) ). 37/37
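This is Welch's two-sample t-test. A sketch computing the statistic and the degrees of freedom N explicitly (using the unbiased 1/(n − 1) sample variances, the usual Welch convention, which is also what scipy.stats.ttest_ind(..., equal_var=False) uses):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Two independent Gaussian samples with unknown, unequal variances.
n, m = 25, 40
x = rng.normal(loc=10.0, scale=2.0, size=n)
y = rng.normal(loc=11.0, scale=3.5, size=m)

sx2, sy2 = x.var(ddof=1), y.var(ddof=1)        # unbiased sample variances
t = (x.mean() - y.mean()) / np.sqrt(sx2 / n + sy2 / m)

# Welch-Satterthwaite degrees of freedom, as in the formula above.
N = (sx2 / n + sy2 / m) ** 2 / (sx2**2 / (n**2 * (n - 1)) + sy2**2 / (m**2 * (m - 1)))
p_value = 2 * stats.t.sf(abs(t), df=N)
print(f"t = {t:.3f}, N = {N:.1f}, p-value = {p_value:.4f}")

# Same test via SciPy:
print(stats.ttest_ind(x, y, equal_var=False))
```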

38 MIT OpenCourseWare http://ocw.mit.edu 18.650 Statistics for Applications Fall 2016 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
