Testing Hypothesis. Maura Mezzetti. Department of Economics and Finance Università Tor Vergata



2 Hypothesis Testing "It is a mistake to confound strangeness with mystery." (Sherlock Holmes, A Study in Scarlet)

3 Outline
1. Hypothesis tests: the power function; methods of evaluating tests
2. Hypothesis testing for N(µ, σ²); method of inversion
3. Wald test, score test; application to the Pareto distribution
4. p-values; application to the Poisson distribution

4 Hypothesis Tests Hypothesis testing is an inferential statistical procedure used to assess the evidence for a hypothesis (a claim). A test of a hypothesis is a process that uses sample statistics to test a claim about the value of a population parameter.

5 What's a Hypothesis? A hypothesis is a statement about a population parameter (for example, the population mean, the population proportion, or the population variance). Two complementary hypotheses in a hypothesis testing problem are called the null and the alternative hypothesis. The claim about the parameter needs to be translated from a verbal statement into a mathematical statement; we then write its complement.

6 What's a Hypothesis? The null hypothesis, denoted H0, is a hypothesis that is held to be true unless sufficient evidence to the contrary is obtained. The alternative hypothesis, denoted H1, is the new hypothesis to be tested: a hypothesis against which the null hypothesis is tested and which is held to be true if the null is rejected. A simple hypothesis completely determines the distribution of X: it specifies only one possible distribution for the sample. A composite hypothesis specifies more than one possible distribution for the sample.

7 What's a Hypothesis Test? Definition: A hypothesis testing procedure, or hypothesis test, is a rule that specifies: for which sample values the decision is made to accept H0 as true; and for which sample values H0 is rejected and H1 is accepted as true.

8 Rejection Region The subset of the sample space for which H0 will be rejected is called the rejection region or critical region. The complement of the rejection region is called the acceptance region. The rejection region R of a hypothesis test is usually defined through a test statistic W(X), a function of the sample:
R = {x : W(x) > c} (reject H0), A = {x : W(x) ≤ c} (accept H0)

9 Types of Errors Since the decision is based on a sample and not on the entire population, there is always the possibility of making the wrong decision. There are two types of errors. A Type I error occurs if H0 is rejected when it is actually true. A Type II error occurs if H0 is not rejected when it is actually false.

10 How can we go wrong?

            H0 is true      H0 is false
Accept H0   No error        Type II error
Reject H0   Type I error    No error

A Type I error is considered MORE SERIOUS: we do our best not to commit a Type I error.

11 Errors in Making Decisions Type I error: rejecting a true null hypothesis. It has serious consequences. The probability of a Type I error is α, called the level of significance. Type II error: failing to reject a false null hypothesis. The probability of a Type II error is β.

12 Thought Questions In the courtroom, juries must make a decision about the guilt or innocence of a defendant. Suppose you are on the jury in a murder trial. It is obviously a mistake if the jury claims the suspect is guilty when in fact he or she is innocent. What is the other type of mistake the jury could make? Which is more serious?

13 Decision Results (figure)

14 Example: N(µ, σ = 10), n = 100. H0: µ = 0 vs H1: µ = 2. R = {x : x̄ > 0.8}

15 Example: N(µ, σ = 10), n = 100 (figure)

16 Example: N(µ, σ = 10), n = 100, α (figure)

17 Example: N(µ, σ = 10), n = 100, β (figure)

18 Example: N(µ, σ = 100) (figure)

19 The Power Function Definition: The power function of a hypothesis test with rejection region R is the function of θ defined by β(θ) = P_θ(X ∈ R). Example: Let X1, ..., X10 be a random sample from a population N(µ, σ² = 4), let H0: µ ≤ 0 and H1: µ > 0, and consider a test that rejects H0 when X̄ > 1. The power function is
β(µ) = P(X̄ > 1 | µ) = P(Z > (1 − µ)/(2/√10))
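The power function on this slide can be evaluated numerically. A minimal sketch using only the standard library (the helper names are illustrative, not from the slides):

```python
import math

def norm_sf(z):
    """Survival function P(Z > z) of the standard normal, via erfc."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def power(mu, n=10, sigma=2.0, cutoff=1.0):
    """beta(mu) = P(Xbar > cutoff | mu), with Xbar ~ N(mu, sigma^2 / n)."""
    se = sigma / math.sqrt(n)
    return norm_sf((cutoff - mu) / se)

# Power is small near the null boundary mu = 0 and grows with mu
print(round(power(0.0), 4), round(power(1.0), 4), round(power(2.0), 4))
```

Evaluating the function on a grid of µ values traces the power curve shown on the following slide.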

20 The Power Function (figure)

21 α and β Definition: For 0 ≤ α ≤ 1, a test with power function β(θ) is a size α test if sup_{θ ∈ Θ0} β(θ) = α. Definition: For 0 ≤ α ≤ 1, a test with power function β(θ) is a level α test if sup_{θ ∈ Θ0} β(θ) ≤ α.

22 Example: Computation of α and β Let X1, X2, X3 be a random sample from a population N(µ, σ² = 4), and let H0: µ = 0 and H1: µ = 1. Consider a test with the following rejection region: R = {(x1, x2, x3) : x1 + x2 − x3 > 0.8}. Calculate α and β.

23 α and β α = P(X1 + X2 − X3 > 0.8 | µ = 0) and β = P(X1 + X2 − X3 ≤ 0.8 | µ = 1). X1 + X2 − X3 ~ N(µ, 3σ²). Let Y = X1 + X2 − X3. Under the null hypothesis, Y ~ N(0, 12); under the alternative hypothesis, Y ~ N(1, 12).

24 Example (figure)

25 α and β With Y = X1 + X2 − X3 ~ N(µ, 12): under the null Y ~ N(0, 12), under the alternative Y ~ N(1, 12).
α = P(Y > 0.8 | µ = 0) = P((Y − 0)/√12 > 0.8/√12) = P(Z > 0.23) = 0.41
β = P(Y ≤ 0.8 | µ = 1) = P((Y − 1)/√12 ≤ (0.8 − 1)/√12) = P(Z ≤ −0.058) = 0.477

26 Example: Computation of α and β Let X1, X2, X3 be a random sample from a population N(µ, σ² = 4), and let H0: µ = 0 and H1: µ = 1. Consider a test with the following rejection region: R = {(x1, x2, x3) : (x1 + x2 + x3)/3 > 0.8}. Calculate α and β.

27 α and β α = P((X1 + X2 + X3)/3 > 0.8 | µ = 0) and β = P((X1 + X2 + X3)/3 ≤ 0.8 | µ = 1). (X1 + X2 + X3)/3 ~ N(µ, σ²/3). Let Y = (X1 + X2 + X3)/3. Under the null hypothesis, Y ~ N(0, 4/3); under the alternative hypothesis, Y ~ N(1, 4/3).

28 Example (figure)

29 α and β
α = P(Y > 0.8 | µ = 0) = P((Y − 0)/√(4/3) > 0.8/√(4/3)) = P(Z > 0.69) ≈ 0.244
β = P(Y ≤ 0.8 | µ = 1) = P((Y − 1)/√(4/3) ≤ (0.8 − 1)/√(4/3)) = P(Z ≤ −0.17) ≈ 0.431
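Both worked examples (the rejection region based on X1 + X2 − X3 and the one based on the sample mean) can be checked numerically. A minimal stdlib sketch, with variable names chosen for illustration:

```python
import math

def norm_cdf(z):
    """CDF of the standard normal, via erfc."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

sigma2 = 4.0

# Test 1: reject when Y = X1 + X2 - X3 > 0.8; Y ~ N(mu, 3*sigma^2) = N(mu, 12)
sd1 = math.sqrt(3 * sigma2)
alpha1 = 1 - norm_cdf((0.8 - 0) / sd1)   # P(Y > 0.8 | mu = 0)
beta1 = norm_cdf((0.8 - 1) / sd1)        # P(Y <= 0.8 | mu = 1)

# Test 2: reject when Y = (X1 + X2 + X3)/3 > 0.8; Y ~ N(mu, sigma^2/3) = N(mu, 4/3)
sd2 = math.sqrt(sigma2 / 3)
alpha2 = 1 - norm_cdf((0.8 - 0) / sd2)
beta2 = norm_cdf((0.8 - 1) / sd2)

print(round(alpha1, 3), round(beta1, 3))  # ~0.409, ~0.477
print(round(alpha2, 3), round(beta2, 3))  # ~0.244, ~0.431
```

The mean-based statistic achieves a much smaller α at the same cutoff, because averaging shrinks the variance while differencing does not.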

30 Most Powerful Tests Definition: The region C is a uniformly most powerful critical region of size α for testing the simple hypothesis H0 against a composite alternative hypothesis H1 if C is a best critical region of size α for testing H0 against each simple hypothesis in H1. The resulting test is said to be uniformly most powerful.

31 Most Powerful Tests Definition: Let C be a class of tests of size α for testing H0: θ ∈ Θ0 versus H1: θ ∈ Θ0^c. A test in class C, with power function β(θ), is a uniformly most powerful (UMP) class C test if β(θ) ≥ β′(θ) for every θ ∈ Θ0^c and for every β′(θ) that is a power function of a test in class C.

32 Neyman-Pearson Lemma Theorem: Consider testing H0: θ = θ0 versus H1: θ = θ1, where the pdf or pmf corresponding to θi is f(x | θi), i = 0, 1, using a test with rejection region R that satisfies: x ∈ R if f(x | θ1) > k f(x | θ0), and x ∈ R^c if f(x | θ1) ≤ k f(x | θ0), for some k ≥ 0, with α = P_{θ0}(X ∈ R). Any test that satisfies the previous conditions is a UMP level α test.

33 Corollary of Neyman-Pearson Corollary: Consider testing H0: θ = θ0 versus H1: θ = θ1. Suppose T(X) is a sufficient statistic for θ and g(t | θi), i = 0, 1, is the pdf or pmf of T corresponding to θi. Then any test based on T with rejection region S is a UMP level α test if it satisfies: t ∈ S if g(t | θ1) > k g(t | θ0), and t ∈ S^c if g(t | θ1) ≤ k g(t | θ0), with α = P_{θ0}(T ∈ S).

34 Application Suppose X has an exponential distribution with rate λ. We wish to test the hypothesis that λ = 1 against the alternative that λ = 2. Suppose X has a Gaussian distribution with mean µ and known variance. We wish to test the hypothesis that µ = µ0 against the alternative that µ = µ1.

35 LRT and UMP Test Theorem (UMP tests): If the LR test of size α for H0: θ = θ0 versus the simple alternative H1: θ = θ1 is the same test for all θ1 ∈ Θ1 (for example Θ1 = (θ1, ∞)), then it is the UMP test of size α for H0: θ = θ0 against the composite alternative H1: θ ∈ Θ1. Otherwise no UMP test of size α of these hypotheses exists. Remark: So whenever a UMP test exists for a simple H0, it is an LR test.

36 Generalized Likelihood Ratio Tests We have been looking at really simple problems, typically with just one parameter, but in the real world of applied statistics we constantly deal with models having many parameters. In that context we never have a simple null hypothesis. Our null hypotheses often assert that one of the parameters takes a specific value, but such a hypothesis is not simple, since being simple would entail specifying the values of all the parameters.

37 Likelihood Ratio Test Definition: The likelihood ratio test statistic for testing H0: θ ∈ Θ0 versus H1: θ ∈ Θ0^c is λ(x) = sup_{Θ0} L(θ | x) / sup_{Θ} L(θ | x), where Θ = Θ0 ∪ Θ0^c. Definition: The likelihood ratio test (LRT) is any test that has a rejection region of the form {x : λ(x) ≤ c}, where c is any number satisfying 0 ≤ c ≤ 1. This should be reduced to the simplest possible form.

38 Likelihood Ratio Tests The rationale behind LRTs may best be understood in the situation in which f(x | θ) is the probability distribution of a discrete random variable. In this case, the numerator of λ(x) is the maximum probability of the observed sample, the maximum being computed over parameters in the null hypothesis. The denominator of λ(x) is the maximum probability of the observed sample over all possible parameters.

39 Likelihood Ratio Tests The ratio of these two maxima is small if there are parameter points in H1 for which the observed sample is much more likely than for any parameter in H0. In this situation, the LRT criterion says H0 should be rejected and H1 accepted as true.

40 Relation between LRT and MLE Suppose the MLE of θ exists. Let θ̂0 be the MLE of θ in the null set Θ0 (restricted maximization) and θ̂ be the MLE of θ in the full set Θ (unrestricted maximization). Then the LRT statistic, a function of x (not θ), is λ(x) = sup_{Θ0} L(θ | x) / sup_{Θ} L(θ | x) = L(θ̂0 | x) / L(θ̂ | x). In R = {x : λ(x) ≤ c}, a different c gives a different rejection region and hence a different test.

41 LRT and Sufficiency Theorem: If T(X) is a sufficient statistic for θ, λ*(t) is the LRT statistic based on T, and λ(x) is the LRT statistic based on x, then λ*(T(x)) = λ(x) for every x in the sample space. Thus the simplified expression for λ(x) should depend on x only through T(x) if T(X) is a sufficient statistic for θ.

42 Example: Likelihood Ratio Test Let X1, X2, ..., Xn be a random sample from an exponential population with pdf f(x | θ) = exp(−(x − θ)) for x ≥ θ, and f(x | θ) = 0 for x < θ. The likelihood function is L(θ | x) = exp(−Σi xi + nθ) for θ ≤ x(1), and L(θ | x) = 0 for θ > x(1).

43 Example: Likelihood Ratio Test Consider testing H0: θ ≤ θ0 versus H1: θ > θ0, where θ0 is a specified value. The likelihood function is an increasing function of θ on −∞ < θ ≤ x(1). The likelihood ratio test statistic is λ(x) = 1 for x(1) ≤ θ0, and λ(x) = exp(−n(x(1) − θ0)) for x(1) > θ0. An LRT, a test that rejects H0 if λ(x) ≤ c, is a test with rejection region {x : x(1) ≥ θ0 − (log c)/n}. Note that the rejection region depends on the sample only through the sufficient statistic X(1).
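The rejection-region algebra for this shifted-exponential LRT can be checked numerically. A small sketch with made-up values (true shift θ = 0.3, testing θ0 = 0 at cutoff c = 0.05); the function names are illustrative:

```python
import math
import random

def lrt_stat(xs, theta0):
    """LRT statistic: lambda(x) = 1 if x_(1) <= theta0,
    else exp(-n * (x_(1) - theta0))."""
    x1, n = min(xs), len(xs)
    return 1.0 if x1 <= theta0 else math.exp(-n * (x1 - theta0))

def cutoff(c, n, theta0):
    """Reject H0 when lambda(x) <= c, equivalently x_(1) >= theta0 - log(c)/n."""
    return theta0 - math.log(c) / n

random.seed(0)
xs = [0.3 + random.expovariate(1.0) for _ in range(20)]  # shifted exponential sample
lam = lrt_stat(xs, theta0=0.0)
# The two forms of the rejection rule agree on this sample:
print(lam <= 0.05, min(xs) >= cutoff(0.05, len(xs), 0.0))
```

Since λ(x) depends on the data only through min(xs) = x(1), the rule "reject when λ(x) ≤ c" and "reject when x(1) ≥ θ0 − (log c)/n" are the same test.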

44 Likelihood Ratio Tests and the Neyman-Pearson Lemma The Neyman-Pearson lemma asserts that, in general, a best critical region can be found by finding the n-dimensional points in the sample space for which the likelihood ratio is smaller than some constant.

45 Monotone Likelihood Ratio (MLR) Definition: A family of pdfs or pmfs {g(t; θ) : θ ∈ Θ} for a univariate random variable T with real-valued parameter θ has a monotone likelihood ratio (MLR) if, for every θ2 > θ1, g(t; θ2)/g(t; θ1) is an increasing function of t on {t : g(t; θ1) > 0 or g(t; θ2) > 0}.

46 Monotone Likelihood Ratio (MLR) Theorem: Suppose that X has density f(x; θ) with MLR in T(x). Then:
1. There exists a UMP level α test of H0: θ ≤ θ0 versus H1: θ > θ0, which is of the form C(x) = {T(x) ≥ k}.
2. β(θ) is increasing in θ.
3. For all θ*, this same test is the UMP level α* = β(θ*) test of H0: θ ≤ θ* versus H1: θ > θ*.
4. For all θ < θ0, the test of (1) minimizes β(θ) among all tests of size α.

47 N(µ, σ²), with σ² known Let (X1, ..., Xn) be a random sample of i.i.d. random variables distributed as N(µ, σ²), σ² known.
H0: µ = µ0 vs H1: µ = µ1, with µ0 > µ1: R = {(X̄ − µ0)/(σ/√n) ≤ −z_{1-α}}
H0: µ = µ0 vs H1: µ < µ0: R = {(X̄ − µ0)/(σ/√n) ≤ −z_{1-α}}
H0: µ ≥ µ0 vs H1: µ < µ0: R = {(X̄ − µ0)/(σ/√n) ≤ −z_{1-α}}

48 N(µ, σ²), with σ² known Let (X1, ..., Xn) be a random sample of i.i.d. random variables distributed as N(µ, σ²), σ² known.
H0: µ = µ0 vs H1: µ = µ1, with µ0 < µ1: R = {(X̄ − µ0)/(σ/√n) ≥ z_{1-α}}
H0: µ = µ0 vs H1: µ > µ0: R = {(X̄ − µ0)/(σ/√n) ≥ z_{1-α}}
H0: µ ≤ µ0 vs H1: µ > µ0: R = {(X̄ − µ0)/(σ/√n) ≥ z_{1-α}}

49 N(µ, σ²), with σ² known Let (X1, ..., Xn) be a random sample of i.i.d. random variables distributed as N(µ, σ²), σ² known.
H0: µ = µ0 vs H1: µ ≠ µ0: R = {|X̄ − µ0|/(σ/√n) ≥ z_{1-α/2}}

50 N(µ, σ²), with σ² unknown Let (X1, ..., Xn) be a random sample of i.i.d. random variables distributed as N(µ, σ²), σ² unknown.
H0: µ = µ0 vs H1: µ = µ1, with µ0 > µ1: R = {(X̄ − µ0)/(S/√n) ≤ −t^{n-1}_{1-α}}
H0: µ = µ0 vs H1: µ < µ0: R = {(X̄ − µ0)/(S/√n) ≤ −t^{n-1}_{1-α}}
H0: µ ≥ µ0 vs H1: µ < µ0: R = {(X̄ − µ0)/(S/√n) ≤ −t^{n-1}_{1-α}}

51 N(µ, σ²), with σ² unknown Let (X1, ..., Xn) be a random sample of i.i.d. random variables distributed as N(µ, σ²), σ² unknown.
H0: µ = µ0 vs H1: µ = µ1, with µ0 < µ1: R = {(X̄ − µ0)/(S/√n) ≥ t^{n-1}_{1-α}}
H0: µ = µ0 vs H1: µ > µ0: R = {(X̄ − µ0)/(S/√n) ≥ t^{n-1}_{1-α}}
H0: µ ≤ µ0 vs H1: µ > µ0: R = {(X̄ − µ0)/(S/√n) ≥ t^{n-1}_{1-α}}

52 N(µ, σ²), with σ² unknown Let (X1, ..., Xn) be a random sample of i.i.d. random variables distributed as N(µ, σ²), σ² unknown.
H0: µ = µ0 vs H1: µ ≠ µ0: R = {|X̄ − µ0|/(S/√n) ≥ t^{n-1}_{1-α/2}}
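The known-σ rejection regions above translate directly into code. A minimal stdlib sketch (the data are made up for illustration); for the σ-unknown case one would replace the normal quantile with a t_{n-1} quantile (for example via scipy.stats.t.ppf) and σ with S:

```python
from statistics import NormalDist, mean

def z_test(xs, mu0, sigma, alternative, alpha=0.05):
    """One-sample z-test for the mean with known sigma.

    alternative: 'less', 'greater', or 'two-sided'
    (H1: mu < mu0, mu > mu0, mu != mu0).  Returns (z, reject)."""
    n = len(xs)
    z = (mean(xs) - mu0) / (sigma / n ** 0.5)
    q = NormalDist().inv_cdf
    if alternative == "less":
        return z, z <= -q(1 - alpha)
    if alternative == "greater":
        return z, z >= q(1 - alpha)
    return z, abs(z) >= q(1 - alpha / 2)

# Hypothetical sample with n = 25 and mean 0.5, testing mu0 = 0, sigma = 1
xs = [0.5] * 25
print(z_test(xs, mu0=0.0, sigma=1.0, alternative="greater"))
```

Here z = 0.5/(1/√25) = 2.5, which exceeds z_{0.95} ≈ 1.645, so the one-sided test rejects.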

53 Method of Inversion There is a one-to-one correspondence between tests and confidence intervals. Hypothesis testing: fix the parameter; ask what sample values (in the appropriate region) are consistent with that fixed value. Confidence set: fix the sample value; ask what parameter values make this sample most plausible. For each θ0 ∈ Θ, let A(θ0) be the acceptance region of a level α test of H0: θ = θ0. Define the set C(x) = {θ0 : x ∈ A(θ0)}. Then C(x) is a (1 − α) confidence set.

54 Hypothesis Testing and Confidence Intervals In general, inverting the acceptance region of a two-sided test gives a two-sided interval, and inverting the acceptance region of a one-sided test gives an interval open at one end. Theorem: Let the acceptance region of a two-sided test be of the form A(θ) = {x : c1(θ) ≤ T(x) ≤ c2(θ)}, and let the cutoffs be symmetric, that is, P_θ(T(X) > c2(θ)) = α/2 and P_θ(T(X) < c1(θ)) = α/2. If T has the MLR property, then both c1(θ) and c2(θ) are increasing in θ.
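For the normal mean with known σ, this inversion can be written out directly. A sketch with made-up numbers (x̄ = 0.5, σ = 1, n = 25); the function name is illustrative:

```python
from statistics import NormalDist

def invert_z_test(xbar, sigma, n, alpha=0.05):
    """Invert the two-sided z-test acceptance region
    A(mu0) = {x : |xbar - mu0| / (sigma/sqrt(n)) <= z_{1-alpha/2}}
    to obtain the (1 - alpha) confidence set {mu0 : x in A(mu0)}."""
    se = sigma / n ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return xbar - z * se, xbar + z * se

lo, hi = invert_z_test(xbar=0.5, sigma=1.0, n=25)
print(round(lo, 3), round(hi, 3))
```

The resulting interval x̄ ± z_{1-α/2} σ/√n is exactly the set of µ0 values the two-sided test would not reject.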

55 Asymptotic Tests Suppose we wish to test H0: θ = θ0 against H1: θ ≠ θ0. The likelihood-based approach gives rise to several possible tests. Let log L(θ) denote the log-likelihood and θ̂n the consistent root of the likelihood equation. Intuitively, the farther θ0 is from θ̂n, the stronger the evidence against the null hypothesis. But how far is far enough? Note that if θ0 is close to θ̂n, then log L(θ0) should also be close to log L(θ̂n), and (log L)′(θ0) should be close to (log L)′(θ̂n) = 0.

56 Asymptotic Tests If we base a test upon the value of (θ̂n − θ0), we obtain a Wald test. If we base a test upon the value of log L(θ̂n) − log L(θ0), we obtain a likelihood ratio test. If we base a test upon the value of (log L)′(θ0), we obtain a (Rao) score test.

57 Three Asymptotic Tests (figure)

58 Asymptotic Facts We argued that under certain regularity conditions, the following facts are true under H0:
√n (θ̂n − θ0) → N(0, 1/I(θ0))
(1/√n)(log L)′(θ0) → N(0, I(θ0))
−(1/n)(log L)″(θ0) → I(θ0)

59 Wald Test Under certain regularity conditions, the maximum likelihood estimator θ̂ has approximately, in large samples, a (multivariate) normal distribution with mean equal to the true parameter value and variance-covariance matrix given by the inverse of the information matrix, so that θ̂ ≈ N(θ0, 1/(n I(θ0))).

60 Wald Test Wald test statistic (parameter θ scalar): (θ̂ − θ0)√(n I(θ0)) ≈ N(0, 1). If Var(θ̂) is unknown, a consistent estimator can be used: (θ̂ − θ0)√(n I(θ̂)) ≈ N(0, 1). More generally, (θ̂ − θ0)/√(Var(θ̂)) ≈ N(0, 1).

61 Wald Test When θ is a parameter of dimension p, this result provides a basis for constructing tests of hypotheses and confidence regions. For example, under the hypothesis H0: θ = θ0 for a fixed value θ0, the quadratic form W = (θ̂ − θ0)′ Var⁻¹(θ̂)(θ̂ − θ0) has approximately, in large samples, a chi-squared distribution with p degrees of freedom.

62 Score Test Under some regularity conditions the score itself has an asymptotic normal distribution with mean 0 and variance equal to the information, so that u(θ) ≈ N(0, I_n(θ)).

63 Score Test Statistic u(θ0) ≈ N(0, I_n(θ0)), so (∂ log L(θ | x1, ..., xn)/∂θ evaluated at θ0) / √(n I(θ0)) ≈ N(0, 1).

64 Score Test When θ is a parameter of dimension p: under some regularity conditions the score itself has an asymptotic normal distribution with mean 0 and variance-covariance matrix equal to the information matrix, so that u(θ) ≈ N_p(0, I_n(θ)).

65 Score Test This result provides another basis for constructing tests of hypotheses and confidence regions. For example, under H0: θ = θ0 for a fixed value θ0, the quadratic form Q = u(θ0)′ I⁻¹(θ0) u(θ0) has approximately, in large samples, a chi-squared distribution with p degrees of freedom.

66 Likelihood Ratio Test Under certain regularity conditions, minus twice the log of the likelihood ratio has approximately, in large samples, a chi-squared distribution: −2 log λ = 2 log L(θ̂; x) − 2 log L(θ0; x). In case θ is a scalar, −2 log λ has a chi-squared distribution with 1 degree of freedom.

67 Likelihood Ratio Test Under certain regularity conditions, minus twice the log of the likelihood ratio has approximately, in large samples, a chi-squared distribution with degrees of freedom equal to the difference in the number of parameters between the two models. Thus −2 log λ = 2 log L(θ̂_{ω2}; x) − 2 log L(θ̂_{ω1}; x), where the degrees of freedom are ν = dim(ω2) − dim(ω1), the number of parameters in the larger model ω2 minus the number of parameters in the smaller model ω1.

68 The Trinity of Tests The LR, Wald, and score tests (the trinity of test statistics) require different models to be estimated. The LR test requires both the restricted and unrestricted models to be estimated. The Wald test requires only the unrestricted model to be estimated. The score test requires only the restricted model to be estimated. The applicability of each test then depends on the nature of the hypotheses. For H0: θ = θ0, the restricted model is trivial to estimate, so the LR or score test might be preferred. For H0: θ > θ0, the restricted model is a constrained maximization problem, so the Wald test might be preferred.

69 Comparing the Tests The score test does not require θ̂n, whereas the other two tests do. The Wald test is most easily interpretable and yields immediate confidence intervals. The score test and likelihood ratio test are invariant under reparameterization, whereas the Wald test is not.

70 Pareto Let (X1, ..., Xn) be a random sample of i.i.d. random variables distributed as a Pareto distribution with parameters α and x_m: f(x; α, x_m) = α x_m^α / x^{α+1} for x ≥ x_m.

71 Pareto The Pareto distribution is used to describe the distribution of income in the top portion of the income distribution. Therefore, if x is income, the pdf is defined for incomes in excess of x_m; it is usually used to describe income in higher income groups. E(X) = α x_m/(α − 1). Consider a sample of 100 families with incomes in excess of $50,000, and suppose we want to test H0: α = 5 versus the alternative H1: α ≠ 5. Suppose (x1, ..., x100) has an observed sample mean of $60,000 and an observed mean of the logarithms equal to log(50000) + 0.25 ≈ 11.07 (so that, as computed below, the MLE is α̂ = 4).

72 Pareto Observe that the joint pdf of X = (X1, ..., Xn) is f(x; α, x_m) = ∏_{i=1}^n α x_m^α / x_i^{α+1} = α^n x_m^{nα} / ∏_{i=1}^n x_i^{α+1} = g(t, α) h(x), where t = ∏_{i=1}^n x_i, g(t, α) = α^n x_m^{nα} t^{−(α+1)}, and h(x) = 1. By the factorization theorem, T(X) = ∏_{i=1}^n X_i is sufficient for α.

73 Pareto The log-likelihood function is l(α, x_m) = n log α + n α log(x_m) − (α + 1) Σ_{i=1}^n log x_i. Thus ∂l(α, x_m)/∂α = n/α + n log(x_m) − Σ_{i=1}^n log x_i. Solving ∂l(α, x_m)/∂α = 0, the MLE of α is α̂ = n / (Σ_{i=1}^n log x_i − n log(x_m)) = 1 / (mean of log x_i − log(x_m)). Here α̂ = 4.

74 Pareto ∂l(α, x_m)/∂α = n/α + n log(x_m) − Σ_{i=1}^n log x_i, and ∂²l(α, x_m)/∂α² = −n/α².

75 Pareto From ∂²l(α, x_m)/∂α² = −n/α², the information is I_n(α) = n/α², so Var(α̂) ≈ α²/n = 1/I_n(α). Estimated at the MLE: 1/I_n(α̂) = α̂²/n = 16/100 = 0.16.

76 Pareto: Wald Test α̂ ≈ N(α0, 1/(n I(α0))), so (α̂ − α0)² I_n(α0) ≈ χ²_1. Observed value of the test statistic: (4 − 5)² × (100/25) = 4 (equivalently, z = (α̂ − α0)√(I_n(α0)) = −2); p-value = P(χ²_1 ≥ 4) ≈ 0.046.

77 Pareto: Score Test l′(α0) ≈ N(0, n I(α0)). Observed value of the test statistic: l′(α0) = n/α0 + n log(x_m) − Σ_{i=1}^n log x_i = 100/5 + 100 log(50000) − Σ log x_i = 20 − 25 = −5. Standardizing, l′(α0)/√(n I(α0)) = −5/2 = −2.5; p-value ≈ 0.012.

78 Pareto: Likelihood Ratio Test l(α) = n log α + n α log(x_m) − (α + 1) Σ_{i=1}^n log x_i. Then 2(l(α̂) − l(α0)) = 2(n log(α̂/α0) + (α̂ − α0)(n log(x_m) − Σ log x_i)) = 2(100 log(4/5) + 25) ≈ 5.37; p-value = P(χ²_1 ≥ 5.37) ≈ 0.020.

79 Pareto: Large Sample Tests Wald test statistic: (n/(Σ_{i=1}^n log x_i − n log(x_m)) − α0) / √(α0²/n), approximate distribution N(0, 1). Score test statistic: (n/α0 + n log(x_m) − Σ_{i=1}^n log x_i) / √(n/α0²), approximate distribution N(0, 1).

80 Pareto: Large Sample Tests LRT statistic: 2(n log(α̂/α0) + (α̂ − α0)(n log(x_m) − Σ_{i=1}^n log x_i)), approximate distribution χ²_1.
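The three Pareto test statistics can be reproduced in a few lines. This sketch encodes the slides' setup (n = 100, x_m = 50000, with the mean of log x_i chosen so that the MLE is α̂ = 4, as on the slides):

```python
import math

n = 100
log_xm = math.log(50000)
S = n * (log_xm + 0.25)        # sum of log x_i implied by alpha-hat = 4
a_hat = n / (S - n * log_xm)   # MLE: 1 / (mean of log x_i - log x_m)
a0 = 5.0                       # null value

# Wald: (a_hat - a0)^2 * I_n(a0), with I_n(a0) = n / a0^2
wald = (a_hat - a0) ** 2 * n / a0 ** 2

# Score: l'(a0) / sqrt(n I(a0))
score_z = (n / a0 + n * log_xm - S) / math.sqrt(n / a0 ** 2)

# LRT: 2 * (l(a_hat) - l(a0))
def loglik(a):
    return n * math.log(a) + n * a * log_xm - (a + 1) * S

lrt = 2 * (loglik(a_hat) - loglik(a0))

print(round(a_hat, 4), round(wald, 4), round(score_z, 4), round(lrt, 4))
```

The three statistics (Wald 4, score z = −2.5 so χ² = 6.25, LRT ≈ 5.37) disagree numerically in finite samples but share the same χ²_1 reference distribution.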

81 p-values A common complaint about hypothesis testing is that the choice of the significance level α is essentially arbitrary. To make matters worse, the conclusions (reject or do not reject H0) depend on what value of α is used.

82 p-values Definition: A p-value p(X) is a test statistic satisfying 0 ≤ p(x) ≤ 1 for every sample point x. Small values of p(x) give evidence that H1 is true. Theorem: Let W(X) be a test statistic such that large values of W give evidence that H1 is true. For each sample point x, define p(x) = sup_{θ ∈ Θ0} P_θ(W(X) ≥ W(x)); then p(X) is a valid p-value.

83 Steps to Find a p-value
1. Let T be the test statistic.
2. Compute the value of T using the sample x1, ..., xn; say it is a.
3. The p-value is given by: p-value = P(T < a | H0) for a lower-tail test; P(T > a | H0) for an upper-tail test; P(|T| > |a| | H0) for a two-tail test.

84 p-values The p-value is the smallest level at which H0 would be rejected. The p-value is the probability of obtaining evidence against H0 at least as strong as that actually observed, if H0 were true. The smaller the p-value, the stronger the evidence that H0 is false.

85 Example: Psychiatrist Problem A psychiatrist wants to determine if a small amount of coke decreases the reaction time in adults. The reaction time for a specified test is assumed normally distributed with mean µ = 0.15 seconds and σ = 0.2. A sample of 81 people were given a small amount of coke, and their reaction times averaged 0.11 seconds. Does this sample result support the psychiatrist's hypothesis?

86 Example: Psychiatrist Problem The reaction time is expected to decrease. What are the null and alternative hypotheses? H0: µ = 0.15, H1: µ < 0.15. If the null hypothesis were true, how likely would it be to observe a sample mean of 0.11 from a sampling distribution centered at 0.15 with standard deviation σ = 0.2?
P(X̄ < 0.11 | µ = 0.15) = P(Z < (0.11 − 0.15)/(0.2/√81)) = P(Z < −1.8) = 0.0359
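The lower-tail probability for this example is a one-line computation. A stdlib sketch:

```python
from statistics import NormalDist

mu0, sigma, n, xbar = 0.15, 0.2, 81, 0.11
se = sigma / n ** 0.5          # standard error: 0.2 / 9
z = (xbar - mu0) / se          # standardized sample mean
p_value = NormalDist().cdf(z)  # lower-tail test: P(Xbar < 0.11 | mu = 0.15)
print(round(z, 2), round(p_value, 4))
```

The z-score is −1.8, giving a lower-tail p-value of about 0.036, which drives the decision discussed on the following slides.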

87 Example: Psychiatrist Problem Hypothesis testing, by analogy: I believe the population mean age is 50 (hypothesis). A random sample gives mean X̄ = 20. Reject the hypothesis: not close!

88 Example: Psychiatrist Problem Hypothesis testing process: I believe the population reaction time is 0.15 (hypothesis); the sample mean is 0.11. Is x̄ = 0.11 plausible if µ = 0.15? If no, REJECT the hypothesis; if yes, ACCEPT it.

89 Example: Psychiatrist Problem Sampling distribution (figure): it is unlikely that we would get a sample mean of this value if in fact µ = 0.15 were the population mean. Therefore, we reject the null hypothesis that µ = 0.15.

90 Example: Psychiatrist Problem: p-value The p-value represents the probability of observing a sample mean at least as extreme as 0.11: the probability of obtaining a test statistic as extreme as, or more extreme than, the actual sample value, given that H0 is true. If we think the p-value is too low to believe the observed test statistic was obtained by chance alone, then we reject chance (reject the null hypothesis). Otherwise, we fail to reject chance and do not reject the null hypothesis.

91 Example: Psychiatrist Problem: Decision p-value = 0.0359: the probability of observing a sample mean more extreme than 0.11. If α = 0.05, we reject the null hypothesis: coke does have an effect on reaction time (coke decreases reaction time). If α = 0.01, we DO NOT reject the null hypothesis: there is not sufficient evidence that coke has an effect on reaction time.

92 Application to Poisson Distribution Let (X1, ..., Xn) be a random sample of i.i.d. random variables distributed as a Poisson distribution with parameter λ: f(x; λ) = λ^x exp(−λ)/x!. We wish to test H0: λ = 6 versus H1: λ ≠ 6. We observe a random sample (X1, ..., X100) such that Σi xi = 550.

93 Application to Poisson Distribution Observe that the joint pmf of X = (X1, ..., Xn) is f(x1, ..., xn; λ) = ∏i λ^{xi} exp(−λ)/xi!, so L(λ) = λ^{Σi xi} exp(−nλ)/∏i xi! and l(λ) = Σi xi log(λ) − nλ − log(∏i xi!). Then dl(λ)/dλ = Σi xi/λ − n and d²l(λ)/dλ² = −Σi xi/λ². The MLE is λ̂ = x̄ = 5.5.

94 Application to Poisson Distribution l(λ) = Σi xi log(λ) − nλ − log(∏i xi!), dl(λ)/dλ = Σi xi/λ − n, d²l(λ)/dλ² = −Σi xi/λ². The information is I_n(λ) = −E(d²l(λ)/dλ²) = n/λ, and λ̂ = x̄.

95 Poisson: Large Sample Test Wald test statistic (information evaluated at λ0): (x̄ − λ0)/√(λ0/n), approximate distribution N(0, 1). Observed value: (5.5 − 6)/√(6/100) ≈ −2.04.

96 Poisson: Large Sample Test Score test statistic: (Σi xi − nλ0)/√(nλ0), approximate distribution N(0, 1).

97 Poisson: Large Sample Test LRT statistic: 2(l(λ̂) − l(λ0)) = 2(Σi xi log(λ̂/λ0) − n(λ̂ − λ0)), approximate distribution χ²_1. Observed value: Λ(x) = 2(Σi xi log(x̄/λ0) − n(x̄ − λ0)) = 2(550 log(5.5/6) − 100(5.5 − 6)) ≈ 4.29; p-value = P(χ²_1 ≥ 4.29) ≈ 0.038.
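The Poisson numbers can be checked in a few lines. A sketch of the three large-sample statistics as written on the slides; note that because the Wald form here evaluates the information at λ0 rather than λ̂, it coincides numerically with the score statistic:

```python
import math

n, sum_x, lam0 = 100, 550, 6.0
xbar = sum_x / n               # MLE: lambda-hat = 5.5

# Wald form from the slide, information evaluated at lambda_0
wald_z = (xbar - lam0) / math.sqrt(lam0 / n)
# Score: l'(lambda_0) / sqrt(I_n(lambda_0))
score_z = (sum_x - n * lam0) / math.sqrt(n * lam0)
# LRT: 2 * (l(lambda-hat) - l(lambda_0)); the log(prod x_i!) term cancels
lrt = 2 * (sum_x * math.log(xbar / lam0) - n * (xbar - lam0))

print(round(wald_z, 3), round(score_z, 3), round(lrt, 3))
```

All three statistics point the same way: at α = 0.05 the two-sided test rejects H0: λ = 6 (|z| ≈ 2.04, LRT ≈ 4.29 > 3.84).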


Chapter 4. Theory of Tests. 4.1 Introduction Chapter 4 Theory of Tests 4.1 Introduction Parametric model: (X, B X, P θ ), P θ P = {P θ θ Θ} where Θ = H 0 +H 1 X = K +A : K: critical region = rejection region / A: acceptance region A decision rule

More information

STA 732: Inference. Notes 2. Neyman-Pearsonian Classical Hypothesis Testing B&D 4

STA 732: Inference. Notes 2. Neyman-Pearsonian Classical Hypothesis Testing B&D 4 STA 73: Inference Notes. Neyman-Pearsonian Classical Hypothesis Testing B&D 4 1 Testing as a rule Fisher s quantification of extremeness of observed evidence clearly lacked rigorous mathematical interpretation.

More information

Lecture 21: October 19

Lecture 21: October 19 36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 21: October 19 21.1 Likelihood Ratio Test (LRT) To test composite versus composite hypotheses the general method is to use

More information

14.30 Introduction to Statistical Methods in Economics Spring 2009

14.30 Introduction to Statistical Methods in Economics Spring 2009 MIT OpenCourseWare http://ocw.mit.edu.30 Introduction to Statistical Methods in Economics Spring 009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. .30

More information

Economics 520. Lecture Note 19: Hypothesis Testing via the Neyman-Pearson Lemma CB 8.1,

Economics 520. Lecture Note 19: Hypothesis Testing via the Neyman-Pearson Lemma CB 8.1, Economics 520 Lecture Note 9: Hypothesis Testing via the Neyman-Pearson Lemma CB 8., 8.3.-8.3.3 Uniformly Most Powerful Tests and the Neyman-Pearson Lemma Let s return to the hypothesis testing problem

More information

Statistical Inference

Statistical Inference Statistical Inference Classical and Bayesian Methods Revision Class for Midterm Exam AMS-UCSC Th Feb 9, 2012 Winter 2012. Session 1 (Revision Class) AMS-132/206 Th Feb 9, 2012 1 / 23 Topics Topics We will

More information

Central Limit Theorem ( 5.3)

Central Limit Theorem ( 5.3) Central Limit Theorem ( 5.3) Let X 1, X 2,... be a sequence of independent random variables, each having n mean µ and variance σ 2. Then the distribution of the partial sum S n = X i i=1 becomes approximately

More information

Lecture 17: Likelihood ratio and asymptotic tests

Lecture 17: Likelihood ratio and asymptotic tests Lecture 17: Likelihood ratio and asymptotic tests Likelihood ratio When both H 0 and H 1 are simple (i.e., Θ 0 = {θ 0 } and Θ 1 = {θ 1 }), Theorem 6.1 applies and a UMP test rejects H 0 when f θ1 (X) f

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2008 Prof. Gesine Reinert 1 Data x = x 1, x 2,..., x n, realisations of random variables X 1, X 2,..., X n with distribution (model)

More information

F79SM STATISTICAL METHODS

F79SM STATISTICAL METHODS F79SM STATISTICAL METHODS SUMMARY NOTES 9 Hypothesis testing 9.1 Introduction As before we have a random sample x of size n of a population r.v. X with pdf/pf f(x;θ). The distribution we assign to X is

More information

parameter space Θ, depending only on X, such that Note: it is not θ that is random, but the set C(X).

parameter space Θ, depending only on X, such that Note: it is not θ that is random, but the set C(X). 4. Interval estimation The goal for interval estimation is to specify the accurary of an estimate. A 1 α confidence set for a parameter θ is a set C(X) in the parameter space Θ, depending only on X, such

More information

Hypothesis testing: theory and methods

Hypothesis testing: theory and methods Statistical Methods Warsaw School of Economics November 3, 2017 Statistical hypothesis is the name of any conjecture about unknown parameters of a population distribution. The hypothesis should be verifiable

More information

40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology

40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology Singapore University of Design and Technology Lecture 9: Hypothesis testing, uniformly most powerful tests. The Neyman-Pearson framework Let P be the family of distributions of concern. The Neyman-Pearson

More information

Chapter 8: Hypothesis Testing Lecture 9: Likelihood ratio tests

Chapter 8: Hypothesis Testing Lecture 9: Likelihood ratio tests Chapter 8: Hypothesis Testing Lecture 9: Likelihood ratio tests Throughout this chapter we consider a sample X taken from a population indexed by θ Θ R k. Instead of estimating the unknown parameter, we

More information

STAT 135 Lab 5 Bootstrapping and Hypothesis Testing

STAT 135 Lab 5 Bootstrapping and Hypothesis Testing STAT 135 Lab 5 Bootstrapping and Hypothesis Testing Rebecca Barter March 2, 2015 The Bootstrap Bootstrap Suppose that we are interested in estimating a parameter θ from some population with members x 1,...,

More information

DA Freedman Notes on the MLE Fall 2003

DA Freedman Notes on the MLE Fall 2003 DA Freedman Notes on the MLE Fall 2003 The object here is to provide a sketch of the theory of the MLE. Rigorous presentations can be found in the references cited below. Calculus. Let f be a smooth, scalar

More information

Statistics and econometrics

Statistics and econometrics 1 / 36 Slides for the course Statistics and econometrics Part 10: Asymptotic hypothesis testing European University Institute Andrea Ichino September 8, 2014 2 / 36 Outline Why do we need large sample

More information

Loglikelihood and Confidence Intervals

Loglikelihood and Confidence Intervals Stat 504, Lecture 2 1 Loglikelihood and Confidence Intervals The loglikelihood function is defined to be the natural logarithm of the likelihood function, l(θ ; x) = log L(θ ; x). For a variety of reasons,

More information

Topic 19 Extensions on the Likelihood Ratio

Topic 19 Extensions on the Likelihood Ratio Topic 19 Extensions on the Likelihood Ratio Two-Sided Tests 1 / 12 Outline Overview Normal Observations Power Analysis 2 / 12 Overview The likelihood ratio test is a popular choice for composite hypothesis

More information

INTERVAL ESTIMATION AND HYPOTHESES TESTING

INTERVAL ESTIMATION AND HYPOTHESES TESTING INTERVAL ESTIMATION AND HYPOTHESES TESTING 1. IDEA An interval rather than a point estimate is often of interest. Confidence intervals are thus important in empirical work. To construct interval estimates,

More information

Lecture 32: Asymptotic confidence sets and likelihoods

Lecture 32: Asymptotic confidence sets and likelihoods Lecture 32: Asymptotic confidence sets and likelihoods Asymptotic criterion In some problems, especially in nonparametric problems, it is difficult to find a reasonable confidence set with a given confidence

More information

Recall that in order to prove Theorem 8.8, we argued that under certain regularity conditions, the following facts are true under H 0 : 1 n

Recall that in order to prove Theorem 8.8, we argued that under certain regularity conditions, the following facts are true under H 0 : 1 n Chapter 9 Hypothesis Testing 9.1 Wald, Rao, and Likelihood Ratio Tests Suppose we wish to test H 0 : θ = θ 0 against H 1 : θ θ 0. The likelihood-based results of Chapter 8 give rise to several possible

More information

Preliminary Statistics Lecture 5: Hypothesis Testing (Outline)

Preliminary Statistics Lecture 5: Hypothesis Testing (Outline) 1 School of Oriental and African Studies September 2015 Department of Economics Preliminary Statistics Lecture 5: Hypothesis Testing (Outline) Gujarati D. Basic Econometrics, Appendix A.8 Barrow M. Statistics

More information

TUTORIAL 8 SOLUTIONS #

TUTORIAL 8 SOLUTIONS # TUTORIAL 8 SOLUTIONS #9.11.21 Suppose that a single observation X is taken from a uniform density on [0,θ], and consider testing H 0 : θ = 1 versus H 1 : θ =2. (a) Find a test that has significance level

More information

Summary of Chapters 7-9

Summary of Chapters 7-9 Summary of Chapters 7-9 Chapter 7. Interval Estimation 7.2. Confidence Intervals for Difference of Two Means Let X 1,, X n and Y 1, Y 2,, Y m be two independent random samples of sizes n and m from two

More information

Hypothesis Testing: The Generalized Likelihood Ratio Test

Hypothesis Testing: The Generalized Likelihood Ratio Test Hypothesis Testing: The Generalized Likelihood Ratio Test Consider testing the hypotheses H 0 : θ Θ 0 H 1 : θ Θ \ Θ 0 Definition: The Generalized Likelihood Ratio (GLR Let L(θ be a likelihood for a random

More information

Review. December 4 th, Review

Review. December 4 th, Review December 4 th, 2017 Att. Final exam: Course evaluation Friday, 12/14/2018, 10:30am 12:30pm Gore Hall 115 Overview Week 2 Week 4 Week 7 Week 10 Week 12 Chapter 6: Statistics and Sampling Distributions Chapter

More information

Hypothesis Testing Chap 10p460

Hypothesis Testing Chap 10p460 Hypothesis Testing Chap 1p46 Elements of a statistical test p462 - Null hypothesis - Alternative hypothesis - Test Statistic - Rejection region Rejection Region p462 The rejection region (RR) specifies

More information

Notes on the Multivariate Normal and Related Topics

Notes on the Multivariate Normal and Related Topics Version: July 10, 2013 Notes on the Multivariate Normal and Related Topics Let me refresh your memory about the distinctions between population and sample; parameters and statistics; population distributions

More information

Mathematical statistics

Mathematical statistics October 1 st, 2018 Lecture 11: Sufficient statistic Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation

More information

Mathematics Ph.D. Qualifying Examination Stat Probability, January 2018

Mathematics Ph.D. Qualifying Examination Stat Probability, January 2018 Mathematics Ph.D. Qualifying Examination Stat 52800 Probability, January 2018 NOTE: Answers all questions completely. Justify every step. Time allowed: 3 hours. 1. Let X 1,..., X n be a random sample from

More information

Lecture 26: Likelihood ratio tests

Lecture 26: Likelihood ratio tests Lecture 26: Likelihood ratio tests Likelihood ratio When both H 0 and H 1 are simple (i.e., Θ 0 = {θ 0 } and Θ 1 = {θ 1 }), Theorem 6.1 applies and a UMP test rejects H 0 when f θ1 (X) f θ0 (X) > c 0 for

More information

Statistics 3858 : Maximum Likelihood Estimators

Statistics 3858 : Maximum Likelihood Estimators Statistics 3858 : Maximum Likelihood Estimators 1 Method of Maximum Likelihood In this method we construct the so called likelihood function, that is L(θ) = L(θ; X 1, X 2,..., X n ) = f n (X 1, X 2,...,

More information

Lecture 5: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics. 1 Executive summary

Lecture 5: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics. 1 Executive summary ECE 830 Spring 207 Instructor: R. Willett Lecture 5: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics Executive summary In the last lecture we saw that the likelihood

More information

2. What are the tradeoffs among different measures of error (e.g. probability of false alarm, probability of miss, etc.)?

2. What are the tradeoffs among different measures of error (e.g. probability of false alarm, probability of miss, etc.)? ECE 830 / CS 76 Spring 06 Instructors: R. Willett & R. Nowak Lecture 3: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics Executive summary In the last lecture we

More information

Topic 10: Hypothesis Testing

Topic 10: Hypothesis Testing Topic 10: Hypothesis Testing Course 003, 2016 Page 0 The Problem of Hypothesis Testing A statistical hypothesis is an assertion or conjecture about the probability distribution of one or more random variables.

More information

Composite Hypotheses and Generalized Likelihood Ratio Tests

Composite Hypotheses and Generalized Likelihood Ratio Tests Composite Hypotheses and Generalized Likelihood Ratio Tests Rebecca Willett, 06 In many real world problems, it is difficult to precisely specify probability distributions. Our models for data may involve

More information

Lecture 21. Hypothesis Testing II

Lecture 21. Hypothesis Testing II Lecture 21. Hypothesis Testing II December 7, 2011 In the previous lecture, we dened a few key concepts of hypothesis testing and introduced the framework for parametric hypothesis testing. In the parametric

More information

Lecture 10: Generalized likelihood ratio test

Lecture 10: Generalized likelihood ratio test Stat 200: Introduction to Statistical Inference Autumn 2018/19 Lecture 10: Generalized likelihood ratio test Lecturer: Art B. Owen October 25 Disclaimer: These notes have not been subjected to the usual

More information

Interval Estimation. Chapter 9

Interval Estimation. Chapter 9 Chapter 9 Interval Estimation 9.1 Introduction Definition 9.1.1 An interval estimate of a real-values parameter θ is any pair of functions, L(x 1,..., x n ) and U(x 1,..., x n ), of a sample that satisfy

More information

STAT 135 Lab 6 Duality of Hypothesis Testing and Confidence Intervals, GLRT, Pearson χ 2 Tests and Q-Q plots. March 8, 2015

STAT 135 Lab 6 Duality of Hypothesis Testing and Confidence Intervals, GLRT, Pearson χ 2 Tests and Q-Q plots. March 8, 2015 STAT 135 Lab 6 Duality of Hypothesis Testing and Confidence Intervals, GLRT, Pearson χ 2 Tests and Q-Q plots March 8, 2015 The duality between CI and hypothesis testing The duality between CI and hypothesis

More information

Methods for Statistical Prediction Financial Time Series I. Topic 1: Review on Hypothesis Testing

Methods for Statistical Prediction Financial Time Series I. Topic 1: Review on Hypothesis Testing Methods for Statistical Prediction Financial Time Series I Topic 1: Review on Hypothesis Testing Hung Chen Department of Mathematics National Taiwan University 9/26/2002 OUTLINE 1. Fundamental Concepts

More information

8. Hypothesis Testing

8. Hypothesis Testing FE661 - Statistical Methods for Financial Engineering 8. Hypothesis Testing Jitkomut Songsiri introduction Wald test likelihood-based tests significance test for linear regression 8-1 Introduction elements

More information

STAT 514 Solutions to Assignment #6

STAT 514 Solutions to Assignment #6 STAT 514 Solutions to Assignment #6 Question 1: Suppose that X 1,..., X n are a simple random sample from a Weibull distribution with density function f θ x) = θcx c 1 exp{ θx c }I{x > 0} for some fixed

More information

Final Exam. 1. (6 points) True/False. Please read the statements carefully, as no partial credit will be given.

Final Exam. 1. (6 points) True/False. Please read the statements carefully, as no partial credit will be given. 1. (6 points) True/False. Please read the statements carefully, as no partial credit will be given. (a) If X and Y are independent, Corr(X, Y ) = 0. (b) (c) (d) (e) A consistent estimator must be asymptotically

More information

Cherry Blossom run (1) The credit union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009 there were participants

Cherry Blossom run (1) The credit union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009 there were participants 18.650 Statistics for Applications Chapter 5: Parametric hypothesis testing 1/37 Cherry Blossom run (1) The credit union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009

More information

Chapter 9: Hypothesis Testing Sections

Chapter 9: Hypothesis Testing Sections Chapter 9: Hypothesis Testing Sections 9.1 Problems of Testing Hypotheses 9.2 Testing Simple Hypotheses 9.3 Uniformly Most Powerful Tests Skip: 9.4 Two-Sided Alternatives 9.6 Comparing the Means of Two

More information

Practice Problems Section Problems

Practice Problems Section Problems Practice Problems Section 4-4-3 4-4 4-5 4-6 4-7 4-8 4-10 Supplemental Problems 4-1 to 4-9 4-13, 14, 15, 17, 19, 0 4-3, 34, 36, 38 4-47, 49, 5, 54, 55 4-59, 60, 63 4-66, 68, 69, 70, 74 4-79, 81, 84 4-85,

More information

Statistics Ph.D. Qualifying Exam: Part I October 18, 2003

Statistics Ph.D. Qualifying Exam: Part I October 18, 2003 Statistics Ph.D. Qualifying Exam: Part I October 18, 2003 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. 1 2 3 4 5 6 7 8 9 10 11 12 2. Write your answer

More information

Lecture Testing Hypotheses: The Neyman-Pearson Paradigm

Lecture Testing Hypotheses: The Neyman-Pearson Paradigm Math 408 - Mathematical Statistics Lecture 29-30. Testing Hypotheses: The Neyman-Pearson Paradigm April 12-15, 2013 Konstantin Zuev (USC) Math 408, Lecture 29-30 April 12-15, 2013 1 / 12 Agenda Example:

More information

Hypothesis Testing. A rule for making the required choice can be described in two ways: called the rejection or critical region of the test.

Hypothesis Testing. A rule for making the required choice can be described in two ways: called the rejection or critical region of the test. Hypothesis Testing Hypothesis testing is a statistical problem where you must choose, on the basis of data X, between two alternatives. We formalize this as the problem of choosing between two hypotheses:

More information

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper McGill University Faculty of Science Department of Mathematics and Statistics Part A Examination Statistics: Theory Paper Date: 10th May 2015 Instructions Time: 1pm-5pm Answer only two questions from Section

More information

Mathematical statistics

Mathematical statistics October 4 th, 2018 Lecture 12: Information Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation Chapter

More information

Chapter 6. Hypothesis Tests Lecture 20: UMP tests and Neyman-Pearson lemma

Chapter 6. Hypothesis Tests Lecture 20: UMP tests and Neyman-Pearson lemma Chapter 6. Hypothesis Tests Lecture 20: UMP tests and Neyman-Pearson lemma Theory of testing hypotheses X: a sample from a population P in P, a family of populations. Based on the observed X, we test a

More information

BEST TESTS. Abstract. We will discuss the Neymann-Pearson theorem and certain best test where the power function is optimized.

BEST TESTS. Abstract. We will discuss the Neymann-Pearson theorem and certain best test where the power function is optimized. BEST TESTS Abstract. We will discuss the Neymann-Pearson theorem and certain best test where the power function is optimized. 1. Most powerful test Let {f θ } θ Θ be a family of pdfs. We will consider

More information

Institute of Actuaries of India

Institute of Actuaries of India Institute of Actuaries of India Subject CT3 Probability & Mathematical Statistics May 2011 Examinations INDICATIVE SOLUTION Introduction The indicative solution has been written by the Examiners with the

More information

HYPOTHESIS TESTING: FREQUENTIST APPROACH.

HYPOTHESIS TESTING: FREQUENTIST APPROACH. HYPOTHESIS TESTING: FREQUENTIST APPROACH. These notes summarize the lectures on (the frequentist approach to) hypothesis testing. You should be familiar with the standard hypothesis testing from previous

More information

Introduction to Estimation Methods for Time Series models Lecture 2

Introduction to Estimation Methods for Time Series models Lecture 2 Introduction to Estimation Methods for Time Series models Lecture 2 Fulvio Corsi SNS Pisa Fulvio Corsi Introduction to Estimation () Methods for Time Series models Lecture 2 SNS Pisa 1 / 21 Estimators:

More information

Econ 325: Introduction to Empirical Economics

Econ 325: Introduction to Empirical Economics Econ 325: Introduction to Empirical Economics Chapter 9 Hypothesis Testing: Single Population Ch. 9-1 9.1 What is a Hypothesis? A hypothesis is a claim (assumption) about a population parameter: population

More information

ST495: Survival Analysis: Hypothesis testing and confidence intervals

ST495: Survival Analysis: Hypothesis testing and confidence intervals ST495: Survival Analysis: Hypothesis testing and confidence intervals Eric B. Laber Department of Statistics, North Carolina State University April 3, 2014 I remember that one fateful day when Coach took

More information

Math 152. Rumbos Fall Solutions to Assignment #12

Math 152. Rumbos Fall Solutions to Assignment #12 Math 52. umbos Fall 2009 Solutions to Assignment #2. Suppose that you observe n iid Bernoulli(p) random variables, denoted by X, X 2,..., X n. Find the LT rejection region for the test of H o : p p o versus

More information

The outline for Unit 3

The outline for Unit 3 The outline for Unit 3 Unit 1. Introduction: The regression model. Unit 2. Estimation principles. Unit 3: Hypothesis testing principles. 3.1 Wald test. 3.2 Lagrange Multiplier. 3.3 Likelihood Ratio Test.

More information

Math 494: Mathematical Statistics

Math 494: Mathematical Statistics Math 494: Mathematical Statistics Instructor: Jimin Ding jmding@wustl.edu Department of Mathematics Washington University in St. Louis Class materials are available on course website (www.math.wustl.edu/

More information

Mathematics Qualifying Examination January 2015 STAT Mathematical Statistics

Mathematics Qualifying Examination January 2015 STAT Mathematical Statistics Mathematics Qualifying Examination January 2015 STAT 52800 - Mathematical Statistics NOTE: Answer all questions completely and justify your derivations and steps. A calculator and statistical tables (normal,

More information

EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix)

EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) 1 EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) Taisuke Otsu London School of Economics Summer 2018 A.1. Summation operator (Wooldridge, App. A.1) 2 3 Summation operator For

More information

Lecture 12 November 3

Lecture 12 November 3 STATS 300A: Theory of Statistics Fall 2015 Lecture 12 November 3 Lecturer: Lester Mackey Scribe: Jae Hyuck Park, Christian Fong Warning: These notes may contain factual and/or typographic errors. 12.1

More information

Testing Restrictions and Comparing Models

Testing Restrictions and Comparing Models Econ. 513, Time Series Econometrics Fall 00 Chris Sims Testing Restrictions and Comparing Models 1. THE PROBLEM We consider here the problem of comparing two parametric models for the data X, defined by

More information

Topic 17: Simple Hypotheses

Topic 17: Simple Hypotheses Topic 17: November, 2011 1 Overview and Terminology Statistical hypothesis testing is designed to address the question: Do the data provide sufficient evidence to conclude that we must depart from our

More information

Hypothesis Testing (May 30, 2016)

Hypothesis Testing (May 30, 2016) Ch. 5 Hypothesis Testing (May 30, 2016) 1 Introduction Inference, so far as we have seen, often take the form of numerical estimates, either as single points as confidence intervals. But not always. In

More information

Testing Statistical Hypotheses

Testing Statistical Hypotheses E.L. Lehmann Joseph P. Romano Testing Statistical Hypotheses Third Edition 4y Springer Preface vii I Small-Sample Theory 1 1 The General Decision Problem 3 1.1 Statistical Inference and Statistical Decisions

More information

Stat 710: Mathematical Statistics Lecture 27

Stat 710: Mathematical Statistics Lecture 27 Stat 710: Mathematical Statistics Lecture 27 Jun Shao Department of Statistics University of Wisconsin Madison, WI 53706, USA Jun Shao (UW-Madison) Stat 710, Lecture 27 April 3, 2009 1 / 10 Lecture 27:

More information

MTMS Mathematical Statistics

MTMS Mathematical Statistics MTMS.01.099 Mathematical Statistics Lecture 12. Hypothesis testing. Power function. Approximation of Normal distribution and application to Binomial distribution Tõnu Kollo Fall 2016 Hypothesis Testing

More information

Introduction Large Sample Testing Composite Hypotheses. Hypothesis Testing. Daniel Schmierer Econ 312. March 30, 2007

Introduction Large Sample Testing Composite Hypotheses. Hypothesis Testing. Daniel Schmierer Econ 312. March 30, 2007 Hypothesis Testing Daniel Schmierer Econ 312 March 30, 2007 Basics Parameter of interest: θ Θ Structure of the test: H 0 : θ Θ 0 H 1 : θ Θ 1 for some sets Θ 0, Θ 1 Θ where Θ 0 Θ 1 = (often Θ 1 = Θ Θ 0

More information

Primer on statistics:

Primer on statistics: Primer on statistics: MLE, Confidence Intervals, and Hypothesis Testing ryan.reece@gmail.com http://rreece.github.io/ Insight Data Science - AI Fellows Workshop Feb 16, 018 Outline 1. Maximum likelihood

More information

Estimation MLE-Pandemic data MLE-Financial crisis data Evaluating estimators. Estimation. September 24, STAT 151 Class 6 Slide 1

Estimation MLE-Pandemic data MLE-Financial crisis data Evaluating estimators. Estimation. September 24, STAT 151 Class 6 Slide 1 Estimation September 24, 2018 STAT 151 Class 6 Slide 1 Pandemic data Treatment outcome, X, from n = 100 patients in a pandemic: 1 = recovered and 0 = not recovered 1 1 1 0 0 0 1 1 1 0 0 1 0 1 0 0 1 1 1

More information

Introductory Econometrics. Review of statistics (Part II: Inference)

Introductory Econometrics. Review of statistics (Part II: Inference) Introductory Econometrics Review of statistics (Part II: Inference) Jun Ma School of Economics Renmin University of China October 1, 2018 1/16 Null and alternative hypotheses Usually, we have two competing

More information

Chapter 8 Hypothesis Testing

Chapter 8 Hypothesis Testing Leture 5 for BST 63: Statistial Theory II Kui Zhang, Spring Chapter 8 Hypothesis Testing Setion 8 Introdution Definition 8 A hypothesis is a statement about a population parameter Definition 8 The two

More information

LECTURE 5 HYPOTHESIS TESTING

LECTURE 5 HYPOTHESIS TESTING October 25, 2016 LECTURE 5 HYPOTHESIS TESTING Basic concepts In this lecture we continue to discuss the normal classical linear regression defined by Assumptions A1-A5. Let θ Θ R d be a parameter of interest.

More information

Political Science 236 Hypothesis Testing: Review and Bootstrapping

Political Science 236 Hypothesis Testing: Review and Bootstrapping Political Science 236 Hypothesis Testing: Review and Bootstrapping Rocío Titiunik Fall 2007 1 Hypothesis Testing Definition 1.1 Hypothesis. A hypothesis is a statement about a population parameter The

More information

HT Introduction. P(X i = x i ) = e λ λ x i

HT Introduction. P(X i = x i ) = e λ λ x i MODS STATISTICS Introduction. HT 2012 Simon Myers, Department of Statistics (and The Wellcome Trust Centre for Human Genetics) myers@stats.ox.ac.uk We will be concerned with the mathematical framework

More information

Introductory Econometrics

Introductory Econometrics Session 4 - Testing hypotheses Roland Sciences Po July 2011 Motivation After estimation, delivering information involves testing hypotheses Did this drug had any effect on the survival rate? Is this drug

More information

Master s Written Examination - Solution

Master s Written Examination - Solution Master s Written Examination - Solution Spring 204 Problem Stat 40 Suppose X and X 2 have the joint pdf f X,X 2 (x, x 2 ) = 2e (x +x 2 ), 0 < x < x 2

More information

Principles of Statistics

Principles of Statistics Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 81 Paper 4, Section II 28K Let g : R R be an unknown function, twice continuously differentiable with g (x) M for

More information

Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing

Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing 1 In most statistics problems, we assume that the data have been generated from some unknown probability distribution. We desire

More information

Partitioning the Parameter Space. Topic 18 Composite Hypotheses

Partitioning the Parameter Space. Topic 18 Composite Hypotheses Topic 18 Composite Hypotheses Partitioning the Parameter Space 1 / 10 Outline Partitioning the Parameter Space 2 / 10 Partitioning the Parameter Space Simple hypotheses limit us to a decision between one

More information

Statistics - Lecture One. Outline. Charlotte Wickham 1. Basic ideas about estimation

Statistics - Lecture One. Outline. Charlotte Wickham  1. Basic ideas about estimation Statistics - Lecture One Charlotte Wickham wickham@stat.berkeley.edu http://www.stat.berkeley.edu/~wickham/ Outline 1. Basic ideas about estimation 2. Method of Moments 3. Maximum Likelihood 4. Confidence

More information

Topic 10: Hypothesis Testing

Topic 10: Hypothesis Testing Topic 10: Hypothesis Testing Course 003, 2017 Page 0 The Problem of Hypothesis Testing A statistical hypothesis is an assertion or conjecture about the probability distribution of one or more random variables.

More information

10. Composite Hypothesis Testing. ECE 830, Spring 2014

10. Composite Hypothesis Testing. ECE 830, Spring 2014 10. Composite Hypothesis Testing ECE 830, Spring 2014 1 / 25 In many real world problems, it is difficult to precisely specify probability distributions. Our models for data may involve unknown parameters

More information

Exercises and Answers to Chapter 1

Exercises and Answers to Chapter 1 Exercises and Answers to Chapter The continuous type of random variable X has the following density function: a x, if < x < a, f (x), otherwise. Answer the following questions. () Find a. () Obtain mean

More information