Statistical Hypothesis Testing. Dr. Phillip YAM, 2012/2013 Spring Semester. Reference: Chapter 7, Tests of Statistical Hypotheses, of Hogg and Tanis.
Section 7.1 Tests about Proportions. A statistical hypothesis test is a formal method of making decisions, based on the probabilistic structure of a random mathematical model, by analyzing the available sample. Example: the (simple) null hypothesis H_0: p = 0.06 completely specifies the distribution. Against it we set the (composite) alternative hypothesis H_1: p < 0.06, which does not completely specify the distribution; it is composed of many simple hypotheses. Possible errors: a Type I error is rejecting H_0 and accepting H_1 when H_0 is true; a Type II error is failing to reject H_0 when H_1 is true (i.e., when H_0 is false).
Section 7.1 Tests about Proportions. Consider the test of H_0: p = p_0 against H_1: p > p_0, where p_0 is the hypothesized probability of success. We base the test on the number of successes Y in n independent Bernoulli trials. By the CLT, Y/n has an approximate normal distribution N[p_0, p_0(1−p_0)/n], provided that H_0: p = p_0 is true and n is large. We reject H_0 and accept H_1 if and only if Z = (Y/n − p_0)/√(p_0(1−p_0)/n) ≥ z_α. That is, if Y/n exceeds p_0 by at least z_α standard deviations of Y/n, we reject H_0 and accept the hypothesis H_1: p > p_0. The approximate probability of this occurring when H_0: p = p_0 is true is α, so the significance level of this test is approximately α.
Section 7.1 Tests about Proportions. Example 7.1-1: Many commercially manufactured dice are not fair because the spots are really indentations, so that, for example, the 6-side is lighter than the 1-side. Let p = the probability of rolling a 6. We test H_0: p = 1/6 against the alternative hypothesis H_1: p > 1/6. Suppose that we have a total of n = 8000 observations, and let Y equal the number of times that a 6 resulted in the 8000 trials. The experiment yielded y = 1389, so the calculated value of the test statistic is z = (1389/8000 − 1/6)/√((1/6)(5/6)/8000) = 1.670 > 1.645 = z_0.05. Hence the null hypothesis is rejected, and the experimental results indicate that these dice favor a 6 more than a fair die would.
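The z statistic of Example 7.1-1 is easy to check numerically. A minimal sketch using only the standard library; the critical value z_0.05 = 1.645 is hard-coded from the normal table:

```python
import math

def one_prop_z(y, n, p0):
    """Z = (y/n - p0) / sqrt(p0 (1 - p0) / n) for H0: p = p0."""
    return (y / n - p0) / math.sqrt(p0 * (1 - p0) / n)

# Example 7.1-1: y = 1389 sixes in n = 8000 rolls, H0: p = 1/6 vs H1: p > 1/6
z = one_prop_z(1389, 8000, 1 / 6)   # about 1.670
reject = z >= 1.645                 # z_0.05 = 1.645 (normal table)
```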
Section 7.1 Tests about Proportions. Formal statistical hypothesis testing can be regarded as a statistical version of mathematical proof by contradiction; an example of the latter is Euclid's proof that there are infinitely many primes. Analogy: (1) H_1 vs H_0 (C_1: infinitely many primes vs C_0: finitely many primes); (2) a random sample X_1, ..., X_n under H_0 (a finite deterministic sequence of primes p_1, ..., p_n under C_0); (3) the functional inequality Z = (Y/n − p_0)/√(p_0(1−p_0)/n) ≥ z_α (a new positive integer p = p_1 ⋯ p_n + 1); (4) a conclusion subject to chance (a definite conclusion). A reasonably good test for a parameter normally relies on the maximum likelihood estimator (more precisely, the sufficient statistic) for the parameter.
Section 7.1 Tests about Proportions. One-sided tests: H_0: p = p_0 against H_1: p < p_0, and H_0: p = p_0 against H_1: p > p_0. Two-sided test: H_1: p ≠ p_0. In Example 7.1-1, a test with approximate significance level α rejects H_0: p = p_0 in favor of H_1: p ≠ p_0 if |Z| = |Y/n − p_0|/√(p_0(1−p_0)/n) ≥ z_{α/2}, since, under H_0, P(|Z| ≥ z_{α/2}) ≈ α. The rejection region for H_0 is often called the critical region. The p-value associated with a test is the probability, under the null hypothesis H_0, that the test statistic (a random variable) equals or exceeds the observed value (a constant) of the test statistic in the direction of the alternative hypothesis.
Section 7.1 Tests about Proportions. Test about the difference of two proportions: let Y_1 and Y_2 represent, respectively, the numbers of observed successes in n_1 and n_2 independent trials with probabilities of success p_1 and p_2. To test H_0: p_1 − p_2 = 0 or, equivalently, H_0: p_1 = p_2, let p = p_1 = p_2 be the common value under H_0. Then p̂_1 = Y_1/n_1 is approximately N[p_1, p_1(1−p_1)/n_1], p̂_2 = Y_2/n_2 is approximately N[p_2, p_2(1−p_2)/n_2], and p̂_1 − p̂_2 = Y_1/n_1 − Y_2/n_2 is approximately N[p_1 − p_2, p_1(1−p_1)/n_1 + p_2(1−p_2)/n_2]. Estimate p with the pooled estimator p̂ = (Y_1 + Y_2)/(n_1 + n_2), and base the test on the statistic Z = (p̂_1 − p̂_2 − 0)/√(p̂(1−p̂)(1/n_1 + 1/n_2)), which has an approximate N(0, 1) distribution when the null hypothesis is true.
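A minimal sketch of the pooled two-proportion test; the counts below are made-up illustration data, and z_{0.025} = 1.96 is hard-coded from the normal table:

```python
import math

def two_prop_z(y1, n1, y2, n2):
    """Pooled Z = (p1_hat - p2_hat) / sqrt(p_hat (1 - p_hat) (1/n1 + 1/n2))."""
    p_hat = (y1 + y2) / (n1 + n2)   # pooled estimate of the common p under H0
    se = math.sqrt(p_hat * (1 - p_hat) * (1 / n1 + 1 / n2))
    return (y1 / n1 - y2 / n2) / se

# hypothetical counts: 135/900 successes vs 110/1000 successes
z = two_prop_z(135, 900, 110, 1000)
reject_two_sided = abs(z) >= 1.96   # z_{0.025} = 1.96 (normal table)
```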
Section 7.1 Tests about Proportions. Remark: In testing both H_0: p = p_0 and H_0: p_1 = p_2, statisticians sometimes use different denominators for z. For tests of single proportions, p_0(1−p_0)/n can be replaced by (y/n)(1 − y/n)/n, and for tests of the equality of two proportions, the denominator √(p̂_1(1−p̂_1)/n_1 + p̂_2(1−p̂_2)/n_2) can be used. In general, it is difficult to say that one is better than the other; fortunately, the numerical answers are about the same.
Section 7.2 Tests about One Mean. To decide which of the two hypotheses, H_0 or H_1, is true, it is necessary to partition the sample space into two parts, C and C′, such that if (x_1, x_2, ..., x_n) ∈ C, H_0 is rejected, and if (x_1, x_2, ..., x_n) ∈ C′, H_0 is accepted (not rejected). The rejection region C for H_0 is called the critical region for the test, and the partitioning of the sample space is specified in terms of the values of a test statistic. Type I error: (x_1, x_2, ..., x_n) ∈ C when H_0 is true; the probability of a Type I error is called the significance level of the test and is denoted by α, i.e., α = P[(X_1, X_2, ..., X_n) ∈ C; H_0]. Type II error: (x_1, x_2, ..., x_n) ∈ C′ when H_1 is true; the probability of a Type II error is denoted by β, i.e., β = P[(X_1, X_2, ..., X_n) ∈ C′; H_1].
Section 7.2 Tests about One Mean A decrease in the size of α leads to an increase in the size of β. Both α and β can be decreased if the sample size n is increased.
Section 7.2 Tests about One Mean. Sampling from a normal distribution, the null hypothesis is generally of the form H_0: µ = µ_0. Three possibilities for the alternative hypothesis: (i) µ has increased, H_1: µ > µ_0; (ii) µ has decreased, H_1: µ < µ_0; (iii) µ has changed, but it is not known whether it has increased or decreased, giving the two-sided alternative hypothesis H_1: µ ≠ µ_0. A random sample is taken from the distribution; an observed sample mean x̄ that is close (measured in standard deviations of X̄, σ/√n) to µ_0 supports H_0. (I) When the variance is known, consider the test statistic Z = (X̄ − µ_0)/√(σ²/n) = (X̄ − µ_0)/(σ/√n); the critical regions, at significance level α, for the three respective alternative hypotheses are (i) z ≥ z_α, (ii) z ≤ −z_α, and (iii) |z| ≥ z_{α/2}.
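A sketch of case (I) with the two-sided critical region; the numbers are hypothetical, and z_{0.025} = 1.96 is hard-coded from the normal table:

```python
import math

def z_stat(xbar, mu0, sigma, n):
    """Z = (xbar - mu0) / (sigma / sqrt(n)) for H0: mu = mu0, sigma known."""
    return (xbar - mu0) / (sigma / math.sqrt(n))

# hypothetical numbers: sample mean 52 from n = 25 observations, sigma = 8
z = z_stat(52.0, mu0=50.0, sigma=8.0, n=25)
# two-sided test at alpha = 0.05: reject when |z| >= z_{0.025} = 1.96
reject = abs(z) >= 1.96
```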
Section 7.2 Tests about One Mean. (II) When the variance is not known, we consider the test statistic T = (X̄ − µ_0)/√(S²/n) = (X̄ − µ_0)/(S/√n). The rule rejects H_0: µ = µ_0 and accepts H_1: µ ≠ µ_0 if and only if |t| = |x̄ − µ_0|/(s/√n) ≥ t_{α/2}(n−1). General comment: many statisticians believe that the observed p-value provides an understandable measure of the truth of H_0: the smaller the p-value, the less they believe in H_0. Equivalently, we do not reject H_0 if the confidence interval covers µ_0; otherwise, we reject H_0. Many statisticians believe that estimation is much more important than tests of hypotheses and accordingly approach statistical tests through confidence intervals.
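A sketch of case (II) with hypothetical data; the critical value t_{0.025}(5) = 2.571 is hard-coded from the t table:

```python
import math

def one_sample_t(xs, mu0):
    """T = (xbar - mu0) / (s / sqrt(n)) for H0: mu = mu0, variance unknown."""
    n = len(xs)
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)   # sample variance S^2
    return (xbar - mu0) / math.sqrt(s2 / n)

# hypothetical data, n = 6, so n - 1 = 5 degrees of freedom
t = one_sample_t([10.2, 9.8, 10.5, 10.1, 9.9, 10.3], mu0=10.0)
reject = abs(t) >= 2.571   # t_{0.025}(5) = 2.571 (t table)
```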
Section 7.3 Tests of the Equality of Two Means. A paired sample: (X_1, Y_1), ..., (X_n, Y_n), where X and Y are dependent, for example, a patient's records before and after a treatment. Let W = X − Y; the hypothesis H_0: µ_X = µ_Y is then replaced with the hypothesis H_0: µ_W = 0. (I) X and Y are independent and normally distributed, with the variances of X and Y assumed equal. Then T = (X̄ − Ȳ)/√{[((n−1)S_X² + (m−1)S_Y²)/(n + m − 2)](1/n + 1/m)} = (X̄ − Ȳ)/(S_p √(1/n + 1/m)), where S_p = √{[(n−1)S_X² + (m−1)S_Y²]/(n + m − 2)}. T has a t distribution with r = n + m − 2 degrees of freedom when H_0 is true and the variances are (approximately) equal.
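A sketch of the pooled-variance t statistic from case (I), with hypothetical samples:

```python
import math

def pooled_t(xs, ys):
    """Two-sample T with pooled variance, df = n + m - 2, for H0: mu_X = mu_Y."""
    n, m = len(xs), len(ys)
    xbar, ybar = sum(xs) / n, sum(ys) / m
    sx2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    sy2 = sum((y - ybar) ** 2 for y in ys) / (m - 1)
    sp2 = ((n - 1) * sx2 + (m - 1) * sy2) / (n + m - 2)   # pooled variance S_p^2
    return (xbar - ybar) / math.sqrt(sp2 * (1 / n + 1 / m))

# hypothetical samples; compare |t| with t_{alpha/2}(n + m - 2)
t = pooled_t([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```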
Section 7.3 Tests of the Equality of Two Means. If the common-variance assumption is violated, but not too badly, the test is satisfactory, but the significance levels are only approximate. (II) If the variances of X and Y are unequal but known, then the appropriate test statistic for testing H_0: µ_X = µ_Y is Z = (X̄ − Ȳ)/√(σ_X²/n + σ_Y²/m), which has a standard normal distribution when the null hypothesis is true. (III) If the variances are unknown and unequal, and the sample sizes are large, replace σ_X² with S_X² and σ_Y² with S_Y² in the above equation; the resulting statistic has an approximate N(0, 1) distribution.
Section 7.3 Tests of the Equality of Two Means. As long as the underlying distributions are not highly skewed, the normal assumptions are not too critical. As distributions become non-normal and highly skewed, the sample mean and sample variance become more dependent, and nonparametric methods have to be used. When the distributions are close to normal but the variances seem to differ by a great deal, the t statistic should again be avoided, particularly if the sample sizes are also different. (IV) With unequal variances and small sample sizes, use Welch's t-statistic.
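A sketch of Welch's statistic from case (IV), with the standard Welch-Satterthwaite degrees-of-freedom estimate and hypothetical samples:

```python
import math

def welch_t(xs, ys):
    """Welch's T = (xbar - ybar)/sqrt(sx2/n + sy2/m), with estimated df."""
    n, m = len(xs), len(ys)
    xbar, ybar = sum(xs) / n, sum(ys) / m
    sx2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    sy2 = sum((y - ybar) ** 2 for y in ys) / (m - 1)
    vx, vy = sx2 / n, sy2 / m
    t = (xbar - ybar) / math.sqrt(vx + vy)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = (vx + vy) ** 2 / (vx ** 2 / (n - 1) + vy ** 2 / (m - 1))
    return t, df

# hypothetical samples with very different variances and sizes
t, df = welch_t([1.0, 2.0, 3.0, 4.0, 5.0], [10.0, 14.0])
```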
Example on Hypothesis Testing: Classroom Activities (source: Beau Lotto). Exercises: (1) a single-mean test for each class; (2) a test of the equality of means from two classes.
Section 7.4 Tests for Variances. (I) Test of hypothesis for a single variance, H_0: σ_X² = σ_0², with normal distributions: the critical region is given in terms of the chi-square test statistic χ² = (n−1)S²/σ_0². (II) Test for the equality of two variances, H_0: σ_X²/σ_Y² = 1, from normal populations, with two random samples of n observations of X and m observations of Y. When H_0 is true, F = {[(n−1)S_X²/σ_X²]/(n−1)} / {[(m−1)S_Y²/σ_Y²]/(m−1)} = S_X²/S_Y² has an F distribution with r_1 = n−1 and r_2 = m−1 degrees of freedom. If H_0 is true, the observed value of F is expected to be close to 1.
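Both variance statistics reduce to one-line computations; a sketch with hypothetical sample variances:

```python
def chi2_stat(s2, n, sigma0_sq):
    """Chi-square statistic (n - 1) S^2 / sigma_0^2 for H0: sigma^2 = sigma_0^2."""
    return (n - 1) * s2 / sigma0_sq

def f_stat(sx2, sy2):
    """F = S_X^2 / S_Y^2 for H0: sigma_X^2 = sigma_Y^2 (df n - 1 and m - 1)."""
    return sx2 / sy2

# hypothetical values: s^2 = 4.0 from n = 16 observations, sigma_0^2 = 2.5
chi2 = chi2_stat(4.0, 16, 2.5)   # compare with chi-square(15) critical values
f = f_stat(4.0, 2.5)             # expected to be near 1 when H0 is true
```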
Section 7.5 One-Factor Analysis of Variance (ANOVA). Experimenters often want to compare more than two treatments, e.g., yields of several different corn hybrids, results due to three or more teaching techniques, miles per gallon obtained from many different types of compact cars, or consumption across different classes (upper, middle, or lower). Consider m normal distributions with unknown means µ_1, µ_2, ..., µ_m and an unknown, but common, variance σ². We test the equality of the m means, namely, H_0: µ_1 = µ_2 = ⋯ = µ_m = µ, with µ unspecified, against all possible alternative hypotheses H_1.
Section 7.5 One-Factor Analysis of Variance (ANOVA). Let X_i1, X_i2, ..., X_in_i represent a random sample of size n_i from the normal distribution N(µ_i, σ²), i = 1, 2, ..., m. With n = n_1 + n_2 + ⋯ + n_m, denote the sample means by X̄.. = (1/n) Σ_{i=1}^m Σ_{j=1}^{n_i} X_ij and X̄_i. = (1/n_i) Σ_{j=1}^{n_i} X_ij, i = 1, 2, ..., m. Then SS(TO) = Σ_{i=1}^m Σ_{j=1}^{n_i} (X_ij − X̄..)² = Σ_{i=1}^m Σ_{j=1}^{n_i} (X_ij − X̄_i. + X̄_i. − X̄..)² = Σ_{i=1}^m Σ_{j=1}^{n_i} (X_ij − X̄_i.)² + Σ_{i=1}^m Σ_{j=1}^{n_i} (X̄_i. − X̄..)² + 2 Σ_{i=1}^m Σ_{j=1}^{n_i} (X_ij − X̄_i.)(X̄_i. − X̄..).
Section 7.5 One-Factor Analysis of Variance (ANOVA). Using the facts that 2 Σ_{i=1}^m Σ_{j=1}^{n_i} (X̄_i. − X̄..)(X_ij − X̄_i.) = 2 Σ_{i=1}^m (X̄_i. − X̄..)(n_i X̄_i. − n_i X̄_i.) = 0 and Σ_{i=1}^m Σ_{j=1}^{n_i} (X̄_i. − X̄..)² = Σ_{i=1}^m n_i (X̄_i. − X̄..)², we deduce that SS(TO) = Σ_{i=1}^m Σ_{j=1}^{n_i} (X_ij − X̄_i.)² + Σ_{i=1}^m n_i (X̄_i. − X̄..)².
Section 7.5 One-Factor Analysis of Variance (ANOVA). SS(TO) = Σ_{i=1}^m Σ_{j=1}^{n_i} (X_ij − X̄..)² is the total sum of squares; SS(E) = Σ_{i=1}^m Σ_{j=1}^{n_i} (X_ij − X̄_i.)² is the sum of squares within treatments, groups, or classes, often called the error sum of squares; SS(T) = Σ_{i=1}^m n_i (X̄_i. − X̄..)² is the sum of squares among the different treatments, groups, or classes, often called the between-treatment sum of squares. Thus SS(TO) = SS(E) + SS(T).
Section 7.5 One-Factor Analysis of Variance (ANOVA). Under H_0, SS(TO)/σ² is χ²(n−1), so E[SS(TO)/(n−1)] = σ². Let W_i = Σ_{j=1}^{n_i} (X_ij − X̄_i.)²/(n_i − 1) for i = 1, 2, ..., m; then (n_i − 1)W_i/σ² is χ²(n_i − 1). Therefore, whether or not H_0 is true, Σ_{i=1}^m (n_i − 1)W_i/σ² = SS(E)/σ² is also chi-square, with (n_1 − 1) + (n_2 − 1) + ⋯ + (n_m − 1) = n − m degrees of freedom. Moreover, SS(TO)/σ² = SS(E)/σ² + SS(T)/σ², where SS(TO)/σ² is χ²(n−1) under H_0 and SS(E)/σ² is χ²(n−m).
Section 7.5 One-Factor Analysis of Variance (ANOVA). (Theorem 7.5-1) Let Q = Q_1 + Q_2 + ⋯ + Q_k, where Q, Q_1, ..., Q_k are k + 1 real quadratic forms in n mutually independent (mean-zero) random variables normally distributed with the same variance σ². Let Q/σ², Q_1/σ², ..., Q_{k−1}/σ² have chi-square distributions with r, r_1, ..., r_{k−1} degrees of freedom, respectively. If Q_k is nonnegative, then (a) Q_1, ..., Q_k are mutually independent, and hence (b) Q_k/σ² has a chi-square distribution with r − (r_1 + ⋯ + r_{k−1}) = r_k degrees of freedom. Applications: (1) re-deriving the independence of X̄ and S², and the distribution of (n−1)S²/σ²; (2) because SS(T) ≥ 0, applying the theorem shows that SS(E) and SS(T) are independent and that the distribution of SS(T)/σ² is χ²(m−1) under H_0.
Section 7.5 One-Factor Analysis of Variance (ANOVA). Back to testing H_0: µ_1 = µ_2 = ⋯ = µ_m = µ. Note that SS(E)/(n−m) is an unbiased estimator of σ² whether H_0 is true or false. If µ_1, µ_2, ..., µ_m are not all equal, the expected value of the estimator based on SS(T) will be greater than σ²: E[SS(T)] = E[Σ_{i=1}^m n_i (X̄_i. − X̄..)²] = E[Σ_{i=1}^m n_i X̄_i.² − n X̄..²] = Σ_{i=1}^m n_i {Var(X̄_i.) + [E(X̄_i.)]²} − n{Var(X̄..) + [E(X̄..)]²} = Σ_{i=1}^m n_i {σ²/n_i + µ_i²} − n{σ²/n + µ̄²} = (m−1)σ² + Σ_{i=1}^m n_i (µ_i − µ̄)², where µ̄ = (1/n) Σ_{i=1}^m n_i µ_i.
Section 7.5 One-Factor Analysis of Variance (ANOVA). If µ_1 = µ_2 = ⋯ = µ_m = µ, then E[SS(T)/(m−1)] = σ². If the means are not all equal, then E[SS(T)/(m−1)] = σ² + Σ_{i=1}^m n_i (µ_i − µ̄)²/(m−1) > σ². We base our test of H_0 on the ratio of SS(T)/(m−1) and SS(E)/(n−m), both of which are unbiased estimators of σ² under H_0, so under H_0 the ratio assumes values near 1. When the means µ_1, µ_2, ..., µ_m begin to differ, this ratio tends to become large, since E[SS(T)/(m−1)] gets larger.
Section 7.5 One-Factor Analysis of Variance (ANOVA). Under H_0, F = [SS(T)/(m−1)]/[SS(E)/(n−m)] = {[SS(T)/σ²]/(m−1)}/{[SS(E)/σ²]/(n−m)} has an F distribution with m−1 and n−m degrees of freedom, because SS(T)/σ² and SS(E)/σ² are independent chi-square variables. We reject H_0 if the observed value of F is too large; the critical region is of the form F ≥ F_α(m−1, n−m).
Section 7.5 One-Factor Analysis of Variance (ANOVA). Alternative formulas: SS(TO) = Σ_{i=1}^m Σ_{j=1}^{n_i} X_ij² − (1/n)(Σ_{i=1}^m Σ_{j=1}^{n_i} X_ij)², SS(T) = Σ_{i=1}^m (1/n_i)(Σ_{j=1}^{n_i} X_ij)² − (1/n)(Σ_{i=1}^m Σ_{j=1}^{n_i} X_ij)², and SS(E) = SS(TO) − SS(T). The F test works quite well even if the underlying distributions are non-normal, unless they are highly skewed or the variances are quite different.
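The sum-of-squares decomposition and the F ratio above can be computed directly; a small sketch with made-up groups:

```python
def one_way_anova(groups):
    """Return SS(T), SS(E), and F = [SS(T)/(m-1)] / [SS(E)/(n-m)]."""
    m = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n    # X bar ..
    means = [sum(g) / len(g) for g in groups]  # X bar i.
    ss_t = sum(len(g) * (mi - grand) ** 2 for g, mi in zip(groups, means))
    ss_e = sum(sum((x - mi) ** 2 for x in g) for g, mi in zip(groups, means))
    f = (ss_t / (m - 1)) / (ss_e / (n - m))
    return ss_t, ss_e, f

# hypothetical data: m = 3 groups, n = 9 observations in total
ss_t, ss_e, f = one_way_anova([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
```

Compare the observed `f` with F_α(m−1, n−m) = F_α(2, 6) from an F table.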
Section 7.5 One-Factor Analysis of Variance (ANOVA). For only 2 populations, compare with the symmetric two-sided t-test: under the common-variance assumption, T = (X̄ − Ȳ)/√{[((n−1)S_X² + (m−1)S_Y²)/(n+m−2)](1/n + 1/m)}. The square of the t-statistic, T², is an F-statistic with degrees of freedom 1 and n+m−2. Also note that (n−1)S_X² + (m−1)S_Y² = Σ_{i=1}^n (x_i − x̄)² + Σ_{i=1}^m (y_i − ȳ)² = SS(E), and (x̄ − ȳ)²/(1/n + 1/m) = n(x̄ − (nx̄ + mȳ)/(n+m))² + m(ȳ − (nx̄ + mȳ)/(n+m))² = SS(T). Hence T² = [SS(T)/1]/[SS(E)/(n+m−2)] = F(2−1, n+m−2).
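The identity T² = F can be checked numerically; a sketch with hypothetical two-group data:

```python
import math

# hypothetical two-group data
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
n, m = len(xs), len(ys)
xbar, ybar = sum(xs) / n, sum(ys) / m

# pooled two-sample t statistic
sx2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
sy2 = sum((y - ybar) ** 2 for y in ys) / (m - 1)
sp2 = ((n - 1) * sx2 + (m - 1) * sy2) / (n + m - 2)
t = (xbar - ybar) / math.sqrt(sp2 * (1 / n + 1 / m))

# one-way ANOVA F statistic for the same two groups
grand = (sum(xs) + sum(ys)) / (n + m)
ss_t = n * (xbar - grand) ** 2 + m * (ybar - grand) ** 2
ss_e = (n - 1) * sx2 + (m - 1) * sy2
f = (ss_t / 1) / (ss_e / (n + m - 2))
# t**2 agrees with f, i.e., T^2 is F(1, n + m - 2)
```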
Section 7.6 Two-Factor Analysis of Variance (2-way ANOVA). Example: the dependence of real estate prices on districts and on the ages of the buildings. Assume that there are two factors (attributes), one of which has a levels and the other b levels. X_ij is N(µ_ij, σ²), i = 1, 2, ..., a, and j = 1, 2, ..., b, and the n = ab random variables are independent. Assume that the means µ_ij are composed of a row effect, a column effect, and an overall effect in an additive way, namely, µ_ij = µ + α_i + β_j, where Σ_{i=1}^a α_i = 0 and Σ_{j=1}^b β_j = 0. The parameter α_i represents the i-th row effect, and the parameter β_j represents the j-th column effect. (a) To test the hypothesis that there is no row effect, we test H_A: α_1 = α_2 = ⋯ = α_a = 0 (consistent with Σ_{i=1}^a α_i = 0). (b) To test that there is no column effect, we test H_B: β_1 = β_2 = ⋯ = β_b = 0 (consistent with Σ_{j=1}^b β_j = 0).
Section 7.6 Two-Factor Analysis of Variance (2-way ANOVA). Consider the sum of squares: SS(TO) = Σ_{i=1}^a Σ_{j=1}^b (X_ij − X̄..)² = Σ_{i=1}^a Σ_{j=1}^b [(X̄_i. − X̄..) + (X̄_.j − X̄..) + (X_ij − X̄_i. − X̄_.j + X̄..)]² = b Σ_{i=1}^a (X̄_i. − X̄..)² + a Σ_{j=1}^b (X̄_.j − X̄..)² + Σ_{i=1}^a Σ_{j=1}^b (X_ij − X̄_i. − X̄_.j + X̄..)² = SS(A) + SS(B) + SS(E).
Section 7.6 Two-Factor Analysis of Variance (2-way ANOVA). The distribution of the error sum of squares SS(E) does not depend on the means µ_ij, provided that the additive model is correct; hence its distribution is the same whether H_A or H_B is true or not. Note that X_ij − X̄_i. − X̄_.j + X̄.. = X_ij − (X̄_i. − X̄..) − (X̄_.j − X̄..) − X̄.., which is similar to X_ij − µ_ij = X_ij − α_i − β_j − µ. When both H_A and H_B are true, SS(TO)/σ² is χ²(ab − 1), and both SS(A)/σ² and SS(B)/σ² are chi-square variables, namely χ²(a − 1) and χ²(b − 1). Since SS(E) ≥ 0, by Theorem 7.5-1, SS(A), SS(B), and SS(E) are all independent, and SS(E)/σ² is a chi-square variable with ab − 1 − (a − 1) − (b − 1) = (a − 1)(b − 1) degrees of freedom.
Section 7.6 Two-Factor Analysis of Variance (2-way ANOVA). (I) To test H_A: α_1 = α_2 = ⋯ = α_a = 0, consider the F-statistic F_A = {SS(A)/[σ²(a−1)]}/{SS(E)/[σ²(a−1)(b−1)]} = [SS(A)/(a−1)]/{SS(E)/[(a−1)(b−1)]}, which has an F distribution with a−1 and (a−1)(b−1) degrees of freedom when H_A is true; H_A is rejected if the observed value of F_A ≥ F_α[a−1, (a−1)(b−1)]. (II) To test H_B: β_1 = β_2 = ⋯ = β_b = 0 against all alternatives, use F_B = {SS(B)/[σ²(b−1)]}/{SS(E)/[σ²(a−1)(b−1)]} = [SS(B)/(b−1)]/{SS(E)/[(a−1)(b−1)]}, which has an F distribution with b−1 and (a−1)(b−1) degrees of freedom, provided that H_B is true.
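The two F statistics for the additive model (one observation per cell) can be sketched as follows, with a hypothetical 2-by-3 table:

```python
def two_way_anova(x):
    """x[i][j]: a-by-b table, one observation per cell; returns (F_A, F_B)."""
    a, b = len(x), len(x[0])
    grand = sum(sum(row) for row in x) / (a * b)                  # X bar ..
    row = [sum(r) / b for r in x]                                 # X bar i.
    col = [sum(x[i][j] for i in range(a)) / a for j in range(b)]  # X bar .j
    ss_a = b * sum((ri - grand) ** 2 for ri in row)
    ss_b = a * sum((cj - grand) ** 2 for cj in col)
    ss_e = sum((x[i][j] - row[i] - col[j] + grand) ** 2
               for i in range(a) for j in range(b))
    df_e = (a - 1) * (b - 1)
    f_a = (ss_a / (a - 1)) / (ss_e / df_e)
    f_b = (ss_b / (b - 1)) / (ss_e / df_e)
    return f_a, f_b

# hypothetical 2 x 3 table of observations
f_a, f_b = two_way_anova([[1.0, 2.0, 4.0], [2.0, 3.0, 4.0]])
```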
Section 7.6 Two-Factor Analysis of Variance (2-way ANOVA). (III) Test for interactions between the two factors: particular combinations of the two factors might interact differently from what is expected under the additive model. Assume that X_ijk, i = 1, 2, ..., a; j = 1, 2, ..., b; k = 1, 2, ..., c, are n = abc random variables that are mutually independent and have normal distributions with a common, but unknown, variance σ². The mean of each X_ijk, k = 1, 2, ..., c, is µ_ij = µ + α_i + β_j + γ_ij, where Σ_{i=1}^a α_i = 0, Σ_{j=1}^b β_j = 0, Σ_{i=1}^a γ_ij = 0 for each j, and Σ_{j=1}^b γ_ij = 0 for each i; γ_ij is called the interaction associated with cell (i, j). We test the hypotheses that (a) the row effects are equal to zero, (b) the column effects are equal to zero, and (c) there is no interaction.
Section 7.6 Two-Factor Analysis of Variance (2-way ANOVA). Using the notations X̄_ij. = (1/c) Σ_{k=1}^c X_ijk, X̄_i.. = (1/bc) Σ_{j=1}^b Σ_{k=1}^c X_ijk, X̄_.j. = (1/ac) Σ_{i=1}^a Σ_{k=1}^c X_ijk, and X̄... = (1/abc) Σ_{i=1}^a Σ_{j=1}^b Σ_{k=1}^c X_ijk,
Section 7.6 Two-Factor Analysis of Variance (2-way ANOVA). we again have the total sum of squares SS(TO) = Σ_{i=1}^a Σ_{j=1}^b Σ_{k=1}^c (X_ijk − X̄...)² = bc Σ_{i=1}^a (X̄_i.. − X̄...)² + ac Σ_{j=1}^b (X̄_.j. − X̄...)² + c Σ_{i=1}^a Σ_{j=1}^b (X̄_ij. − X̄_i.. − X̄_.j. + X̄...)² + Σ_{i=1}^a Σ_{j=1}^b Σ_{k=1}^c (X_ijk − X̄_ij.)² = SS(A) + SS(B) + SS(AB) + SS(E).
Section 7.6 Two-Factor Analysis of Variance (2-way ANOVA). Under the null hypothesis, all the means equal the same value µ; hence SS(TO)/σ² is χ²(abc − 1), and SS(A)/σ² and SS(B)/σ² are χ²(a − 1) and χ²(b − 1), respectively. Moreover, for each (i, j), Σ_{k=1}^c (X_ijk − X̄_ij.)²/σ² is χ²(c − 1); therefore, SS(E)/σ² is the sum of ab independent chi-square variables such as this and thus is χ²[ab(c − 1)]. Since SS(AB) ≥ 0, by Theorem 7.5-1, SS(A)/σ², SS(B)/σ², SS(AB)/σ², and SS(E)/σ² are mutually independent chi-square variables with a − 1, b − 1, (a − 1)(b − 1), and ab(c − 1) degrees of freedom, respectively.
Section 7.6 Two-Factor Analysis of Variance (2-way ANOVA). (i) The statistic for testing the hypothesis H_AB: γ_ij = 0, i = 1, 2, ..., a, j = 1, 2, ..., b, against all alternatives is F_AB = {c Σ_{i=1}^a Σ_{j=1}^b (X̄_ij. − X̄_i.. − X̄_.j. + X̄...)²/[σ²(a−1)(b−1)]} / {Σ_{i=1}^a Σ_{j=1}^b Σ_{k=1}^c (X_ijk − X̄_ij.)²/[σ² ab(c−1)]} = {SS(AB)/[(a−1)(b−1)]}/{SS(E)/[ab(c−1)]}, which has an F distribution with (a−1)(b−1) and ab(c−1) degrees of freedom when H_AB is true. If F_AB ≥ F_α[(a−1)(b−1), ab(c−1)], we reject H_AB and say that there is a difference among the means, since there seems to be interaction.
Section 7.6 Two-Factor Analysis of Variance (2-way ANOVA). (ii) The statistic for testing the hypothesis H_A: α_1 = α_2 = ⋯ = α_a = 0 against all alternatives is F_A = {bc Σ_{i=1}^a (X̄_i.. − X̄...)²/[σ²(a−1)]} / {Σ_{i=1}^a Σ_{j=1}^b Σ_{k=1}^c (X_ijk − X̄_ij.)²/[σ² ab(c−1)]} = [SS(A)/(a−1)]/{SS(E)/[ab(c−1)]}, which has an F distribution with a−1 and ab(c−1) degrees of freedom when H_A is true.
Section 7.6 Two-Factor Analysis of Variance (2-way ANOVA). (iii) The statistic for testing the hypothesis H_B: β_1 = β_2 = ⋯ = β_b = 0 against all alternatives is F_B = {ac Σ_{j=1}^b (X̄_.j. − X̄...)²/[σ²(b−1)]} / {Σ_{i=1}^a Σ_{j=1}^b Σ_{k=1}^c (X_ijk − X̄_ij.)²/[σ² ab(c−1)]} = [SS(B)/(b−1)]/{SS(E)/[ab(c−1)]}, which has an F distribution with b−1 and ab(c−1) degrees of freedom when H_B is true.
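The three F statistics of the replicated design can be sketched together; the 2 x 2 x 2 data below are hypothetical:

```python
def two_way_anova_reps(x):
    """x[i][j][k]: a x b cells with c replicates; returns (F_A, F_B, F_AB)."""
    a, b, c = len(x), len(x[0]), len(x[0][0])
    grand = sum(x[i][j][k] for i in range(a) for j in range(b)
                for k in range(c)) / (a * b * c)                     # X bar ...
    cell = [[sum(x[i][j]) / c for j in range(b)] for i in range(a)]  # X bar ij.
    row = [sum(cell[i]) / b for i in range(a)]                       # X bar i..
    col = [sum(cell[i][j] for i in range(a)) / a for j in range(b)]  # X bar .j.
    ss_a = b * c * sum((r - grand) ** 2 for r in row)
    ss_b = a * c * sum((cl - grand) ** 2 for cl in col)
    ss_ab = c * sum((cell[i][j] - row[i] - col[j] + grand) ** 2
                    for i in range(a) for j in range(b))
    ss_e = sum((x[i][j][k] - cell[i][j]) ** 2
               for i in range(a) for j in range(b) for k in range(c))
    mse = ss_e / (a * b * (c - 1))
    return (ss_a / (a - 1) / mse,
            ss_b / (b - 1) / mse,
            ss_ab / ((a - 1) * (b - 1)) / mse)

# hypothetical 2 x 2 design with c = 2 replicates per cell
f_a, f_b, f_ab = two_way_anova_reps([[[1, 2], [3, 4]], [[5, 6], [8, 9]]])
```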
Section 7.7 Tests concerning Regression and Correlation. Let X and Y have a bivariate normal distribution. We use the sample correlation coefficient to test the hypothesis H_0: ρ = 0 and also to form a confidence interval for ρ. Let (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) denote a random sample from a bivariate normal distribution with parameters µ_X, µ_Y, σ_X², σ_Y², and ρ. The sample correlation coefficient is R = [(1/(n−1)) Σ_{i=1}^n (X_i − X̄)(Y_i − Ȳ)] / {√[(1/(n−1)) Σ_{i=1}^n (X_i − X̄)²] √[(1/(n−1)) Σ_{i=1}^n (Y_i − Ȳ)²]} = S_XY/(S_X S_Y).
Section 7.7 Tests concerning Regression and Correlation. Note that R S_Y/S_X = S_XY/S_X² = [(1/(n−1)) Σ_{i=1}^n (X_i − X̄)(Y_i − Ȳ)] / [(1/(n−1)) Σ_{i=1}^n (X_i − X̄)²] is exactly the solution that we obtained for β̂ in Section 6.7. If H_0: ρ = 0 is true, Y_1, Y_2, ..., Y_n are independent of X_1, X_2, ..., X_n, and thus β = ρσ_Y/σ_X = 0. The conditional distribution of β̂ = Σ_{i=1}^n (x_i − x̄)(Y_i − Ȳ) / Σ_{i=1}^n (x_i − x̄)², given X_1 = x_1, ..., X_n = x_n, is N[0, σ_Y²/((n−1)s_x²)] when s_x² > 0.
Section 7.7 Tests concerning Regression and Correlation. Recall from Section 6.7 that the conditional distribution of Σ_{i=1}^n [Y_i − Ȳ − (S_xY/s_x²)(x_i − x̄)]²/σ_Y² = (n−1)S_Y²(1 − R²)/σ_Y², given X_1 = x_1, ..., X_n = x_n, is χ²(n−2) and is independent of β̂. When ρ = 0, the conditional distribution of T = [(R S_Y/s_x)/(σ_Y/(√(n−1) s_x))] / √{[(n−1)S_Y²(1−R²)/σ_Y²][1/(n−2)]} = R√(n−2)/√(1−R²) is t with n−2 degrees of freedom. Since the conditional distribution of T, given X_1 = x_1, ..., X_n = x_n, does not depend on x_1, x_2, ..., x_n, the unconditional distribution of T must be t with n−2 degrees of freedom, and T and (X_1, X_2, ..., X_n) are independent when ρ = 0.
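The statistic T = R√(n−2)/√(1−R²) is straightforward to compute; a sketch with hypothetical paired data:

```python
import math

def corr_t(xs, ys):
    """Sample correlation R and T = R sqrt(n-2)/sqrt(1-R^2) for H0: rho = 0."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    r = sxy / math.sqrt(sxx * syy)   # the 1/(n-1) factors cancel
    return r, r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# hypothetical paired data, n = 5, so T has n - 2 = 3 degrees of freedom
r, t = corr_t([1.0, 2.0, 3.0, 4.0, 5.0], [2.0, 1.0, 4.0, 3.0, 5.0])
```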
Section 7.7 Tests concerning Regression and Correlation. (Remark) In the discussion of the distribution of T, nothing was said about the distribution of X_1, X_2, ..., X_n. If X and Y are independent and Y has a normal distribution, then T has a t distribution whatever the distribution of X; the roles of X and Y can be reversed in all of this development. T can be used to test H_0: ρ = 0; if H_1: ρ > 0, we use the critical region defined by the observed T ≥ t_α(n−2), since a large T implies a large R. The p.d.f. of R, provided that ρ = 0, is g(r) = {Γ[(n−1)/2]/(Γ(1/2)Γ[(n−2)/2])}(1 − r²)^((n−4)/2), −1 < r < 1. (See Appendix B, Table XI.)
Section 7.7 Tests concerning Regression and Correlation. (Proof) G(r) = P(R ≤ r) = P(T ≤ r√(n−2)/√(1−r²)) = ∫_{−∞}^{r√(n−2)/√(1−r²)} h(t) dt, where h(t) = {Γ[(n−1)/2]/(Γ(1/2)Γ[(n−2)/2]√(n−2))}(1 + t²/(n−2))^(−(n−1)/2) is the p.d.f. of t(n−2). The derivative of G(r) with respect to r is g(r) = h(r√(n−2)/√(1−r²)) · d(r√(n−2)/√(1−r²))/dr. To test the hypothesis H_0: ρ = 0 against the alternative hypothesis H_1: ρ ≠ 0 at significance level α, select either a constant r_{α/2}(n−2) or a constant t_{α/2}(n−2) so that α = P(|R| ≥ r_{α/2}(n−2); H_0) = P(|T| ≥ t_{α/2}(n−2); H_0).
Section 7.7 Tests concerning Regression and Correlation. To test H_0: ρ = ρ_0, an approximate test of size α can be obtained by using the fact that W = (1/2) ln[(1+R)/(1−R)] has an approximate normal distribution with mean (1/2) ln[(1+ρ)/(1−ρ)] and variance 1/(n−3) (since R has an asymptotic normal distribution with mean ρ and variance (1−ρ²)²/n). A test of H_0: ρ = ρ_0 can be based on the statistic z = {(1/2) ln[(1+R)/(1−R)] − (1/2) ln[(1+ρ_0)/(1−ρ_0)]}/√(1/(n−3)), which has a distribution that is approximately N(0, 1).
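A sketch of the Fisher-transform test for H_0: ρ = ρ_0; the observed r, ρ_0, and n below are hypothetical:

```python
import math

def fisher_z(r, rho0, n):
    """z = [W - (1/2) ln((1+rho0)/(1-rho0))] / sqrt(1/(n-3)), W from observed r."""
    w = 0.5 * math.log((1 + r) / (1 - r))
    w0 = 0.5 * math.log((1 + rho0) / (1 - rho0))
    return (w - w0) * math.sqrt(n - 3)

# hypothetical values: observed r = 0.8 from n = 28 pairs, testing H0: rho = 0.6
z = fisher_z(0.8, 0.6, 28)   # compare with N(0, 1) critical values
```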
Section 7.7 Tests concerning Regression and Correlation. An approximate 100(1−α)% confidence interval for ρ follows from P(−c ≤ {(1/2) ln[(1+R)/(1−R)] − (1/2) ln[(1+ρ)/(1−ρ)]}/√(1/(n−3)) ≤ c) ≈ 1 − α, which gives P([1 + R − (1−R) exp(2c/√(n−3))]/[1 + R + (1−R) exp(2c/√(n−3))] ≤ ρ ≤ [1 + R − (1−R) exp(−2c/√(n−3))]/[1 + R + (1−R) exp(−2c/√(n−3))]) ≈ 1 − α.
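The interval can be computed by inverting the Fisher transform with tanh, which is algebraically equivalent to the exp form above; r = 0.8 and n = 28 below are hypothetical, and c = 1.96 is z_{0.025} from the normal table:

```python
import math

def rho_ci(r, n, c=1.96):
    """Approximate 100(1-alpha)% CI for rho; c = z_{alpha/2} (default 95%)."""
    d = c / math.sqrt(n - 3)
    w = 0.5 * math.log((1 + r) / (1 - r))
    # invert w -> rho with tanh; equivalent to the exp expressions in the text
    return math.tanh(w - d), math.tanh(w + d)

# hypothetical values: r = 0.8 from n = 28 pairs
lo, hi = rho_ci(0.8, 28)
```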
The end of Chapter 7