MSH3 Generalized linear model
Contents

7 Log-Linear Model
  7.1 Equivalence between GOF measures
  7.2 Sampling distribution
  7.3 Interpreting Log-Linear models
  7.4 Collapsing table
  7.5 Decomposable model
  7.6 Incomplete Contingency Table
  7.7 Marginal Homogeneity and Symmetry

SydU MSH3 GLM (2015) First semester Dr. J. Chan 230
7 Log-Linear Model

We have seen that the cell counts in contingency tables can be modelled by Poisson random variables, as in the disturbed dreams example. A natural GLM is the log-linear model, in which the logarithm of the expected cell frequency (or log probability) is modelled by a factorial-design-type model.

7.1 Equivalence between GOF measures

There are two statistics that we use to measure goodness of fit:

Deviance: $D_n = 2\sum_{i=1}^{T} O_i \ln\left(\frac{O_i}{E_i}\right)$;  Pearson $\chi^2$: $X_n^2 = \sum_{i=1}^{T} \frac{(O_i - E_i)^2}{E_i}$,

where $O_i$ and $E_i$ denote the observed and expected frequency for cell $i$. The two statistics are asymptotically equivalent; both have an asymptotic $\chi^2$ distribution.

Lemma: Let $(X_1, X_2, \dots, X_T)$ have a multinomial $(n, \pi)$ distribution, where $\pi = (\pi_1, \pi_2, \dots, \pi_T)$, $\pi_i > 0$. Let
$$D_n = 2\sum_{i=1}^{T} X_i \ln\left(\frac{X_i}{n\pi_i}\right) \quad \text{and} \quad X_n^2 = \sum_{i=1}^{T} \frac{(X_i - n\pi_i)^2}{n\pi_i}.$$
Then $X_n^2 - D_n \xrightarrow{P} 0$ as $n \to \infty$.

Proof: Treating $D_n$ as a function of $\{X_i/n\}$, we have
$$D_n = 2n \sum_{i=1}^{T} \frac{X_i}{n} \ln\left(\frac{X_i/n}{\pi_i}\right) = 2n \sum_{i=1}^{T} f(Y_i)$$
where $Y_i = X_i/n$ and $f(y) = y \ln(y/\pi)$. Then
$$f'(y) = \ln\left(\frac{y}{\pi}\right) + y \cdot \frac{1}{y} = \ln\left(\frac{y}{\pi}\right) + 1, \quad f''(y) = \frac{1}{y} \quad \text{and} \quad f'''(y) = -\frac{1}{y^2}.$$
Thus, expanding $f(y)$ in a Taylor series around $\pi_i$,
$$D_n = 2n \sum_{i=1}^{T} \left[ f(\pi_i) + f'(\pi_i)(Y_i - \pi_i) + \tfrac{1}{2} f''(\pi_i)(Y_i - \pi_i)^2 + \tfrac{1}{6} f'''(\theta_i)(Y_i - \pi_i)^3 \right]$$
$$= 2n \sum_{i=1}^{T} \pi_i \ln\left(\frac{\pi_i}{\pi_i}\right) + 2n \sum_{i=1}^{T} \left(\frac{X_i}{n} - \pi_i\right) + n \sum_{i=1}^{T} \left(\frac{X_i}{n} - \pi_i\right)^2 \frac{1}{\pi_i} - \frac{n}{3} \sum_{i=1}^{T} \left(\frac{X_i}{n} - \pi_i\right)^3 \frac{1}{\theta_i^2}$$
$$= X_n^2 - \frac{n}{3} \sum_{i=1}^{T} \left(\frac{X_i}{n} - \pi_i\right)^3 \frac{1}{\theta_i^2},$$
where $\theta_i$ lies between $X_i/n$ and $\pi_i$; the first two sums vanish since $\ln 1 = 0$ and $\sum_i X_i/n = \sum_i \pi_i = 1$.

Chebyshev's Theorem states that for any number $k > 1$, at least $1 - 1/k^2$ of the probability lies within $(\mu - k\sigma, \mu + k\sigma)$. Hence for any $\epsilon > 0$, taking $k = \epsilon/(n^{1/3}\sigma)$,
$$\Pr\left(n^{1/3}\left|\frac{X_i}{n} - \pi_i\right| > \epsilon\right) = 1 - \Pr\left(\pi_i - \frac{\epsilon}{n^{1/3}} < \frac{X_i}{n} < \pi_i + \frac{\epsilon}{n^{1/3}}\right) \le \frac{n^{2/3}\sigma^2}{\epsilon^2} = \frac{\pi_i(1 - \pi_i)}{n^{1/3}\epsilon^2} \to 0$$
as $n \to \infty$, where $\sigma^2 = E\left(\frac{X_i}{n} - \pi_i\right)^2 = \frac{1}{n}\pi_i(1 - \pi_i)$. Hence $n^{1/3}\left(\frac{X_i}{n} - \pi_i\right) \xrightarrow{P} 0$, i.e. $\frac{X_i}{n} - \pi_i$ tends to zero faster than $n^{-1/3}$ as $n \to \infty$; moreover the weak law of large numbers (WLLN) gives $\frac{X_i}{n} \xrightarrow{P} \pi_i$. Recall the little-$o$ notation:
$$f(x) = o(g(x)) \quad \text{if} \quad \lim_{x \to \infty} \frac{f(x)}{g(x)} = 0,$$
i.e. for all $c > 0$ there exists some $k > 0$ such that $0 \le f(x) < c\, g(x)$ for all $x \ge k$; the value of $k$ must not depend on $x$, but may depend on $c$. Thus $\theta_i = \pi_i + o_p(n^{-1/3})$ and
$$X_n^2 - D_n = \frac{n}{3} \sum_{i=1}^{T} \frac{(X_i/n - \pi_i)^3}{\theta_i^2} = \frac{1}{3} \sum_{i=1}^{T} \frac{\left[n^{1/3}(X_i/n - \pi_i)\right]^3}{\left[\pi_i + o_p(n^{-1/3})\right]^2} \xrightarrow{P} 0 \quad \text{as } n \to \infty,$$
since $T$ is fixed.

Thus for large samples the deviance and Pearson's $\chi^2$ are equivalent; however, the statistics appear (from simulation studies) to converge to the asymptotic $\chi^2$ distribution at different rates. The distribution of $X_n^2$ is remarkably close to a $\chi^2$ distribution even when the minimum expected cell size is only 3 to 5. When $n$ is small, the p-value for a test based on the $\chi^2$ approximation to the deviance will tend to understate the true value (i.e. we reject more often than we should). Typically $D_n > X_n^2$ for small $n$, but not always!
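As a numerical illustration of the lemma (a sketch with hypothetical counts, not part of the notes), $D_n$ and $X_n^2$ can be computed side by side; the gap shrinks as $n$ grows:

```python
import math

def deviance_and_pearson(counts, pi):
    """Return (D_n, X_n^2) for multinomial counts against cell probabilities pi."""
    n = sum(counts)
    D = 2 * sum(x * math.log(x / (n * p)) for x, p in zip(counts, pi) if x > 0)
    X2 = sum((x - n * p) ** 2 / (n * p) for x, p in zip(counts, pi))
    return D, X2

pi = (0.5, 0.3, 0.2)
# hypothetical counts at n = 10 and n = 10000 with similar relative departures
D_small, X2_small = deviance_and_pearson((7, 2, 1), pi)
D_large, X2_large = deviance_and_pearson((5080, 2920, 2000), pi)
print(round(abs(D_small - X2_small), 4), round(abs(D_large - X2_large), 4))
```

Both statistics stay of comparable size, but the absolute difference between them is an order of magnitude smaller in the large sample.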
7.2 Sampling distribution

Consider a simple 2 by 2 contingency table:

              Factor 2
Factor 1      1       2       Total
   1        y_11    y_12      y_1.
   2        y_21    y_22      y_2.
Total       y_.1    y_.2      y_.. or n

There are different sampling distributions depending on the assumptions on $y_{ij}$.

1. Poisson likelihood

All counts $y_{ij}$ are random. We fit a Poisson additive log-linear model with mean
$$\ln \mu_{ij} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij}.$$
The likelihood function, log-likelihood and deviance are
$$\Pr(Y = y) \propto \prod_i \prod_j \frac{\mu_{ij}^{y_{ij}} e^{-\mu_{ij}}}{y_{ij}!}, \quad l = \sum_i \sum_j (y_{ij} \ln \mu_{ij} - \mu_{ij}) + c \quad \text{and} \quad D = 2 \sum_i \sum_j y_{ij} \ln\left(\frac{y_{ij}}{\hat{\mu}_{ij}}\right), \tag{1}$$
where $c$ is independent of the model parameters. To test for independence with $H_0: (\alpha\beta)_{ij} = 0$, the ML estimates under $H_1$ are
$$\frac{\partial l}{\partial \mu_{ij}} = \frac{y_{ij}}{\mu_{ij}} - 1 = 0 \quad \Rightarrow \quad \hat{\mu}_{ij} = y_{ij}.$$
Under $H_0$, writing $\mu_{ij} = \mu_{i.}\mu_{.j}/n$,
$$l_0 = \sum_{i,j} y_{ij} \ln\left(\frac{\mu_{i.}\mu_{.j}}{n}\right) - \sum_{i,j} \frac{\mu_{i.}\mu_{.j}}{n} + c,$$
$$\frac{\partial l_0}{\partial \mu_{i.}} = \sum_j \frac{y_{ij}}{\mu_{i.}} - \sum_j \frac{\mu_{.j}}{n} = \frac{y_{i.}}{\mu_{i.}} - \frac{n}{n} = 0 \quad \Rightarrow \quad \hat{\mu}_{i.} = y_{i.}$$
since $\sum_j \mu_{.j} = n$. Similarly $\hat{\mu}_{.j} = y_{.j}$ and $\hat{\mu}_{ij} = \hat{\mu}_{i.}\hat{\mu}_{.j}/n = y_{i.} y_{.j}/n$.

2. Binomial likelihood

The column margin $y_{.j}$ is fixed. We treat the column factor as a predictor and the row factor as a response.

              Factor 2
Factor 1      1                     2                     Total
   1        y_11 (p_{1|1})        y_12 (p_{1|2})          y_1.
   2        y_21 (1 - p_{1|1})    y_22 (1 - p_{1|2})      y_2.
Total       y_.1 (1)              y_.2 (1)                n

If $Y_{ij}$ is Poi$(\mu_{ij})$, then $Y_{.j} = \sum_i Y_{ij}$ is Poi$(\mu_{.j})$. Conditional on the margins $y_{.j}$, the distribution of $Y$ becomes binomial since
$$\Pr(Y = y \mid Y_{.j} = y_{.j}) = \prod_i \prod_j \frac{\mu_{ij}^{y_{ij}} e^{-\mu_{ij}}}{y_{ij}!} \bigg/ \prod_j \frac{\mu_{.j}^{y_{.j}} e^{-\mu_{.j}}}{y_{.j}!}$$
$$= \frac{y_{.1}!}{y_{11}! y_{21}!} \left(\frac{\mu_{11}}{\mu_{.1}}\right)^{y_{11}} \left(\frac{\mu_{21}}{\mu_{.1}}\right)^{y_{21}} \cdot \frac{y_{.2}!}{y_{12}! y_{22}!} \left(\frac{\mu_{12}}{\mu_{.2}}\right)^{y_{12}} \left(\frac{\mu_{22}}{\mu_{.2}}\right)^{y_{22}}$$
$$= \frac{y_{.1}!}{y_{11}! y_{21}!} p_{1|1}^{y_{11}} (1 - p_{1|1})^{y_{21}} \cdot \frac{y_{.2}!}{y_{12}! y_{22}!} p_{1|2}^{y_{12}} (1 - p_{1|2})^{y_{22}}.$$
Then the log-likelihood function and deviance are
$$l = \sum_j \left[ y_{1j} \ln p_{1|j} + y_{2j} \ln(1 - p_{1|j}) \right] + c \quad \text{and} \quad D = 2 \sum_j \left[ y_{1j} \ln\left(\frac{y_{1j}}{y_{.j}\hat{p}_{1|j}}\right) + y_{2j} \ln\left(\frac{y_{2j}}{y_{.j}(1 - \hat{p}_{1|j})}\right) \right]$$
respectively, where $p_{1|j} = \Pr(F_1 = 1 \mid F_2 = j)$ and $D$ is the same as (1) with $\hat{\mu}_{ij} = y_{.j}\hat{p}_{i|j}$ under a binomial model. The ML estimates are
$$\frac{\partial l}{\partial p_{1|j}} = \frac{y_{1j}}{p_{1|j}} - \frac{y_{2j}}{1 - p_{1|j}} = 0 \quad \Rightarrow \quad (y_{1j} + y_{2j}) p_{1|j} = y_{1j} \quad \Rightarrow \quad \hat{p}_{1|j} = \frac{y_{1j}}{y_{.j}}.$$
The test of homogeneity $H_0: p_{1|1} = p_{1|2}$ is equivalent to the test of independence $H_0: p_{ij} = p_{i.} p_{.j}$ in the additive log-linear model since
$$p_{ij} = p_{i.} p_{.j} \quad \Rightarrow \quad p_{i|j} = \frac{p_{ij}}{p_{.j}} = \frac{p_{i.} p_{.j}}{p_{.j}} = p_{i.} \quad \Rightarrow \quad p_{1|1} = p_{1|2} = p_{1.}.$$

3. Multinomial likelihood

The total $y_{..}$ is fixed. We treat both the row and column factors as responses. The conditional distribution of $Y$ given $Y_{..}$, as the ratio of the joint distribution to the marginal distribution, is multinomial since
$$\Pr(Y = y \mid Y_{..} = y_{..}) = \prod_i \prod_j \frac{\mu_{ij}^{y_{ij}} e^{-\mu_{ij}}}{y_{ij}!} \bigg/ \frac{\mu_{..}^{y_{..}} e^{-\mu_{..}}}{y_{..}!}$$
$$= \frac{y_{..}!}{y_{11}! y_{12}! y_{21}! y_{22}!} \left(\frac{\mu_{11}}{\mu_{..}}\right)^{y_{11}} \left(\frac{\mu_{12}}{\mu_{..}}\right)^{y_{12}} \left(\frac{\mu_{21}}{\mu_{..}}\right)^{y_{21}} \left(\frac{\mu_{22}}{\mu_{..}}\right)^{y_{22}} = \frac{y_{..}!}{y_{11}! y_{12}! y_{21}! y_{22}!} p_{11}^{y_{11}} p_{12}^{y_{12}} p_{21}^{y_{21}} p_{22}^{y_{22}}.$$
Then the log-likelihood function and deviance are
$$l = \sum_i \sum_j y_{ij} \ln p_{ij} + c \quad \text{and} \quad D = 2 \sum_i \sum_j y_{ij} \ln\left(\frac{y_{ij}}{n\hat{p}_{ij}}\right)$$
respectively, where $D$ is the same as (1) with $\hat{\mu}_{ij} = n p_{ij}$ ($n = y_{..}$) under a multinomial model. The ML estimates are
$$\frac{\partial l}{\partial p_{ij}} = \frac{y_{ij}}{p_{ij}} - \frac{y_{22}}{p_{22}} = 0 \quad \Rightarrow \quad \hat{p}_{ij} = \frac{p_{22} y_{ij}}{y_{22}}, \quad (i, j) \ne (2, 2),$$
since $p_{22} = 1 - p_{11} - p_{12} - p_{21}$. Summing over $i, j$,
$$\sum_{i,j} p_{ij} = \frac{p_{22}\, y_{..}}{y_{22}} = 1 \quad \Rightarrow \quad \hat{p}_{22} = \frac{y_{22}}{y_{..}}.$$
Hence the fitted values are $\hat{p}_{ij} = y_{ij}/y_{..}$, and summing over $j$ and $i$ respectively, $\hat{p}_{i.} = y_{i.}/y_{..}$ and $\hat{p}_{.j} = y_{.j}/y_{..}$. Testing for independence $H_0: p_{ij} = p_{i.} p_{.j}$, which implies $\hat{\mu}_{ij} = y_{..}\hat{p}_{i.}\hat{p}_{.j} = y_{i.} y_{.j}/y_{..}$, in the multinomial model is equivalent to testing for goodness-of-fit in the Poisson model with $\ln \mu_{ij} = \mu + \alpha_i + \beta_j$.

Example: (Smokers) The 2 x 2 table (counts recovered from the R code below) is

               No lung cancer   Lung cancer   Total
Non-smokers          11              3          14
Smokers              32             60          92
Total                43             63         106
> smoke=c(32,60)
> nonsmoke=c(11,3)
> total=smoke+nonsmoke
> y=cbind(smoke,nonsmoke)
> yr=c(nonsmoke,smoke)
> smokef=factor(c(0,0,1,1))
> cancerf=factor(c(0,1,0,1))
> cancer=factor(c(1,2))
> glm(y~1, family=binomial)$dev
[1]
> glm(yr~smokef+cancerf,family=poisson)$dev
[1]

4. Hypergeometric likelihood

The column, row and overall totals $y_{i.}$, $y_{.j}$, $y_{..}$ are all fixed.
$$\Pr(Y = y) = \frac{y_{.1}!}{y_{11}! y_{21}!} \cdot \frac{y_{.2}!}{y_{12}! y_{22}!} \bigg/ \frac{y_{..}!}{y_{1.}! y_{2.}!}$$
which is the probability of $y_{11}$ successes out of $y_{.1}$ for the 1st level of Factor 2 and $y_{12}$ successes out of $y_{.2}$ for the 2nd level of Factor 2, given that there are in total $y_{1.}$ successes out of $y_{..}$. This is the basis of the small-sample Fisher's exact test.

              Factor 2
Factor 1      1       2       Total
   1        y_11    y_12      y_1.
   2        y_21    y_22      y_2.
Total       y_.1    y_.2      y_..
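The equivalence of the binomial and Poisson deviances asserted above can be checked directly on the smokers table. The sketch below (an illustration with hypothetical helper names, counts taken from the R code; it conditions on the smoking-status margins rather than the column margins, which works the same way) computes both deviances:

```python
import math

def xlogy(x, y):
    """x * log(x / y) with the convention 0 * log(0) := 0."""
    return 0.0 if x == 0 else x * math.log(x / y)

# counts recovered from the R code: smoke = (32, 60), nonsmoke = (11, 3)
y = [[11, 3],    # non-smokers: (no lung cancer, lung cancer)
     [32, 60]]   # smokers
row = [sum(r) for r in y]
col = [sum(c) for c in zip(*y)]
n = sum(row)

# Poisson log-linear model of independence: mu_ij = y_i. * y_.j / n
D_pois = 2 * sum(xlogy(y[i][j], row[i] * col[j] / n)
                 for i in range(2) for j in range(2))

# binomial model: row totals fixed, common cancer probability p = y_.2 / n
p = col[1] / n
D_binom = 2 * sum(xlogy(y[i][1], row[i] * p) + xlogy(y[i][0], row[i] * (1 - p))
                  for i in range(2))

print(round(D_pois, 4), round(D_binom, 4))  # the two deviances coincide
```

The fitted values under the two models are identical cell by cell, which is why the two deviances agree exactly.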
7.3 Interpreting Log-Linear models

We consider log-linear models for a 3-way table generated using a completely random sample. Suppose we have 3 response variables R, S and T with r, s and t categories, respectively. We only consider hierarchical models, which obey the following rule: if the model includes a parameter involving a set of variables S then it also includes all parameters involving any subset of S. Thus we must include the appropriate lower-order interactions if we include a more complex interaction in the model. Let $y_{ijk} \sim P(\mu_{ijk})$, $\mu_{ijk} = n p_{ijk}$ and $p_{ijk} = \Pr(R = i, S = j, T = k)$.

1. Saturated model
$$\ln p_{ijk} = \mu + \alpha_i + \beta_j + \gamma_k + (\alpha\beta)_{ij} + (\alpha\gamma)_{ik} + (\beta\gamma)_{jk} + (\alpha\beta\gamma)_{ijk},$$
or equivalently $\ln \mu_{ijk} = \ln n + \mu + \alpha_i + \cdots$, with the constraint that a parameter is 0 if any of its subscripts is 1. Thus
$$\mu = \ln p_{111}, \quad \alpha_i = \ln\left[\frac{p_{i11}}{p_{111}}\right], \quad \beta_j = \ln\left[\frac{p_{1j1}}{p_{111}}\right], \quad \gamma_k = \ln\left[\frac{p_{11k}}{p_{111}}\right],$$
$$(\alpha\beta)_{ij} = \ln\left[\frac{p_{ij1}\, p_{111}}{p_{i11}\, p_{1j1}}\right], \quad (\alpha\gamma)_{ik} = \ln\left[\frac{p_{i1k}\, p_{111}}{p_{i11}\, p_{11k}}\right], \quad (\beta\gamma)_{jk} = \ln\left[\frac{p_{1jk}\, p_{111}}{p_{1j1}\, p_{11k}}\right],$$
$$(\alpha\beta\gamma)_{ijk} = \ln\left[\frac{p_{ijk}\, p_{i11}\, p_{1j1}\, p_{11k}}{p_{ij1}\, p_{i1k}\, p_{1jk}\, p_{111}}\right] = \ln\left[\frac{p_{ijk}\, p_{11k}}{p_{i1k}\, p_{1jk}}\right] - \ln\left[\frac{p_{ij1}\, p_{111}}{p_{i11}\, p_{1j1}}\right] = (\alpha\beta)_{ij}^{(k)} - (\alpha\beta)_{ij},$$
where $(\alpha\beta)_{ij}^{(k)}$ can be interpreted as the R x S interaction measured
at the k-th level of T. Similarly $(\alpha\beta\gamma)_{ijk} = (\beta\gamma)_{jk}^{(i)} - (\beta\gamma)_{jk} = (\alpha\gamma)_{ik}^{(j)} - (\alpha\gamma)_{ik}$. Note that $\hat{\mu}_{ijk} = y_{ijk}$ under the saturated model.

2. Uniform association model: $(\alpha\beta\gamma)_{ijk} = 0, \; \forall i, j, k$.

(i) The interaction between any 2 variables is constant across levels of the third factor, since $(\alpha\beta\gamma)_{ijk} = 0$ implies
$$(\alpha\beta)_{ij}^{(k)} = (\alpha\beta)_{ij}, \quad (\beta\gamma)_{jk}^{(i)} = (\beta\gamma)_{jk}, \quad (\alpha\gamma)_{ik}^{(j)} = (\alpha\gamma)_{ik}.$$
Hence the log odds ratio is the same at all levels of R and is simply, say, $(\beta\gamma)_{22}$, which measures the association between S and T relative to level 1. To show this, the log-odds between k = 2 and k = 1 is
$$\ln\left(\frac{p_{ij2}}{p_{ij1}}\right) = \mu + \alpha_i + \beta_j + \gamma_2 + (\alpha\beta)_{ij} + (\alpha\gamma)_{i2} + (\beta\gamma)_{j2} - \left[\mu + \alpha_i + \beta_j + \gamma_1 + (\alpha\beta)_{ij} + (\alpha\gamma)_{i1} + (\beta\gamma)_{j1}\right]$$
$$= \gamma_2 - \gamma_1 + (\alpha\gamma)_{i2} - (\alpha\gamma)_{i1} + (\beta\gamma)_{j2} - (\beta\gamma)_{j1}$$
because all terms involving only $i$, $j$ and $ij$ cancel out. Consider the difference in log-odds between j = 2 and j = 1:
$$\ln\left(\frac{p_{i22}/p_{i21}}{p_{i12}/p_{i11}}\right) = \gamma_2 - \gamma_1 + (\alpha\gamma)_{i2} - (\alpha\gamma)_{i1} + (\beta\gamma)_{22} - (\beta\gamma)_{21} - \left[\gamma_2 - \gamma_1 + (\alpha\gamma)_{i2} - (\alpha\gamma)_{i1} + (\beta\gamma)_{12} - (\beta\gamma)_{11}\right]$$
$$= (\beta\gamma)_{22} - (\beta\gamma)_{21} - (\beta\gamma)_{12} + (\beta\gamma)_{11} = (\beta\gamma)_{22},$$
which is independent of $i$, since using 1 as the reference cell all interaction terms involving level 1 are set to zero.
(ii) No 3-factor interaction does not mean that we can estimate the R x S interaction from the collapsed table $\{p_{ij.}\}$. Note that in general
$$(\alpha\beta)_{ij} = \ln\left[\frac{p_{ij1}\, p_{111}}{p_{i11}\, p_{1j1}}\right] \ne \ln\left[\frac{p_{ij.}\, p_{11.}}{p_{i1.}\, p_{1j.}}\right].$$

(iii) There is no simple interpretation in terms of independence, and we cannot write the structure of the joint probabilities in terms of the two-way margins. Hence the ML estimates cannot be written in closed form and must be calculated using an iterative procedure.

3. Conditional independence

a. $(\alpha\beta\gamma)_{ijk} = (\alpha\beta)_{ij} = 0, \; \forall i, j, k \iff p_{ij|k} = p_{i|k}\, p_{j|k} \iff R \perp S \mid T$.

Since $(\alpha\beta\gamma)_{ijk} = (\alpha\beta)_{ij}^{(k)} - (\alpha\beta)_{ij}$, the constraints give $(\alpha\beta)_{ij}^{(k)} = 0$, so
$$p_{ijk}\, p_{11k} = p_{i1k}\, p_{1jk}. \tag{2}$$
Summing (2) over $i$: $p_{.jk}\, p_{11k} = p_{.1k}\, p_{1jk}$ (3); over $j$: $p_{i.k}\, p_{11k} = p_{i1k}\, p_{1.k}$ (4); over both $i$ and $j$: $p_{..k}\, p_{11k} = p_{.1k}\, p_{1.k}$ (5).

From (4), $p_{i1k} = \frac{p_{i.k}\, p_{11k}}{p_{1.k}}$ and from (3), $p_{1jk} = \frac{p_{.jk}\, p_{11k}}{p_{.1k}}$. Hence from (2),
$$p_{ijk} = \frac{p_{i1k}\, p_{1jk}}{p_{11k}} = \left(\frac{p_{i.k}\, p_{11k}}{p_{1.k}}\right)\left(\frac{p_{.jk}\, p_{11k}}{p_{.1k}}\right)\frac{1}{p_{11k}} = \frac{p_{i.k}\, p_{.jk}}{p_{..k}} \quad \text{using (5)}, \tag{6}$$
or
$$\frac{p_{ijk}}{p_{..k}} = \frac{p_{i.k}}{p_{..k}} \cdot \frac{p_{.jk}}{p_{..k}}, \quad \text{i.e.} \quad \Pr(R = i, S = j \mid T = k) = \Pr(R = i \mid T = k)\, \Pr(S = j \mid T = k).$$
We have $\hat{\mu}_{ijk} = y_{i.k}\, y_{.jk}/y_{..k}$. The odds ratio between R and S given T is
$$r_{ij|k} = \frac{p_{ijk}\, p_{i+1,j+1,k}}{p_{i+1,j,k}\, p_{i,j+1,k}} = \frac{p_{i.k}\, p_{.jk} \cdot p_{i+1,.,k}\, p_{.,j+1,k}}{p_{i+1,.,k}\, p_{.jk} \cdot p_{i.k}\, p_{.,j+1,k}} = 1.$$
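The closed-form fit $\hat{\mu}_{ijk} = y_{i.k}\, y_{.jk}/y_{..k}$ and the unit conditional odds ratio can be illustrated with a short sketch (hypothetical 2 x 2 x 2 counts, not data from the notes):

```python
# hypothetical counts y[i][j][k] for (R, S, T)
y = [[[20, 5], [10, 8]],
     [[12, 7], [ 6, 9]]]

def fit_cond_indep(y):
    """Fitted values mu_ijk = y_{i.k} * y_{.jk} / y_{..k} under R _|_ S | T."""
    I, J, K = 2, 2, 2
    yik = [[sum(y[i][j][k] for j in range(J)) for k in range(K)] for i in range(I)]
    yjk = [[sum(y[i][j][k] for i in range(I)) for k in range(K)] for j in range(J)]
    yk = [sum(y[i][j][k] for i in range(I) for j in range(J)) for k in range(K)]
    return [[[yik[i][k] * yjk[j][k] / yk[k] for k in range(K)]
             for j in range(J)] for i in range(I)]

mu = fit_cond_indep(y)
# the conditional odds ratio of the fitted table is exactly 1 at each level k
for k in range(2):
    print((mu[0][0][k] * mu[1][1][k]) / (mu[0][1][k] * mu[1][0][k]))
```

The product structure $p_{i.k}\, p_{.jk}/p_{..k}$ forces every conditional odds ratio of the fitted table to 1, matching the derivation above.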
b. $(\alpha\beta\gamma)_{ijk} = (\alpha\gamma)_{ik} = 0, \; \forall i, j, k \iff p_{ik|j} = p_{i|j}\, p_{k|j} \iff R \perp T \mid S$.
c. $(\alpha\beta\gamma)_{ijk} = (\beta\gamma)_{jk} = 0, \; \forall i, j, k \iff p_{jk|i} = p_{j|i}\, p_{k|i} \iff S \perp T \mid R$.

4. Block independence model

a. $(\alpha\beta\gamma)_{ijk} = (\alpha\beta)_{ij} = (\alpha\gamma)_{ik} = 0, \; \forall i, j, k \iff p_{ijk} = p_{i..}\, p_{.jk} \iff R \perp (S, T)$.

From (6), $(\alpha\beta\gamma)_{ijk} = (\alpha\beta)_{ij} = 0$ implies $p_{ijk} = \frac{p_{i.k}\, p_{.jk}}{p_{..k}}$. Similarly $(\alpha\beta\gamma)_{ijk} = (\alpha\gamma)_{ik} = 0$ implies
$$p_{ijk} = \frac{p_{ij.}\, p_{.jk}}{p_{.j.}}. \tag{7}$$
Equating the two expressions, $\frac{p_{i.k}}{p_{..k}} = \frac{p_{ij.}}{p_{.j.}}$ for all $j, k$. Multiplying by $p_{..k}$ and summing over $k$ gives $p_{i..} = \frac{p_{ij.}}{p_{.j.}} \sum_k p_{..k} = \frac{p_{ij.}}{p_{.j.}}$, i.e. $p_{ij.} = p_{i..}\, p_{.j.}$ since $\sum_k p_{..k} = 1$. Substituting into (7):
$$p_{ijk} = \frac{p_{ij.}\, p_{.jk}}{p_{.j.}} = \frac{p_{i..}\, p_{.j.}\, p_{.jk}}{p_{.j.}} = p_{i..}\, p_{.jk}.$$
We have $\hat{\mu}_{ijk} = y_{i..}\, y_{.jk}/n$, $n = y_{...}$.

b. $(\alpha\beta\gamma)_{ijk} = (\alpha\beta)_{ij} = (\beta\gamma)_{jk} = 0, \; \forall i, j, k \iff p_{ijk} = p_{.j.}\, p_{i.k} \iff S \perp (R, T)$.
c. $(\alpha\beta\gamma)_{ijk} = (\alpha\gamma)_{ik} = (\beta\gamma)_{jk} = 0, \; \forall i, j, k \iff p_{ijk} = p_{..k}\, p_{ij.} \iff T \perp (R, S)$.

5. Complete independence model

$(\alpha\beta\gamma)_{ijk} = (\alpha\beta)_{ij} = (\alpha\gamma)_{ik} = (\beta\gamma)_{jk} = 0, \; \forall i, j, k \iff R \perp S \perp T$, that is, all 3 variables are mutually independent: $H_0: p_{ijk} = p_{i..}\, p_{.j.}\, p_{..k}$ and $\hat{\mu}_{ijk} = y_{i..}\, y_{.j.}\, y_{..k}/n^2$.
Model              Constraint                                                                          df                    Estimation
0  [123]           none                                                                                0                     mu-hat_ijk = y_ijk
1  [12][13][23]    (abc)_ijk = 0                                                                       (I-1)(J-1)(K-1)       uses y_ij., y_i.k, y_.jk; no closed form
2  [12][13]        (abc)_ijk = (bc)_jk = 0                                                             I(J-1)(K-1)           mu-hat_ijk = y_ij. y_i.k / y_i..
3  [12][3]         (abc)_ijk = (bc)_jk = (ac)_ik = 0                                                   (K-1)(IJ-1)           mu-hat_ijk = y_..k y_ij. / n
4  [1][2][3]       (abc)_ijk = (bc)_jk = (ac)_ik = (ab)_ij = 0                                         IJK - I - J - K + 2   mu-hat_ijk = y_i.. y_.j. y_..k / n^2

(Here (abc), (ab), (ac), (bc) abbreviate the interaction terms $(\alpha\beta\gamma)$, $(\alpha\beta)$, $(\alpha\gamma)$, $(\beta\gamma)$.)
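The df column can be checked mechanically by counting cells and free parameters; the helper below is a sketch (hypothetical function name, models identified by their generators):

```python
def df_model(I, J, K, model):
    """Residual df = #cells - #free parameters for the hierarchical
    log-linear models above, on an I x J x K table."""
    cells = I * J * K
    # 1 intercept plus the main effects, always present
    params = 1 + (I - 1) + (J - 1) + (K - 1)
    if model == "[12][13][23]":
        params += (I-1)*(J-1) + (I-1)*(K-1) + (J-1)*(K-1)
    elif model == "[12][13]":
        params += (I-1)*(J-1) + (I-1)*(K-1)
    elif model == "[12][3]":
        params += (I-1)*(J-1)
    elif model == "[1][2][3]":
        pass
    else:
        raise ValueError(model)
    return cells - params

for m in ("[12][13][23]", "[12][13]", "[12][3]", "[1][2][3]"):
    print(m, df_model(2, 3, 4, m))
```

For any I, J, K the counts reproduce the closed forms in the table, e.g. $I(J-1)(K-1)$ for model 2.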
7.4 Collapsing table

Partial tables for any two variables, say R and S, at different levels of T are formed by the 2-way tables of counts for R and S at each fixed level of T. Marginal tables for R and S, say, are obtained by combining (summing) the partial tables over the K levels of T.

Notes:
1. Partial tables in (R, S) control for the effect of T.
2. Marginal tables in (R, S) ignore the effect of T.

Partial tables can exhibit quite different associations than marginal tables, and analysis based on marginal tables can be quite misleading. Suppose we want to draw inferences about the 2-factor interaction terms $(\alpha\beta)_{ij}$ in a 3-factor log-linear model. If we collapse the 3-way table over T, yielding the 2-dimensional marginal table $\{y_{ij.}\}$, would the two-factor interaction terms $(\alpha\beta)_{ij}$ for this marginal table be the same as $(\alpha\beta)_{ij}$ from the 3-factor log-linear model? No. No 3-factor interaction ($(\alpha\beta\gamma)_{ijk} = 0$) is not sufficient to allow the 3-way table to be collapsed into 2-way tables and analysed in that way. However, if we have conditional independence then we can collapse the table.

Recall the partial correlation between variables 1 and 2, controlling for 3:
$$\rho_{12.3} = \frac{\rho_{12} - \rho_{13}\rho_{23}}{\sqrt{(1 - \rho_{13}^2)(1 - \rho_{23}^2)}}.$$
If $\rho_{13} = 0$ or $\rho_{23} = 0$ or both, $\rho_{12.3}$ is a scalar multiple of $\rho_{12}$.
If the partial (conditional) odds ratios are equal to the marginal odds ratios for all levels of T, then the (R, S) association can be measured simply by collapsing over the T dimension.

Collapsibility holds if either $(\alpha\gamma)_{ik} = 0$, or $(\beta\gamma)_{jk} = 0$, or both; that is, $R \perp T \mid S$ or $S \perp T \mid R$ or $(R, S) \perp T$.

Lemma: If $R \perp T \mid S$ then we can estimate the R, S interactions from the R, S marginal table.

Proof: $R \perp T \mid S \Rightarrow p_{ijk} = \frac{p_{ij.}\, p_{.jk}}{p_{.j.}}$. Then
$$(\alpha\beta)_{ij} = \ln\left[\frac{p_{ij1}\, p_{111}}{p_{i11}\, p_{1j1}}\right] = \ln\left[\frac{p_{ij.}\, p_{.j1}}{p_{.j.}} \cdot \frac{p_{11.}\, p_{.11}}{p_{.1.}} \bigg/ \left(\frac{p_{i1.}\, p_{.11}}{p_{.1.}} \cdot \frac{p_{1j.}\, p_{.j1}}{p_{.j.}}\right)\right] = \ln\left[\frac{p_{ij.}\, p_{11.}}{p_{i1.}\, p_{1j.}}\right],$$
which is the same expression computed from the marginal table $\{p_{ij.}\}$.

Similar analyses can be developed when we have 2 response variables and a conditioning variable, e.g.
$$p_{ij|k} = \Pr(R = i, S = j \mid T = k), \quad i = 1, \dots, r; \; j = 1, \dots, s; \; k = 1, \dots, t.$$
Note: $\sum_{i,j} p_{ij|k} = 1$ for $k = 1, \dots, t$ in this case, as we draw samples at each level of T and so observe the conditional probabilities. To use log-linear models in these situations, we must include all main effect terms corresponding to the conditioning variable T. Then for each pair $(i, j)$, $1 \le i \le I - 1$, $1 \le j \le J - 1$, the odds ratio satisfies $r_{ij} = r_{ij|k}$, $k = 1, 2, \dots, K$.

Theorem: In a 3-way table, the interaction between 2 variables may be measured by collapsing the table (marginal table) over the third variable if the third variable is independent of at least one of the two variables exhibiting the interaction.
Note:

1. This condition means $R \perp T \mid S$ (R and T conditionally independent) or $S \perp T \mid R$ (S and T conditionally independent). When R and T are conditionally independent ($(\alpha\gamma)_{ik} = 0$),
$$\ln(\mu_{ijk}) = \mu + \alpha_i + \beta_j + \gamma_k + (\alpha\beta)_{ij} + (\beta\gamma)_{jk}.$$
Now fix T at the same level k:
$$\ln(r_{ij|k}) = \ln(\mu_{ijk}) + \ln(\mu_{i+1,j+1,k}) - \ln(\mu_{i+1,j,k}) - \ln(\mu_{i,j+1,k})$$
$$= (\alpha\beta)_{ij} + (\alpha\beta)_{i+1,j+1} - (\alpha\beta)_{i+1,j} - (\alpha\beta)_{i,j+1},$$
since all main effects and the $(\beta\gamma)$ terms cancel in the cross-difference; this is independent of k. Also
$$\mu_{ijk} = \exp(\mu + \alpha_i + \beta_j + (\alpha\beta)_{ij}) \exp(\gamma_k + (\beta\gamma)_{jk})$$
$$\Rightarrow \quad \mu_{ij.} = \exp(\mu + \alpha_i + \beta_j + (\alpha\beta)_{ij}) \sum_k \exp[\gamma_k + (\beta\gamma)_{jk}]$$
$$\Rightarrow \quad \ln(\mu_{ij.}) = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \ln\left\{\sum_k \exp[\gamma_k + (\beta\gamma)_{jk}]\right\}.$$
Thus
$$\ln(r_{ij.}) = \ln(\mu_{ij.}) + \ln(\mu_{i+1,j+1,.}) - \ln(\mu_{i+1,j,.}) - \ln(\mu_{i,j+1,.}) = (\alpha\beta)_{ij} + (\alpha\beta)_{i+1,j+1} - (\alpha\beta)_{i+1,j} - (\alpha\beta)_{i,j+1},$$
since the term $\ln\{\sum_k \exp[\gamma_k + (\beta\gamma)_{jk}]\}$ does not involve both subscripts $i$ and $j$ and so also cancels. Hence $r_{ij|k} = r_{ij.}$ as required.

2. The converse of the theorem is not always true: it is possible to construct a 3-way table with
(a) one dimension having at least 2 categories,
(b) $(\alpha\beta\gamma)_{ijk} = 0$, and
(c) the table collapsible over T, but with $(\alpha\beta)_{ij}$, $(\alpha\gamma)_{ik}$ and $(\beta\gamma)_{jk}$ all non-zero.
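The warning that marginal tables can mislead is easy to make concrete. In the hypothetical table below R and S are conditionally independent given T (both partial odds ratios are 1), but T is associated with both R and S, so the collapsibility condition of the theorem fails and collapsing over T manufactures an R, S association:

```python
# hypothetical counts y[i][j][k] for (R, S, T)
y = [[[80, 10], [20, 40]],   # R = 1: S x T block
     [[40, 20], [10, 80]]]   # R = 2

def odds_ratio(t):
    """Odds ratio of a 2x2 table t[i][j]."""
    return (t[0][0] * t[1][1]) / (t[0][1] * t[1][0])

# partial (conditional) odds ratios of R and S at each level of T
partial = [odds_ratio([[y[i][j][k] for j in range(2)] for i in range(2)])
           for k in range(2)]
# marginal odds ratio after collapsing over T
marg = odds_ratio([[y[i][j][0] + y[i][j][1] for j in range(2)] for i in range(2)])
print(partial, marg)
```

Here both partial odds ratios equal 1 while the marginal odds ratio is 2.25, so the collapsed table wrongly suggests an R, S interaction.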
7.5 Decomposable model

For the 3-way table, every model except the no-three-factor-interaction model ($(\alpha\beta\gamma)_{ijk} = 0$ only, i.e. [12][13][23]) has an interpretation in terms of marginal probabilities. Models that have such an interpretation are called decomposable: they are the models for which we can write $p_{ijk}$ in terms of marginal probabilities.

For a general log-linear model, the likelihood and log-likelihood are
$$L = \prod_i \prod_j \frac{e^{-\mu_{ij}}\, \mu_{ij}^{y_{ij}}}{y_{ij}!}, \quad l = \sum_i \sum_j \left(-\mu_{ij} - \ln y_{ij}!\right) + \sum_i \sum_j y_{ij} \ln \mu_{ij},$$
so the sufficient statistics for a term such as $\{(\beta\gamma\lambda)_{jkl}\}$ ([S, T, U] in a 4-way table) are the sums of $y_{ijkl}$ over all subscripts except $j, k, l$, i.e. $\{y_{.jkl}\}$, the S, T, U marginal totals. If we know the S, T, U marginal table then we can recover the S, T marginal table, the S, U marginal table, etc. The smallest set of marginal tables required to obtain all the sufficient statistics for a given model is called the set of sufficient configurations.

Example: The 4-way table with the R S T 3-factor interaction and all 2-factor interactions present has sufficient configurations [RST], [RU], [SU], [TU] (or [123], [14], [24], [34] in some books), since [RST] contains [RS], [ST], [RT].

If the model is decomposable then $p_{ijkl}$, say, is the product of the sufficient marginal probabilities divided by the marginal probabilities corresponding to any repeated sets of variables. For example, for the model with sufficient configurations [RS], [RT], [SU],
$$p_{ijkl} = \frac{p_{ij..}\, p_{i.k.}\, p_{.j.l}}{p_{i...}\, p_{.j..}}. \tag{8}$$
To determine whether a model is decomposable, use the following rules:
1. Any variables always appearing together are treated as one;
2. Delete any variable that is in every configuration;
3. Delete any variable that is in only one configuration;
4. Remove any redundant configuration;
5. Continue until either (a) there are only 2 configurations, which implies decomposable, or (b) we cannot proceed further, which implies indecomposable.

An alternative graphical approach is given in Darroch, Lauritzen and Speed (1980), Ann. Statist., 8, 522-539. If a model is decomposable then the cell frequencies can be estimated directly from the marginal tables. All models corresponding to independence or conditional independence are decomposable.

Example: The 4-way table with sufficient configurations [RST], [RU], [SU], [TU] is indecomposable: no variables always appear together, no variable appears in every configuration, no variable appears in only 1 configuration, and there is no redundant configuration.

Example: The 4-way table with sufficient configurations [RS] [RT] [SU].

Steps:
1. Remove U (or T), which appears in only 1 configuration: [RS] [RT] [S]
2. [S] is redundant: [RS] [RT], which implies decomposable.
The probabilities $p_{ijkl}$ are then given by (8).

Example: 5-way model with sufficient configurations [RSV] [RU] [TU].

Steps:
1. Remove S and V, which appear in only 1 configuration: [R] [RU] [TU]
2. [R] is redundant: [RU] [TU], which implies decomposable.
$$p_{ijklm} = \frac{p_{ij..m}\, p_{i..l.}\, p_{..kl.}}{p_{i....}\, p_{...l.}}$$
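The reduction rules can be sketched in code. The following is a minimal illustration of rules 2 to 5 (rule 1, merging variables that always appear together, is omitted; the function name and examples are hypothetical):

```python
def decomposable(configs):
    """Apply the reduction rules to a model's sufficient configurations,
    each given as a collection of variable names."""
    configs = [set(c) for c in configs]
    while True:
        # rule 4: drop redundant configurations (subsets of another)
        configs = [c for i, c in enumerate(configs)
                   if not any((c < d) or (c == d and j < i)
                              for j, d in enumerate(configs) if j != i)]
        if len(configs) <= 2:
            return True
        changed = False
        for v in set().union(*configs):
            count = sum(1 for c in configs if v in c)
            # rules 2 and 3: drop a variable in every, or in only one, configuration
            if count == len(configs) or count == 1:
                configs = [c - {v} for c in configs if c - {v}]
                changed = True
                break
        if not changed:
            return False   # rule 5(b): cannot proceed further

print(decomposable([["R", "S"], ["R", "T"], ["S", "U"]]))                 # True
print(decomposable([["R", "S", "T"], ["R", "U"], ["S", "U"], ["T", "U"]]))  # False
```

Running the two worked examples from the text reproduces their conclusions, and the 5-way model [RSV][RU][TU] also reduces to two configurations.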
7.6 Incomplete Contingency Table

There are two types of 0 entries in contingency tables: fixed zeros and sampling zeros.

1. Fixed zeros: These refer to impossible variable combinations, e.g. female prostate cancer patients. In these cases we remove the cells from the table and model the incomplete table.

Example: (2-way table) Let S be the set of cells remaining in an r x s table after excluding fixed zeros. Define the model
$$\ln p_{ij} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij}, \quad (i, j) \in S.$$
The model corresponding to $(\alpha\beta)_{ij} = 0$, $(i, j) \in S$, is the model of quasi-independence. If there are $a$ cells which are fixed zeros then the deviance corresponding to a model of quasi-independence has $(r - 1)(s - 1) - a$ df [1 df for $\mu$, $(r - 1)$ for the $\alpha_i$'s, $(s - 1)$ for the $\beta_j$'s, and there are $(rs - a)$ cells].

Example: (Health concern data) The following results were obtained from a survey of teenagers regarding their health concerns (Brunswick, 1971), cross-classified by sex, age and health concern. The two male cells for menstrual problems are fixed zeros, leaving 14 cells (counts recovered from the R code below; the two age groups are labelled 1 and 2):

                          Male              Female
Health Concerns       Age 1   Age 2     Age 1   Age 2
Sex, reproduction       4       2         9       7
How healthy I am       42       7        19      10
Nothing                57      20        71      31
Menstrual problems      -       -         4       8

> y=c(4,2,9,7,42,7,19,10,57,20,71,31,4,8)
> H=factor(c(1,1,1,1,2,2,2,2,3,3,3,3,4,4))
> A=factor(c(1,2,1,2,1,2,1,2,1,2,1,2,1,2))
> S=factor(c(1,1,2,2,1,1,2,2,1,1,2,2,2,2))
> d0=glm(y~H*A*S,family=poisson)$dev
> d1=glm(y~H*A+H*S+A*S,family=poisson)$dev
> d2=glm(y~H*A+H*S,family=poisson)$dev
> d3=glm(y~H*A+A*S,family=poisson)$dev
> d4=glm(y~H*S+A*S,family=poisson)$dev
> d5=glm(y~A*S+H,family=poisson)$dev
> d6=glm(y~H*S+A,family=poisson)$dev
> d7=glm(y~H*A+S,family=poisson)$dev
> d8=glm(y~H+S+A,family=poisson)$dev
> c(d0,d1,d2,d3,d4,d5,d6,d7,d8)

The model fits are

Model              Interpretation    df (complete table)
0. [HAS]           saturated         0
1. [HA][HS][AS]                      (4-1)(2-1)(2-1)
2. [HA][HS]        A ⊥ S | H         4(2-1)(2-1)
3. [HA][AS]        H ⊥ S | A         (4-1)2(2-1)
4. [HS][AS]        H ⊥ A | S         (4-1)(2-1)2
5. [AS][H]         (A,S) ⊥ H         (4-1)(2x2-1)
6. [HS][A]         (H,S) ⊥ A         (2-1)(4x2-1)
7. [HA][S]         (H,A) ⊥ S         (2-1)(4x2-1)
8. [H][A][S]       null

The fit of the models in which $(\alpha\gamma)_{ik} = 0$ (i.e. without [HS]) is quite poor. Models 1, 2 and 4 are acceptable at the 0.05 level of significance, and Model 2 is the chosen model. For the models containing [HS], the incomplete table removes 2 cells and makes one parameter inestimable, so the df adjustment is -2 + 1 = -1.

> glm2=glm(y~H*A + H*S,family=poisson) #the chosen model
> summary(glm2)
Call:
glm(formula = y ~ H * A + H * S, family = poisson)

Coefficients: (1 not defined because of singularities)
            Estimate Std. Error z value Pr(>|z|)
(Intercept)                                   **
H2                                           ***
H3                                           ***
H4
A2
S2                                             *
H2:A2
H3:A2
H4:A2
H2:S2                                         **
H3:S2
H4:S2             NA         NA      NA       NA
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance:     on 13 degrees of freedom
Residual deviance: on  3 degrees of freedom

Number of Fisher Scoring iterations: 4

The interpretation is that, given a particular health concern (other than menstrual problems), there is no relationship between the age and sex of individuals with that concern.
2. Sampling zeros: These occur when the probability associated with the cell is non-zero but the sample size was not large enough to observe an entry in the cell. If a small number of cells in a contingency table have 0 entries then R will still work, and it will produce maximum likelihood estimates for the expected cell frequencies under the given model.

Example:

          A1          A2
        B1   B2     B1   B2
C1       9    6      5    0
C2       8    5      7   16

Model: [AB] [AC] [BC]; note that the fitted value $\hat{y}_{221}$ for the zero cell is strictly positive.

> y1=c(9,6,5,0,8,5,7,16)
> a=factor(c(1,1,2,2,1,1,2,2))
> b=factor(c(1,2,1,2,1,2,1,2))
> c=factor(c(1,1,1,1,2,2,2,2))
> glm1=glm(y1~a*b+a*c+b*c,family=poisson)
> summary(glm1)

Call:
glm(formula = y1 ~ a * b + a * c + b * c, family = poisson)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)                                  ***
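R obtains these fitted values by iteratively reweighted least squares; the same [AB][AC][BC] fit can be sketched with iterative proportional fitting (a minimal illustration, not the notes' code):

```python
def two_way_margin(t, keep):
    """Two-way margin of a 2x2x2 table t[a][b][c], keeping the axes in `keep`."""
    m = [[0.0] * 2 for _ in range(2)]
    for a in range(2):
        for b in range(2):
            for c in range(2):
                idx = {"A": a, "B": b, "C": c}
                m[idx[keep[0]]][idx[keep[1]]] += t[a][b][c]
    return m

def ipf_abc(y, n_iter=500):
    """Iterative proportional fitting of the model [AB][AC][BC]."""
    mu = [[[1.0] * 2 for _ in range(2)] for _ in range(2)]
    for _ in range(n_iter):
        for keep in (("A", "B"), ("A", "C"), ("B", "C")):
            obs, fit = two_way_margin(y, keep), two_way_margin(mu, keep)
            for a in range(2):
                for b in range(2):
                    for c in range(2):
                        idx = {"A": a, "B": b, "C": c}
                        i, j = idx[keep[0]], idx[keep[1]]
                        mu[a][b][c] *= obs[i][j] / fit[i][j]
    return mu

# counts from the example above, y[a][b][c], with the sampling zero at (2,2,1)
y = [[[9, 8], [6, 5]], [[5, 7], [0, 16]]]
mu = ipf_abc(y)
print(round(mu[1][1][0], 3))  # fitted value for the zero cell: positive
```

Every two-way margin of the fitted table matches the observed margin, which is exactly the sufficiency condition for this model.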
a2                                             *
b2
c2
a2:b2
a2:c2                                          *
b2:c2
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance:     on 7 degrees of freedom
Residual deviance: on 1 degrees of freedom

Number of Fisher Scoring iterations: 6

> glm1$fitted

There are problems fitting a model when one of the marginal totals in one of the sufficient configurations is zero. If a marginal total is 0 then all cells contributing to that total must have 0 entries, and so we need to adjust the degrees of freedom when assessing the model fit. This adjustment is NOT performed in R. A general formula for calculating the df is

df = (N - N0) - (P - P0)
where

N  = number of cells in the table,
P  = number of parameters fitted by the model,
N0 = number of cells with 0 expected frequency,
P0 = number of parameters that cannot be estimated because of 0 marginal values.

Example:

          A1          A2
        B1   B2     B1   B2
C1       9    6      5    0
C2       8    5      7    0

Model: [AB] [AC] [BC],
$$\ln p_{ijk} = \mu + \alpha_i + \beta_j + \gamma_k + (\alpha\beta)_{ij} + (\alpha\gamma)_{ik} + (\beta\gamma)_{jk}, \quad i, j, k = 1, 2.$$
The deviance for this model is 0, nominally at 1 df. $Y_{22.} = 0$ implies $Y_{221} = Y_{222} = 0$, and
$$(\alpha\beta)_{22} = \ln\left(\frac{p_{221}\, p_{111}}{p_{211}\, p_{121}}\right)$$
is estimated to be a large negative number. For the degrees of freedom,
$$N = 8, \quad N_0 = 2, \quad P = 7, \quad P_0 = 1 \quad \Rightarrow \quad df = (8 - 2) - (7 - 1) = 0$$
instead of 1.

> y2=c(9,6,5,0,8,5,7,0)
> glm2=glm(y2~a*b+a*c+b*c,family=poisson)
> summary(glm2)

Call:
glm(formula = y2 ~ a * b + a * c + b * c, family = poisson)
Deviance Residuals: all of order 1e-05 or smaller

Coefficients:
            Estimate  Std. Error z value Pr(>|z|)
(Intercept) 2.197e+00                         ***
a2
b2
c2
a2:b2
a2:c2
b2:c2
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance:            on 7 degrees of freedom
Residual deviance: ~1e-10 on 1 degrees of freedom

Number of Fisher Scoring iterations: 21

> glm2$fitted

The calculations are
y-hat_111 = 9 = exp(2.197), y-hat_121 = 6, y-hat_211 = 5, y-hat_221 = 0,
y-hat_112 = 8, y-hat_122 = 5, y-hat_212 = 7, y-hat_222 = 0,
each fitted value being the exponential of the corresponding linear predictor; the huge negative coefficient driving y-hat_221 and y-hat_222 to 0 reflects the zero margin.
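The adjustment formula is trivial to encode; as a worked check of the example above (hypothetical helper name):

```python
def adjusted_df(N, P, N0, P0):
    """Adjusted residual df when a sufficient-configuration margin is zero:
    df = (N - N0) - (P - P0)."""
    return (N - N0) - (P - P0)

# the example above: 8 cells, 2 with zero expected frequency,
# 7 fitted parameters, 1 of them inestimable
print(adjusted_df(N=8, P=7, N0=2, P0=1))  # 0, instead of the unadjusted 1
```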
7.7 Marginal Homogeneity and Symmetry

Not all log-linear models for contingency tables are hierarchical; we may have constraints on the model parameters.

1. Symmetry

If we have a square r x r table, some natural hypotheses relate to symmetry, e.g. for data on the strength of left and right eyes:
$$\Pr(R = i, S = j) = \Pr(R = j, S = i), \quad \text{or} \quad p_{ij} = p_{ji}, \; i \ne j.$$
Under this model, the ML estimates for the expected cell frequencies are
$$\hat{\mu}_{ij} = \hat{\mu}_{ji} = \tfrac{1}{2}(y_{ij} + y_{ji}).$$
The test statistic to measure the model fit is
$$D = 2 \sum_{i=1}^{r} \sum_{j=1}^{r} y_{ij} \ln\left(\frac{2 y_{ij}}{y_{ij} + y_{ji}}\right) = 2 \sum_{i \ne j} y_{ij} \ln\left(\frac{2 y_{ij}}{y_{ij} + y_{ji}}\right),$$
i.e. the diagonal cells are irrelevant, as they give no information regarding the difference between the effects of left and right eyes. We then compare D with $\chi^2_{r(r-1)/2}$, as $\tfrac{1}{2}r(r-1)$ parameters have been constrained out of the $r(r-1)$ off-diagonal cells.

2. Marginal homogeneity

A weaker constraint on the table is marginal homogeneity: $p_{i.} = p_{.i}$, $i = 1, \dots, r$. Clearly symmetry implies marginal homogeneity, since summing $p_{ij} = p_{ji}$ over $j$ gives $p_{i.} = p_{.i}$.

3. Quasi-symmetry

A table has quasi-symmetry if $(\alpha\beta)_{ij} = (\alpha\beta)_{ji}$ for all $i, j$, i.e.
$$(\alpha\beta)_{ij} = \ln\left(\frac{p_{ij}\, p_{11}}{p_{i1}\, p_{1j}}\right) = \ln\left(\frac{p_{ji}\, p_{11}}{p_{j1}\, p_{1i}}\right) = (\alpha\beta)_{ji}, \quad \forall i, j. \tag{9}$$
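The symmetry-model deviance in part 1 has a closed form, so it is easy to compute directly; the sketch below uses hypothetical 3 x 3 counts (e.g. left-eye grade by right-eye grade):

```python
import math

def symmetry_deviance(y):
    """Deviance of the symmetry model mu_ij = mu_ji = (y_ij + y_ji)/2;
    diagonal cells contribute nothing. Compare with chi-square on r(r-1)/2 df."""
    r = len(y)
    return 2 * sum(y[i][j] * math.log(2 * y[i][j] / (y[i][j] + y[j][i]))
                   for i in range(r) for j in range(r)
                   if i != j and y[i][j] > 0)

y = [[50, 10, 4],
     [14, 40, 6],
     [ 2,  9, 30]]
print(round(symmetry_deviance(y), 3))
```

A table whose off-diagonal cells are already symmetric gives deviance exactly 0, and the diagonal entries never enter the statistic.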
If the table is symmetric, it also has quasi-symmetry. In fact, symmetry holds if and only if we have both marginal homogeneity and quasi-symmetry.

Lemma: For an r x r table, $p_{ij} = p_{ji} \; \forall i, j \iff p_{i.} = p_{.i}$ and $(\alpha\beta)_{ij} = (\alpha\beta)_{ji} \; \forall i, j$.

Proof: ($\Rightarrow$) is immediate: summing $p_{ij} = p_{ji}$ over $j$ gives $p_{i.} = p_{.i}$, so marginal homogeneity holds, and substituting $p_{ij} = p_{ji}$ into (9) gives $(\alpha\beta)_{ij} = (\alpha\beta)_{ji}$, so quasi-symmetry holds.

($\Leftarrow$) Assume $p_{i.} = p_{.i}$ and $(\alpha\beta)_{ij} = (\alpha\beta)_{ji}$ for all $i, j$. We have
$$\ln p_{ij} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij}, \quad \alpha_1 = \beta_1 = 0, \; (\alpha\beta)_{i1} = (\alpha\beta)_{1j} = 0.$$
We want to prove $p_{ij} = p_{ji}$, i.e. $\alpha_i + \beta_j = \alpha_j + \beta_i$. Consider $p_{i.} = p_{.i}$:
$$\sum_{j=1}^{r} \exp[\mu + \alpha_i + \beta_j + (\alpha\beta)_{ij}] = \sum_{j=1}^{r} \exp[\mu + \alpha_j + \beta_i + (\alpha\beta)_{ji}]$$
$$\Rightarrow \quad \exp(\alpha_i) = \exp(\beta_i) \sum_{j=1}^{r} \exp[\alpha_j + (\alpha\beta)_{ji}] \bigg/ \sum_{j=1}^{r} \exp[\beta_j + (\alpha\beta)_{ij}] = \exp(\beta_i)\, h_i. \tag{10}$$
Now
$$h_i = \frac{\displaystyle\sum_{j=1}^{r} \exp[\alpha_j + (\alpha\beta)_{ji}]}{\displaystyle\sum_{j=1}^{r} \exp[\beta_j + (\alpha\beta)_{ij}]} = \frac{\displaystyle\sum_{j=1}^{r} \exp(\beta_j)\, h_j\, \exp[(\alpha\beta)_{ji}]}{\displaystyle\sum_{j=1}^{r} \exp[\beta_j + (\alpha\beta)_{ij}]}$$
$$= \frac{\displaystyle\sum_{j=1}^{r} \exp[\beta_j + (\alpha\beta)_{ij}]\, h_j}{\displaystyle\sum_{j=1}^{r} \exp[\beta_j + (\alpha\beta)_{ij}]} \quad \text{using } \exp(\alpha_j) = \exp(\beta_j)\, h_j \text{ from (10) and } (\alpha\beta)_{ji} = (\alpha\beta)_{ij}$$
$$= \sum_{j=1}^{r} w_{ij}\, h_j, \quad \text{where} \quad w_{ij} = \frac{\exp[\beta_j + (\alpha\beta)_{ij}]}{\displaystyle\sum_{j=1}^{r} \exp[\beta_j + (\alpha\beta)_{ij}]}.$$
Now $0 < w_{ij} < 1$ and $\sum_{j=1}^{r} w_{ij} = 1$, so $W = (w_{ij})$ is a stochastic matrix and $h = Wh$, where $h = (h_1, \dots, h_r)^\top$.

W is a possible transition matrix of an irreducible r-state Markov chain, so $W^n \to W^\infty$ with $w_{ij}^{(n)} \to w_j^\infty$, the equilibrium distribution. From $h = W^n h = W^\infty h$ it follows that $h_i = \sum_j w_j^\infty h_j = h^*$, say, the same constant for every $i$. Thus $\alpha_i = \beta_i + \ln h^*$ and
$$\alpha_i + \beta_j = \beta_i + \ln h^* + \beta_j = \alpha_j + \beta_i \quad \text{since } \alpha_j = \beta_j + \ln h^*,$$
and so
$$p_{ij} = \exp(\mu + \alpha_i + \beta_j + (\alpha\beta)_{ij}) = \exp(\mu + \alpha_j + \beta_i + (\alpha\beta)_{ji}) = p_{ji}.$$
Example: (British election) The data give counts of votes for those who stayed in the same electorate and had the same number (3) of candidates at each of the 1964 and 1966 elections (counts recovered from the R code below; rows index the 1966 vote, columns the 1964 vote):

            Cons.  Labour  Liberal  Abstain
Cons.        157      4       17        9
Labour        16    159       13        9
Liberal       11      9       51        1
Abstain       18     12       11       15

The large diagonal entries indicate that an independence model will not be appropriate (its deviance, on 9 df, appears in the output below).

> y=c(157,4,17,9,16,159,13,9,11,9,51,1,18,12,11,15)
> a=factor(c(1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4))
> b=factor(c(1,2,3,4,1,2,3,4,1,2,3,4,1,2,3,4))
> summary(glm(y~a+b,family=poisson))

Call:
glm(formula = y ~ a + b, family = poisson)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)                              < 2e-16 ***
a2
a3                                               ***
a4                                               ***
b2
b3                                               ***
b4                                       < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance:     on 15 degrees of freedom
Residual deviance: on  9 degrees of freedom

Number of Fisher Scoring iterations: 6

By considering only the off-diagonal cells, we can test the hypothesis of quasi-independence, i.e. that if a panel member decided in 1966 to change his vote then his choice was unaffected by his 1964 vote. The deviance for this model, on 5 df, has p-value 0.031, while Pearson's $\chi^2$ has p-value 0.061. The lack of fit is thus marginally significant, so quasi-independence is at best marginally tenable.

> yr=c(4,17,9,16,13,9,11,9,1,18,12,11)   # excluding diagonals
> ar=factor(c(1,1,1,2,2,2,3,3,3,4,4,4))  # 1966
> br=factor(c(2,3,4,1,3,4,1,2,4,1,2,3))  # 1964
> summary(glm(yr~ar+br,family=poisson))  # quasi-independence model

Call:
glm(formula = yr ~ ar + br, family = poisson)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)                               <2e-16 ***
ar2
ar3
ar4
br2                                              *
br3
br4                                              **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance:     on 11 degrees of freedom
Residual deviance: on  5 degrees of freedom

Number of Fisher Scoring iterations: 4

> yh=glm(yr~ar+br,family=poisson)$fitted
> chi2=sum((yr-yh)^2/yh)
> chi2
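The quasi-independence fit has no closed form, but the off-diagonal fit $\mu_{ij} = a_i b_j$ can be sketched with iterative scaling (a minimal illustration using the election counts; not the notes' code):

```python
import math

# off-diagonal counts of the 4x4 election table (None marks excluded diagonals)
y = [[None, 4, 17, 9],
     [16, None, 13, 9],
     [11, 9, None, 1],
     [18, 12, 11, None]]

def fit_quasi_independence(y, n_iter=2000):
    """Fit mu_ij = a_i * b_j over the off-diagonal cells by iterative scaling."""
    r = len(y)
    a, b = [1.0] * r, [1.0] * r
    for _ in range(n_iter):
        for i in range(r):
            a[i] = (sum(y[i][j] for j in range(r) if j != i)
                    / sum(b[j] for j in range(r) if j != i))
        for j in range(r):
            b[j] = (sum(y[i][j] for i in range(r) if i != j)
                    / sum(a[i] for i in range(r) if i != j))
    return [[None if i == j else a[i] * b[j] for j in range(r)] for i in range(r)]

mu = fit_quasi_independence(y)
# deviance on (r-1)^2 - r = 5 df, matching the R fit above
D = 2 * sum(y[i][j] * math.log(y[i][j] / mu[i][j])
            for i in range(4) for j in range(4) if i != j)
print(round(D, 2))
```

The fitted values reproduce all off-diagonal row and column totals, which is the sufficiency condition for quasi-independence; the resulting deviance is the statistic compared with $\chi^2_5$ in the text.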
More informationLinear Regression Models P8111
Linear Regression Models P8111 Lecture 25 Jeff Goldsmith April 26, 2016 1 of 37 Today s Lecture Logistic regression / GLMs Model framework Interpretation Estimation 2 of 37 Linear regression Course started
More informationSTAT 526 Spring Midterm 1. Wednesday February 2, 2011
STAT 526 Spring 2011 Midterm 1 Wednesday February 2, 2011 Time: 2 hours Name (please print): Show all your work and calculations. Partial credit will be given for work that is partially correct. Points
More informationLogistic Regressions. Stat 430
Logistic Regressions Stat 430 Final Project Final Project is, again, team based You will decide on a project - only constraint is: you are supposed to use techniques for a solution that are related to
More information9 Generalized Linear Models
9 Generalized Linear Models The Generalized Linear Model (GLM) is a model which has been built to include a wide range of different models you already know, e.g. ANOVA and multiple linear regression models
More informationTesting Independence
Testing Independence Dipankar Bandyopadhyay Department of Biostatistics, Virginia Commonwealth University BIOS 625: Categorical Data & GLM 1/50 Testing Independence Previously, we looked at RR = OR = 1
More informationNATIONAL UNIVERSITY OF SINGAPORE EXAMINATION (SOLUTIONS) ST3241 Categorical Data Analysis. (Semester II: )
NATIONAL UNIVERSITY OF SINGAPORE EXAMINATION (SOLUTIONS) Categorical Data Analysis (Semester II: 2010 2011) April/May, 2011 Time Allowed : 2 Hours Matriculation No: Seat No: Grade Table Question 1 2 3
More informationUNIVERSITY OF TORONTO. Faculty of Arts and Science APRIL 2010 EXAMINATIONS STA 303 H1S / STA 1002 HS. Duration - 3 hours. Aids Allowed: Calculator
UNIVERSITY OF TORONTO Faculty of Arts and Science APRIL 2010 EXAMINATIONS STA 303 H1S / STA 1002 HS Duration - 3 hours Aids Allowed: Calculator LAST NAME: FIRST NAME: STUDENT NUMBER: There are 27 pages
More informationCorrespondence Analysis
Correspondence Analysis Q: when independence of a 2-way contingency table is rejected, how to know where the dependence is coming from? The interaction terms in a GLM contain dependence information; however,
More informationStatistics 3858 : Contingency Tables
Statistics 3858 : Contingency Tables 1 Introduction Before proceeding with this topic the student should review generalized likelihood ratios ΛX) for multinomial distributions, its relation to Pearson
More informationLogistic Regression. James H. Steiger. Department of Psychology and Human Development Vanderbilt University
Logistic Regression James H. Steiger Department of Psychology and Human Development Vanderbilt University James H. Steiger (Vanderbilt University) Logistic Regression 1 / 38 Logistic Regression 1 Introduction
More informationSolution to Tutorial 7
1. (a) We first fit the independence model ST3241 Categorical Data Analysis I Semester II, 2012-2013 Solution to Tutorial 7 log µ ij = λ + λ X i + λ Y j, i = 1, 2, j = 1, 2. The parameter estimates are
More informationBMI 541/699 Lecture 22
BMI 541/699 Lecture 22 Where we are: 1. Introduction and Experimental Design 2. Exploratory Data Analysis 3. Probability 4. T-based methods for continous variables 5. Power and sample size for t-based
More informationSection 4.6 Simple Linear Regression
Section 4.6 Simple Linear Regression Objectives ˆ Basic philosophy of SLR and the regression assumptions ˆ Point & interval estimation of the model parameters, and how to make predictions ˆ Point and interval
More informationA Generalized Linear Model for Binomial Response Data. Copyright c 2017 Dan Nettleton (Iowa State University) Statistics / 46
A Generalized Linear Model for Binomial Response Data Copyright c 2017 Dan Nettleton (Iowa State University) Statistics 510 1 / 46 Now suppose that instead of a Bernoulli response, we have a binomial response
More informationHomework 10 - Solution
STAT 526 - Spring 2011 Homework 10 - Solution Olga Vitek Each part of the problems 5 points 1. Faraway Ch. 4 problem 1 (page 93) : The dataset parstum contains cross-classified data on marijuana usage
More informationModeling Overdispersion
James H. Steiger Department of Psychology and Human Development Vanderbilt University Regression Modeling, 2009 1 Introduction 2 Introduction In this lecture we discuss the problem of overdispersion in
More informationLogistic Regression - problem 6.14
Logistic Regression - problem 6.14 Let x 1, x 2,, x m be given values of an input variable x and let Y 1,, Y m be independent binomial random variables whose distributions depend on the corresponding values
More informationNATIONAL UNIVERSITY OF SINGAPORE EXAMINATION. ST3241 Categorical Data Analysis. (Semester II: ) April/May, 2011 Time Allowed : 2 Hours
NATIONAL UNIVERSITY OF SINGAPORE EXAMINATION Categorical Data Analysis (Semester II: 2010 2011) April/May, 2011 Time Allowed : 2 Hours Matriculation No: Seat No: Grade Table Question 1 2 3 4 5 6 Full marks
More information4.5.1 The use of 2 log Λ when θ is scalar
4.5. ASYMPTOTIC FORM OF THE G.L.R.T. 97 4.5.1 The use of 2 log Λ when θ is scalar Suppose we wish to test the hypothesis NH : θ = θ where θ is a given value against the alternative AH : θ θ on the basis
More informationFall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.
1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n
More informationGeneralized Linear Models. Last time: Background & motivation for moving beyond linear
Generalized Linear Models Last time: Background & motivation for moving beyond linear regression - non-normal/non-linear cases, binary, categorical data Today s class: 1. Examples of count and ordered
More informationSTAT 526 Advanced Statistical Methodology
STAT 526 Advanced Statistical Methodology Fall 2017 Lecture Note 7 Contingency Table 0-0 Outline Introduction to Contingency Tables Testing Independence in Two-Way Contingency Tables Modeling Ordinal Associations
More informationDiscrete Multivariate Statistics
Discrete Multivariate Statistics Univariate Discrete Random variables Let X be a discrete random variable which, in this module, will be assumed to take a finite number of t different values which are
More information8 Nominal and Ordinal Logistic Regression
8 Nominal and Ordinal Logistic Regression 8.1 Introduction If the response variable is categorical, with more then two categories, then there are two options for generalized linear models. One relies on
More informationLecture 14: Introduction to Poisson Regression
Lecture 14: Introduction to Poisson Regression Ani Manichaikul amanicha@jhsph.edu 8 May 2007 1 / 52 Overview Modelling counts Contingency tables Poisson regression models 2 / 52 Modelling counts I Why
More informationModelling counts. Lecture 14: Introduction to Poisson Regression. Overview
Modelling counts I Lecture 14: Introduction to Poisson Regression Ani Manichaikul amanicha@jhsph.edu Why count data? Number of traffic accidents per day Mortality counts in a given neighborhood, per week
More informationReview. Timothy Hanson. Department of Statistics, University of South Carolina. Stat 770: Categorical Data Analysis
Review Timothy Hanson Department of Statistics, University of South Carolina Stat 770: Categorical Data Analysis 1 / 22 Chapter 1: background Nominal, ordinal, interval data. Distributions: Poisson, binomial,
More informationConfidence Intervals, Testing and ANOVA Summary
Confidence Intervals, Testing and ANOVA Summary 1 One Sample Tests 1.1 One Sample z test: Mean (σ known) Let X 1,, X n a r.s. from N(µ, σ) or n > 30. Let The test statistic is H 0 : µ = µ 0. z = x µ 0
More informationClassification. Chapter Introduction. 6.2 The Bayes classifier
Chapter 6 Classification 6.1 Introduction Often encountered in applications is the situation where the response variable Y takes values in a finite set of labels. For example, the response Y could encode
More informationSurvival Analysis I (CHL5209H)
Survival Analysis Dalla Lana School of Public Health University of Toronto olli.saarela@utoronto.ca January 7, 2015 31-1 Literature Clayton D & Hills M (1993): Statistical Models in Epidemiology. Not really
More informationSTA 450/4000 S: January
STA 450/4000 S: January 6 005 Notes Friday tutorial on R programming reminder office hours on - F; -4 R The book Modern Applied Statistics with S by Venables and Ripley is very useful. Make sure you have
More information12 Modelling Binomial Response Data
c 2005, Anthony C. Brooms Statistical Modelling and Data Analysis 12 Modelling Binomial Response Data 12.1 Examples of Binary Response Data Binary response data arise when an observation on an individual
More informationij i j m ij n ij m ij n i j Suppose we denote the row variable by X and the column variable by Y ; We can then re-write the above expression as
page1 Loglinear Models Loglinear models are a way to describe association and interaction patterns among categorical variables. They are commonly used to model cell counts in contingency tables. These
More informationCategorical Variables and Contingency Tables: Description and Inference
Categorical Variables and Contingency Tables: Description and Inference STAT 526 Professor Olga Vitek March 3, 2011 Reading: Agresti Ch. 1, 2 and 3 Faraway Ch. 4 3 Univariate Binomial and Multinomial Measurements
More informationGeneralized Linear Models. stat 557 Heike Hofmann
Generalized Linear Models stat 557 Heike Hofmann Outline Intro to GLM Exponential Family Likelihood Equations GLM for Binomial Response Generalized Linear Models Three components: random, systematic, link
More informationMatched Pair Data. Stat 557 Heike Hofmann
Matched Pair Data Stat 557 Heike Hofmann Outline Marginal Homogeneity - review Binary Response with covariates Ordinal response Symmetric Models Subject-specific vs Marginal Model conditional logistic
More informationUNIVERSITY OF TORONTO Faculty of Arts and Science
UNIVERSITY OF TORONTO Faculty of Arts and Science December 2013 Final Examination STA442H1F/2101HF Methods of Applied Statistics Jerry Brunner Duration - 3 hours Aids: Calculator Model(s): Any calculator
More informationStatistical Methods III Statistics 212. Problem Set 2 - Answer Key
Statistical Methods III Statistics 212 Problem Set 2 - Answer Key 1. (Analysis to be turned in and discussed on Tuesday, April 24th) The data for this problem are taken from long-term followup of 1423
More informationPoisson Regression. James H. Steiger. Department of Psychology and Human Development Vanderbilt University
Poisson Regression James H. Steiger Department of Psychology and Human Development Vanderbilt University James H. Steiger (Vanderbilt University) Poisson Regression 1 / 49 Poisson Regression 1 Introduction
More informationToday. HW 1: due February 4, pm. Aspects of Design CD Chapter 2. Continue with Chapter 2 of ELM. In the News:
Today HW 1: due February 4, 11.59 pm. Aspects of Design CD Chapter 2 Continue with Chapter 2 of ELM In the News: STA 2201: Applied Statistics II January 14, 2015 1/35 Recap: data on proportions data: y
More informationRegression models. Generalized linear models in R. Normal regression models are not always appropriate. Generalized linear models. Examples.
Regression models Generalized linear models in R Dr Peter K Dunn http://www.usq.edu.au Department of Mathematics and Computing University of Southern Queensland ASC, July 00 The usual linear regression
More informationChapter 22: Log-linear regression for Poisson counts
Chapter 22: Log-linear regression for Poisson counts Exposure to ionizing radiation is recognized as a cancer risk. In the United States, EPA sets guidelines specifying upper limits on the amount of exposure
More informationSummary of Chapters 7-9
Summary of Chapters 7-9 Chapter 7. Interval Estimation 7.2. Confidence Intervals for Difference of Two Means Let X 1,, X n and Y 1, Y 2,, Y m be two independent random samples of sizes n and m from two
More informationAdministration. Homework 1 on web page, due Feb 11 NSERC summer undergraduate award applications due Feb 5 Some helpful books
STA 44/04 Jan 6, 00 / 5 Administration Homework on web page, due Feb NSERC summer undergraduate award applications due Feb 5 Some helpful books STA 44/04 Jan 6, 00... administration / 5 STA 44/04 Jan 6,
More informationMultinomial Logistic Regression Models
Stat 544, Lecture 19 1 Multinomial Logistic Regression Models Polytomous responses. Logistic regression can be extended to handle responses that are polytomous, i.e. taking r>2 categories. (Note: The word
More informationSTATISTICS SYLLABUS UNIT I
STATISTICS SYLLABUS UNIT I (Probability Theory) Definition Classical and axiomatic approaches.laws of total and compound probability, conditional probability, Bayes Theorem. Random variable and its distribution
More informationFigure 36: Respiratory infection versus time for the first 49 children.
y BINARY DATA MODELS We devote an entire chapter to binary data since such data are challenging, both in terms of modeling the dependence, and parameter interpretation. We again consider mixed effects
More informationIntroduction to General and Generalized Linear Models
Introduction to General and Generalized Linear Models Generalized Linear Models - part II Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs.
More informationA Handbook of Statistical Analyses Using R. Brian S. Everitt and Torsten Hothorn
A Handbook of Statistical Analyses Using R Brian S. Everitt and Torsten Hothorn CHAPTER 6 Logistic Regression and Generalised Linear Models: Blood Screening, Women s Role in Society, and Colonic Polyps
More informationLogistic Regression 21/05
Logistic Regression 21/05 Recall that we are trying to solve a classification problem in which features x i can be continuous or discrete (coded as 0/1) and the response y is discrete (0/1). Logistic regression
More informationTwo Hours. Mathematical formula books and statistical tables are to be provided THE UNIVERSITY OF MANCHESTER. 26 May :00 16:00
Two Hours MATH38052 Mathematical formula books and statistical tables are to be provided THE UNIVERSITY OF MANCHESTER GENERALISED LINEAR MODELS 26 May 2016 14:00 16:00 Answer ALL TWO questions in Section
More informationIntroduction to the Generalized Linear Model: Logistic regression and Poisson regression
Introduction to the Generalized Linear Model: Logistic regression and Poisson regression Statistical modelling: Theory and practice Gilles Guillot gigu@dtu.dk November 4, 2013 Gilles Guillot (gigu@dtu.dk)
More informationRegression Methods for Survey Data
Regression Methods for Survey Data Professor Ron Fricker! Naval Postgraduate School! Monterey, California! 3/26/13 Reading:! Lohr chapter 11! 1 Goals for this Lecture! Linear regression! Review of linear
More informationSTAT 526 Advanced Statistical Methodology
STAT 526 Advanced Statistical Methodology Fall 2017 Lecture Note 10 Analyzing Clustered/Repeated Categorical Data 0-0 Outline Clustered/Repeated Categorical Data Generalized Linear Mixed Models Generalized
More informationSTA102 Class Notes Chapter Logistic Regression
STA0 Class Notes Chapter 0 0. Logistic Regression We continue to study the relationship between a response variable and one or more eplanatory variables. For SLR and MLR (Chapters 8 and 9), our response
More informationAnalysis of data in square contingency tables
Analysis of data in square contingency tables Iva Pecáková Let s suppose two dependent samples: the response of the nth subject in the second sample relates to the response of the nth subject in the first
More informationStatistics of Contingency Tables - Extension to I x J. stat 557 Heike Hofmann
Statistics of Contingency Tables - Extension to I x J stat 557 Heike Hofmann Outline Testing Independence Local Odds Ratios Concordance & Discordance Intro to GLMs Simpson s paradox Simpson s paradox:
More informationsimple if it completely specifies the density of x
3. Hypothesis Testing Pure significance tests Data x = (x 1,..., x n ) from f(x, θ) Hypothesis H 0 : restricts f(x, θ) Are the data consistent with H 0? H 0 is called the null hypothesis simple if it completely
More informationGeneralized Linear Mixed-Effects Models. Copyright c 2015 Dan Nettleton (Iowa State University) Statistics / 58
Generalized Linear Mixed-Effects Models Copyright c 2015 Dan Nettleton (Iowa State University) Statistics 510 1 / 58 Reconsideration of the Plant Fungus Example Consider again the experiment designed to
More informationLinear Regression. Data Model. β, σ 2. Process Model. ,V β. ,s 2. s 1. Parameter Model
Regression: Part II Linear Regression y~n X, 2 X Y Data Model β, σ 2 Process Model Β 0,V β s 1,s 2 Parameter Model Assumptions of Linear Model Homoskedasticity No error in X variables Error in Y variables
More informationˆπ(x) = exp(ˆα + ˆβ T x) 1 + exp(ˆα + ˆβ T.
Exam 3 Review Suppose that X i = x =(x 1,, x k ) T is observed and that Y i X i = x i independent Binomial(n i,π(x i )) for i =1,, N where ˆπ(x) = exp(ˆα + ˆβ T x) 1 + exp(ˆα + ˆβ T x) This is called the
More information1. Hypothesis testing through analysis of deviance. 3. Model & variable selection - stepwise aproaches
Sta 216, Lecture 4 Last Time: Logistic regression example, existence/uniqueness of MLEs Today s Class: 1. Hypothesis testing through analysis of deviance 2. Standard errors & confidence intervals 3. Model
More informationR Hints for Chapter 10
R Hints for Chapter 10 The multiple logistic regression model assumes that the success probability p for a binomial random variable depends on independent variables or design variables x 1, x 2,, x k.
More information7/28/15. Review Homework. Overview. Lecture 6: Logistic Regression Analysis
Lecture 6: Logistic Regression Analysis Christopher S. Hollenbeak, PhD Jane R. Schubart, PhD The Outcomes Research Toolbox Review Homework 2 Overview Logistic regression model conceptually Logistic regression
More informationExam Applied Statistical Regression. Good Luck!
Dr. M. Dettling Summer 2011 Exam Applied Statistical Regression Approved: Tables: Note: Any written material, calculator (without communication facility). Attached. All tests have to be done at the 5%-level.
More informationSTAT 525 Fall Final exam. Tuesday December 14, 2010
STAT 525 Fall 2010 Final exam Tuesday December 14, 2010 Time: 2 hours Name (please print): Show all your work and calculations. Partial credit will be given for work that is partially correct. Points will
More informationMinimal basis for connected Markov chain over 3 3 K contingency tables with fixed two-dimensional marginals. Satoshi AOKI and Akimichi TAKEMURA
Minimal basis for connected Markov chain over 3 3 K contingency tables with fixed two-dimensional marginals Satoshi AOKI and Akimichi TAKEMURA Graduate School of Information Science and Technology University
More informationGeneralized logit models for nominal multinomial responses. Local odds ratios
Generalized logit models for nominal multinomial responses Categorical Data Analysis, Summer 2015 1/17 Local odds ratios Y 1 2 3 4 1 π 11 π 12 π 13 π 14 π 1+ X 2 π 21 π 22 π 23 π 24 π 2+ 3 π 31 π 32 π
More informationSTAT 510 Final Exam Spring 2015
STAT 510 Final Exam Spring 2015 Instructions: The is a closed-notes, closed-book exam No calculator or electronic device of any kind may be used Use nothing but a pen or pencil Please write your name and
More informationSTAT 526 Spring Final Exam. Thursday May 5, 2011
STAT 526 Spring 2011 Final Exam Thursday May 5, 2011 Time: 2 hours Name (please print): Show all your work and calculations. Partial credit will be given for work that is partially correct. Points will
More information13.1 Categorical Data and the Multinomial Experiment
Chapter 13 Categorical Data Analysis 13.1 Categorical Data and the Multinomial Experiment Recall Variable: (numerical) variable (i.e. # of students, temperature, height,). (non-numerical, categorical)
More informationGeneralized linear models for binary data. A better graphical exploratory data analysis. The simple linear logistic regression model
Stat 3302 (Spring 2017) Peter F. Craigmile Simple linear logistic regression (part 1) [Dobson and Barnett, 2008, Sections 7.1 7.3] Generalized linear models for binary data Beetles dose-response example
More informationPoisson Regression. The Training Data
The Training Data Poisson Regression Office workers at a large insurance company are randomly assigned to one of 3 computer use training programmes, and their number of calls to IT support during the following
More informationThe Logit Model: Estimation, Testing and Interpretation
The Logit Model: Estimation, Testing and Interpretation Herman J. Bierens October 25, 2008 1 Introduction to maximum likelihood estimation 1.1 The likelihood function Consider a random sample Y 1,...,
More informationSTAT 135 Lab 11 Tests for Categorical Data (Fisher s Exact test, χ 2 tests for Homogeneity and Independence) and Linear Regression
STAT 135 Lab 11 Tests for Categorical Data (Fisher s Exact test, χ 2 tests for Homogeneity and Independence) and Linear Regression Rebecca Barter April 20, 2015 Fisher s Exact Test Fisher s Exact Test
More informationChecking the Poisson assumption in the Poisson generalized linear model
Checking the Poisson assumption in the Poisson generalized linear model The Poisson regression model is a generalized linear model (glm) satisfying the following assumptions: The responses y i are independent
More informationLinear Methods for Prediction
Chapter 5 Linear Methods for Prediction 5.1 Introduction We now revisit the classification problem and focus on linear methods. Since our prediction Ĝ(x) will always take values in the discrete set G we
More informationST3241 Categorical Data Analysis I Multicategory Logit Models. Logit Models For Nominal Responses
ST3241 Categorical Data Analysis I Multicategory Logit Models Logit Models For Nominal Responses 1 Models For Nominal Responses Y is nominal with J categories. Let {π 1,, π J } denote the response probabilities
More informationExperimental Design and Statistical Methods. Workshop LOGISTIC REGRESSION. Jesús Piedrafita Arilla.
Experimental Design and Statistical Methods Workshop LOGISTIC REGRESSION Jesús Piedrafita Arilla jesus.piedrafita@uab.cat Departament de Ciència Animal i dels Aliments Items Logistic regression model Logit
More informationIntroduction to General and Generalized Linear Models
Introduction to General and Generalized Linear Models Generalized Linear Models - part III Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs.
More informationLecture 9 STK3100/4100
Lecture 9 STK3100/4100 27. October 2014 Plan for lecture: 1. Linear mixed models cont. Models accounting for time dependencies (Ch. 6.1) 2. Generalized linear mixed models (GLMM, Ch. 13.1-13.3) Examples
More informationST3241 Categorical Data Analysis I Generalized Linear Models. Introduction and Some Examples
ST3241 Categorical Data Analysis I Generalized Linear Models Introduction and Some Examples 1 Introduction We have discussed methods for analyzing associations in two-way and three-way tables. Now we will
More informationPoisson Regression. Gelman & Hill Chapter 6. February 6, 2017
Poisson Regression Gelman & Hill Chapter 6 February 6, 2017 Military Coups Background: Sub-Sahara Africa has experienced a high proportion of regime changes due to military takeover of governments for
More informationCohen s s Kappa and Log-linear Models
Cohen s s Kappa and Log-linear Models HRP 261 03/03/03 10-11 11 am 1. Cohen s Kappa Actual agreement = sum of the proportions found on the diagonals. π ii Cohen: Compare the actual agreement with the chance
More information,..., θ(2),..., θ(n)
Likelihoods for Multivariate Binary Data Log-Linear Model We have 2 n 1 distinct probabilities, but we wish to consider formulations that allow more parsimonious descriptions as a function of covariates.
More informationRegression and Statistical Inference
Regression and Statistical Inference Walid Mnif wmnif@uwo.ca Department of Applied Mathematics The University of Western Ontario, London, Canada 1 Elements of Probability 2 Elements of Probability CDF&PDF
More informationSTAT 705: Analysis of Contingency Tables
STAT 705: Analysis of Contingency Tables Timothy Hanson Department of Statistics, University of South Carolina Stat 705: Analysis of Contingency Tables 1 / 45 Outline of Part I: models and parameters Basic
More informationMcGill University. Faculty of Science. Department of Mathematics and Statistics. Statistics Part A Comprehensive Exam Methodology Paper
Student Name: ID: McGill University Faculty of Science Department of Mathematics and Statistics Statistics Part A Comprehensive Exam Methodology Paper Date: Friday, May 13, 2016 Time: 13:00 17:00 Instructions
More informationModel Estimation Example
Ronald H. Heck 1 EDEP 606: Multivariate Methods (S2013) April 7, 2013 Model Estimation Example As we have moved through the course this semester, we have encountered the concept of model estimation. Discussions
More informationStat 5102 Final Exam May 14, 2015
Stat 5102 Final Exam May 14, 2015 Name Student ID The exam is closed book and closed notes. You may use three 8 1 11 2 sheets of paper with formulas, etc. You may also use the handouts on brand name distributions
More informationLecture 2: Poisson and logistic regression
Dankmar Böhning Southampton Statistical Sciences Research Institute University of Southampton, UK S 3 RI, 11-12 December 2014 introduction to Poisson regression application to the BELCAP study introduction
More information