MSH3 Generalized linear model


Contents

5 Logit Models for Binary Data
  5.1 The Bernoulli and binomial distributions
      5.1.1 Mean, variance and higher order moments
      5.1.2 Normal limit
      5.1.3 Poisson limit
  5.2 Link functions
  5.3 Contingency Tables
  5.4 Maximum Likelihood Estimation
  5.5 Goodness of Fit Statistics
  5.6 Exact tests for logistic models
      5.6.1 Fisher's Exact Test for binomial data
      5.6.2 Exact test for binary data
  5.7 Regression Diagnostics

5 Logit Models for Binary Data

5.1 The Bernoulli and binomial distributions

5.1.1 Mean, variance and higher order moments

Define
  Y_i = 1 if the i-th subject has the attribute of interest, and Y_i = 0 otherwise.
Then Y_i follows a Bernoulli distribution with parameter π_i and the pmf is
  f(y_i) = Pr(Y_i = y_i) = π_i^{y_i} (1 − π_i)^{1−y_i}.
The expected value and variance of Y_i are
  E(Y_i) = μ_i = π_i and Var(Y_i) = σ_i² = π_i(1 − π_i).
Note: the variance is not constant as in a linear model, but depends on the probability π_i.

Suppose that
1. Y_{ij}, the j-th unit in group i, follows a Bernoulli distribution,
2. the n_i observations in each group are independent, and
3. each has the same probability π_i of having the attribute of interest.
Then Y_i = ∑_{j=1}^{n_i} Y_{ij}, the number out of n_i having the attribute, follows a binomial distribution with parameters π_i and n_i, that is,
  Y_i ~ B(n_i, π_i).
The pmf of Y_i is given by
  Pr(Y_i = y_i) = C(n_i, y_i) π_i^{y_i} (1 − π_i)^{n_i − y_i}
for y_i = 0, 1, ..., n_i.
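As a quick numerical sanity check, the sketch below simulates sums of independent Bernoulli draws and compares the empirical distribution with the B(n, π) pmf; the values n = 10 and π = 0.3 are illustrative only.

## Sketch: a sum of n i.i.d. Bernoulli(pi) variables has the B(n, pi) pmf.
set.seed(1)
n <- 10; p <- 0.3
y <- replicate(1e5, sum(rbinom(n, size = 1, prob = p)))
round(rbind(simulated = table(factor(y, levels = 0:n)) / 1e5,
            exact     = dbinom(0:n, n, p)), 3)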

The mean and variance of Y_i are
  E(Y_i) = μ_i = n_i π_i and Var(Y_i) = σ_i² = n_i π_i (1 − π_i).

Dropping the subscript i, the moment generating function of Y = Y_1 + ... + Y_n, where the Y_j ~ Ber(π) are independent, is
  M_Y(t) = E{exp[t(Y_1 + ... + Y_n)]} = ∏_{j=1}^n E[exp(tY_j)]
         = ∏_{j=1}^n [(1 − π) e^{0t} + π e^{1t}] = [1 − π + π exp(t)]^n.
The moments are E(Y^i) = M_Y^{(i)}(0) = ∂^i M_Y(t)/∂t^i |_{t=0}. We have
  M′_Y(t) = nπ e^t [1 − π + π e^t]^{n−1}
  M″_Y(t) = nπ e^t [1 − π + π e^t]^{n−2} [1 − π + π e^t + (n − 1)π e^t].
Hence
  E(Y)   = M′_Y(0) = nπ,
  E(Y²)  = M″_Y(0) = nπ[1 + (n − 1)π],
  Var(Y) = E(Y²) − E(Y)² = nπ[1 + (n − 1)π] − n²π² = nπ(1 − π).

The cumulant generating function is
  K_Y(t) = ln E[exp(tY)] = ln M_Y(t) = n ln[1 − π + π exp(t)],
where κ_0 = K_Y(0) = n ln 1 = 0 and κ_i = K_Y^{(i)}(0) = ∂^i K_Y(t)/∂t^i |_{t=0}. We have
  K′_Y(t) = nπe^t / (1 − π + πe^t)
  K″_Y(t) = n[(1 − π + πe^t)πe^t − π²e^{2t}] / (1 − π + πe^t)².

The first four cumulants, κ_1 = μ (mean), κ_2 = σ² (variance), κ_3 (skewness) and κ_4 (kurtosis), are
  κ_1 = K′_Y(0)  = E(Y) = nπe⁰/(1 − π + πe⁰) = nπ,
  κ_2 = K″_Y(0)  = E{[Y − E(Y)]²} = n[(1 − π + πe⁰)πe⁰ − π²e⁰]/(1 − π + πe⁰)² = nπ(1 − π),
  κ_3 = K‴_Y(0)  = E{[Y − E(Y)]³} = nπ(1 − π)(1 − 2π),
  κ_4 = K⁗_Y(0) = E{[Y − E(Y)]⁴} − 3{E[(Y − E(Y))²]}² = nπ(1 − π)[1 − 6π(1 − π)].

By the Taylor expansion
  f(x) = f(0) + f′(0)x + f″(0)x²/2! + f‴(0)x³/3! + ... + f^{(n)}(0)xⁿ/n! + ...,
we have
  K_Y(t) = ∑_{i=1}^∞ κ_i t^i/i! = μt + σ²t²/2 + ...   (since κ_0 = 0).
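A minimal simulation check of these cumulant formulas (the values n = 20 and π = 0.3 are illustrative only): the sample central moments should agree with κ_2, κ_3 and κ_4 up to Monte Carlo error.

## Sketch: compare simulated central moments of B(n, pi) with the
## cumulant formulas kappa_2, kappa_3, kappa_4 derived above.
set.seed(1)
n <- 20; p <- 0.3
y <- rbinom(1e6, size = n, prob = p)
c(k2 = var(y),            theory = n*p*(1-p))
c(k3 = mean((y - n*p)^3), theory = n*p*(1-p)*(1-2*p))
c(k4 = mean((y - n*p)^4) - 3*(n*p*(1-p))^2,
                          theory = n*p*(1-p)*(1-6*p*(1-p)))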

5.1.2 Normal limit

For the normal limit,
  Pr(Y ≥ y) ≈ 1 − Φ(z⁻) and Pr(Y ≤ y) ≈ Φ(z⁺), where z^± = (y − nπ ± 1/2)/√(nπ(1 − π)).
The error is asymptotically O(n^{−1/2}), and is O(n^{−1}) when π = 1/2. The rate of convergence is faster when π = 1/2, for which κ_3 = 0.

The moment and cumulant generating functions for a normal r.v. Z are
  M_Z(t) = exp(μt + σ²t²/2) and K_Z(t) = μt + σ²t²/2.
Since K_Z(0) = 0, K′_Z(0) = μ and K″_Z(0) = σ², the cumulants of the standardized variable tend to κ = (0, 0, 1, 0, 0, ...), since μ = 0 and σ² = 1. The normal approximation is good when nπ(1 − π) ≥ 2 and z⁻, z⁺ are not too far into the tails.

5.1.3 Poisson limit

When π → 0 and n → ∞ such that μ = nπ remains fixed, Pr(Y ≤ y) → F_P(y), where F_P(·) is the cdf of the Poisson distribution, since the cumulant generating function of Y tends to
  K_Y(t) = (μ/π) ln{1 + π[exp(t) − 1]}
         = μ(e^t − 1) ln{[1 + π(e^t − 1)]^{1/[π(e^t − 1)]}}
         → μ(e^t − 1) ln e = μ[exp(t) − 1] as π → 0,
which is the cumulant generating function of a Poisson r.v. with mean μ. The order of the error is O(n^{−1}). Note that lim_{n→∞} (1 + x/n)^n = e^x.
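The sketch below puts the two limits side by side for one illustrative case (n = 50, π = 0.1): the continuity-corrected normal approximation Φ(z⁺) and the Poisson(nπ) cdf against the exact binomial cdf.

## Sketch: normal (with continuity correction) and Poisson
## approximations to Pr(Y <= y) for Y ~ B(n, pi).
n <- 50; p <- 0.1; y <- 0:12
exact   <- pbinom(y, n, p)
normal  <- pnorm((y - n*p + 0.5) / sqrt(n*p*(1-p)))  # Phi(z+)
poisson <- ppois(y, lambda = n*p)                    # mu = n*pi fixed
round(cbind(y, exact, normal, poisson), 4)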

5.2 Link functions

The logit (log of the odds) is the canonical link for binomial data; it maps the probabilities π_i to a linear function of the covariates:
  η_i = x_i′β = logit(π_i) = ln[π_i/(1 − π_i)] ∈ R,
where β is a vector of regression coefficients. The logit maps probabilities from the range (0, 1) to the entire real line R. (A figure at this point illustrates three such transformations, or link functions, all continuous and increasing.) By solving for π_i, the inverse transformation is
  π_i = logit^{−1}(η_i) = exp(x_i′β) / [1 + exp(x_i′β)].
From
  ∂π_i/∂x_{ij} = β_j π_i(1 − π_i),
a small change in x_{ij} has a larger effect if π_i is near 0.5, where all the curves are steepest. Besides the logit link, the link function can be any transformation that maps probabilities onto the real line; in particular, any cdf F(·) gives
  π_i = F(η_i), η_i = F^{−1}(π_i) for −∞ < η_i < ∞.

This corresponds to a latent continuous r.v. Y_i* which follows one of these distributions, with
  Y_i = 1 if Y_i* > θ, and Y_i = 0 if Y_i* ≤ θ.
Without loss of generality, we set θ = 0 for the standardized Y_i*, so that
  π_i = Pr(Y_i = 1) = Pr(Y_i* > 0).
Popular choices of cdf are the logistic (logit), normal (probit) and extreme value (complementary log-log) distributions.

1. Logistic distribution: the logistic distribution has pdf and cdf
  f(y_i | μ_i, σ) = exp[(y_i − μ_i)/σ] / {σ [1 + exp((y_i − μ_i)/σ)]²},
  F(y_i | μ_i, σ) = exp[(y_i − μ_i)/σ] / [1 + exp((y_i − μ_i)/σ)] = π_i.
The mean and variance are μ_i and π²σ²/3 (here π is the mathematical constant). Setting μ = 0, σ = 1 and y_i = η_i,
  F(η_i) = π_i, or η_i = F^{−1}(π_i) = logit(π_i).

2. Normal distribution: with cdf F(·) = Φ(·), the link function g = Φ^{−1}, defined by
  π_i = Φ(η_i) = Φ(x_i′β) or η_i = x_i′β = Φ^{−1}(π_i),
is called the probit link.

Both the logistic and probit links are symmetric:
  g(π) = −g(1 − π),

since
  ln[π/(1 − π)] = −ln[(1 − π)/π] = −ln[(1 − π)/(1 − (1 − π))] = −g(1 − π)
and Φ^{−1}(π) = −Φ^{−1}(1 − π); hence they are popular links. They give similar results but have different variances. In the logit model, by using a standard logistic error term, we have effectively set sd(Y*) = σ = π/√3. Thus the coefficients in a logit model should be standardized by dividing by π/√3 before comparing them with probit coefficients. In the links plot, the logit curve shown is the logit divided by π/√3.

3. Log-Weibull (extreme value) distribution: with this cdf, the link function g = F^{−1}, defined by
  π_i = F(η_i) = 1 − e^{−e^{η_i}} or η_i = F^{−1}(π_i) = ln[−ln(1 − π_i)],
is called the complementary log-log. The distribution is not symmetric. For small values of π_i, the link function is close to the logit. As π_i increases, the link function approaches infinity more slowly than either the probit or the logit. To compare results with a probit analysis, one should standardize them by dividing by sd(Y*) = σ = π/√6; to compare with a logit analysis, account for the further factor of √2 between the logistic and extreme-value standard deviations. The complementary log-log link also has a direct interpretation in terms of hazard ratios and has practical applications in hazard models.

In summary,
  Logit:      η = g(π) = ln[π/(1 − π)],     π = g^{−1}(η) = exp(η)/[1 + exp(η)]
  Probit:     η = g(π) = Φ^{−1}(π),         π = g^{−1}(η) = Φ(η)
  C. log-log: η = g(π) = ln[−ln(1 − π)],    π = g^{−1}(η) = 1 − exp[−exp(η)].
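These three links and their inverses are available directly in R (qlogis/plogis and qnorm/pnorm; the complementary log-log is easy to write out). A small sketch, using among others the probability 0.3155 from the example that follows:

## Sketch: the three links g and their inverses g^{-1}.
p <- c(0.10, 0.3155, 0.50, 0.90)
logit   <- qlogis(p)         # ln(p/(1-p)); plogis() inverts it
probit  <- qnorm(p)          # Phi^{-1}(p);  pnorm() inverts it
cloglog <- log(-log(1 - p))  # 1 - exp(-exp(eta)) inverts it
round(rbind(p, logit, probit, cloglog), 4)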

Example: (Contraceptive use) There are 507 users among 1607 women, so the estimated probability is
  π̂ = 507/1607 = 0.3155.
Using the logit link, the odds are
  π̂/(1 − π̂) = 0.3155/0.6845 = 0.4609 to one,
and the logit is
  η̂ = ln(0.4609) = −0.7746.
Using the probit link,
  π̂ = Φ(η̂) = 0.3155, so η̂ = Φ^{−1}(0.3155) = −0.4803.
Using the cloglog link,
  π̂ = 1 − exp(−exp(η̂)) = 0.3155, so η̂ = ln[−ln(1 − 0.3155)] = −0.9701.

> y=c(rep(1,507),rep(0,1100))
> prop.sam=507/1607
> prop.sam
[1] 0.3154947
> glm1=glm(y~1, family=binomial(link=logit))
> summary(glm1)

Call:
glm(formula = y ~ 1, family = binomial(link = logit))

Deviance Residuals:
    Min      1Q  Median      3Q     Max
-0.8707 -0.8707 -0.8707  1.5190  1.5190

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.77455    0.05368  -14.43   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 2003.7  on 1606  degrees of freedom
Residual deviance: 2003.7  on 1606  degrees of freedom
AIC: 2005.7

Number of Fisher Scoring iterations: 4

> eta1=glm1$coeff
> eta1s=eta1/(pi/sqrt(3)) #standardize
> pi1=exp(eta1)/(1+exp(eta1))
> glm2=glm(y~1, family=binomial(link=probit))
> summary(glm2)

Call:
glm(formula = y ~ 1, family = binomial(link = probit))

Deviance Residuals:
    Min      1Q  Median      3Q     Max
-0.8707 -0.8707 -0.8707  1.5190  1.5190

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.48033    0.03261  -14.73   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 2003.7  on 1606  degrees of freedom
Residual deviance: 2003.7  on 1606  degrees of freedom
AIC: 2005.7

Number of Fisher Scoring iterations: 4

> eta2=glm2$coeff
> pi2=pnorm(eta2,0,1)
> glm3=glm(y~1, family=binomial(link=cloglog))
> summary(glm3)

Call:
glm(formula = y ~ 1, family = binomial(link = cloglog))

Deviance Residuals:
    Min      1Q  Median      3Q     Max
-0.8707 -0.8707 -0.8707  1.5190  1.5190

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.97006    0.04468  -21.71   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 2003.7  on 1606  degrees of freedom
Residual deviance: 2003.7  on 1606  degrees of freedom
AIC: 2005.7

Number of Fisher Scoring iterations: 5

> eta3=glm3$coeff
> eta3s=eta3/(pi/sqrt(6)) #standardize
> pi3=1-exp(-exp(eta3))
> c(pi1,pi2,pi3)
(Intercept) (Intercept) (Intercept)
  0.3154947   0.3154947   0.3154947
> c(eta1,eta3,eta1s,eta2,eta3s) #eta1s, eta3s comparable to eta2
(Intercept) (Intercept) (Intercept) (Intercept) (Intercept)
    -0.7746     -0.9701     -0.4270     -0.4803     -0.7564

The parameter estimates using the logit and probit links are similar, but differ from the estimate using the cloglog link. This may be due to the similarities and differences between the link functions.

5.3 Contingency Tables

Binary data also arise in contingency tables. Suppose we have two binary variables U and W, with cell probabilities
  π_ij = Pr(U = i, W = j), i, j = 0, 1:

            W = 0   W = 1
  U = 0     π_00    π_01    p^(3)
  U = 1     π_10    π_11    p^(4)
            p^(1)   p^(2)

Conditional on W, the probabilities of U = 1 are
  p^(1) = Pr(U = 1 | W = 0) = π_10/(π_10 + π_00) and
  p^(2) = Pr(U = 1 | W = 1) = π_11/(π_11 + π_01).
The difference between these two probabilities on the logistic scale is
  Δ = ln[p^(2)/(1 − p^(2))] − ln[p^(1)/(1 − p^(1))] = ln(π_11/π_01) − ln(π_10/π_00) = ln[(π_11π_00)/(π_10π_01)].
If instead we condition on U, the probabilities of W = 1 are
  p^(3) = Pr(W = 1 | U = 0) = π_01/(π_01 + π_00) and
  p^(4) = Pr(W = 1 | U = 1) = π_11/(π_11 + π_10),
so that
  ln[p^(4)/(1 − p^(4))] − ln[p^(3)/(1 − p^(3))] = ln(π_11/π_10) − ln(π_01/π_00) = ln[(π_11π_00)/(π_10π_01)] = Δ.
Hence Δ, the log odds ratio, is a measure of association between U and W, and it is the same whichever variable we condition on. If U and W are independent, π_ij = Pr(U = i) Pr(W = j) implies
  Δ = ln[(π_11π_00)/(π_10π_01)] = ln 1 = 0.

Conversely, if Δ = 0, which implies π_10π_01 = π_00π_11, then U and W are independent, since
  Pr(U = 0) Pr(W = 0) = (π_00 + π_01)(π_00 + π_10)
                      = π_00(π_00 + π_01 + π_10) + π_01π_10
                      = π_00(π_00 + π_01 + π_10) + π_00π_11
                      = π_00(π_00 + π_01 + π_10 + π_11) = π_00 = Pr(U = 0, W = 0).
Furthermore
  Pr(U = 0, W = 1) = Pr(U = 0) − Pr(U = 0, W = 0) = Pr(U = 0) − Pr(U = 0) Pr(W = 0) = Pr(U = 0) Pr(W = 1),
and similarly for the remaining cells. Thus Δ = 0 ⟺ π_11π_00 = π_10π_01 ⟺ U and W are independent. Moreover
  Cov(U, W) = E(UW) − E(U)E(W) = π_11 − (π_11 + π_10)(π_11 + π_01)
            = π_11(1 − π_10 − π_01 − π_11) − π_10π_01 = π_11π_00 − π_10π_01.
Thus U and W are independent iff π_11π_00 = π_10π_01 iff Cov(U, W) = 0. This gives another reason for choosing the logistic model: we can estimate and make inferences about the association Δ between U and W. Conditional samples are common, from retrospective studies (e.g. using hospital records to classify patients with lung cancer according to smoking history) or prospective studies (e.g. following a group of smokers and non-smokers over time and observing their cancer history).
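A direct sketch of Δ̂ and its usual large-sample standard error √(∑ 1/n_ij) from a 2×2 table of counts; the counts used here anticipate the smoking example below, so Δ̂ should match the coefficient β_1 reported there.

## Sketch: log odds ratio and Wald standard error from a 2x2 table.
n00 <- 11; n01 <- 3; n10 <- 32; n11 <- 60     # cell counts n_ij
Delta <- log((n11 * n00) / (n10 * n01))       # estimate of Delta
se    <- sqrt(1/n00 + 1/n01 + 1/n10 + 1/n11)  # large-sample SE
c(Delta = Delta, se = se, z = Delta / se)     # 1.9279, 0.6871, 2.806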

Example: (Smokers) The 2×2 table is

                        No lung cancer (j = 0)   Lung cancer (j = 1)   Total
  Non-smokers (i = 0)             11                      3              14
  Smokers (i = 1)                 32                     60              92
  Total                           43                     63             106

We classify patients according to without/with lung cancer (W, j = 0, 1) and non-smoker/smoker (U, smoking history, i = 0, 1), and compare the proportions of smokers for patients with and without lung cancer. Hence n_1 = 43 and n_2 = 63 are fixed in the design. Here the outcome is smoker/non-smoker and the predictor is without/with lung cancer. It is also possible to study the cancer rate (as outcome) across the smoker and non-smoker groups (predictor).

> nonsmoke=c(11,3)
> smoke=c(32,60)
> total=smoke+nonsmoke
> y=cbind(smoke,nonsmoke)
> cancer=factor(c(1,2))
> glm1=glm(y~cancer, family=binomial) # a saturated model with 2 par.
> summary(glm1)

Call:
glm(formula = y ~ cancer, family = binomial)

Deviance Residuals:
[1]  0  0

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   1.0678     0.3495   3.055  0.00225 **
cancer2       1.9279     0.6871   2.806  0.00502 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 9.722  on 1  degrees of freedom
Residual deviance: 0.000  on 0  degrees of freedom  # saturated model
AIC: 10.9

Number of Fisher Scoring iterations: 4

> glm1$fitted #prop. of smoker without and with lung cancer
        1         2
0.7441860 0.9523810
> smoke/total
[1] 0.7441860 0.9523810
> beta=glm1$coeff
> phat=c(exp(beta[1])/(1+exp(beta[1])),
+        exp(beta[1]+beta[2])/(1+exp(beta[1]+beta[2])))
> phat
(Intercept) (Intercept)
  0.7441860   0.9523810

The odds of smoking (i = 1) to non-smoking (i = 0) among patients without (j = 0) and with (j = 1) lung cancer are
  No lung cancer: ln[π_{1|0}/(1 − π_{1|0})] = β_0,  so π_10/π_00 = e^{β_0} = exp(1.0678) = 2.909,
  Lung cancer:    ln[π_{1|1}/(1 − π_{1|1})] = β_0 + β_1,  so π_11/π_01 = e^{β_0+β_1} = exp(2.9957) = 20.00,
where π_{1|0} = Pr(i = 1 | j = 0), say. The ratio of the odds of smoking given lung cancer to the odds of smoking given no lung cancer (the odds ratio), or the logistic difference, is
  exp(Δ) = (π_11π_00)/(π_10π_01) = e^{β_0+β_1}/e^{β_0} = exp(β_1) = exp(1.9279) = 20.00/2.909 = 6.875.

To test H_0: β_1 = 0, we use
  z = β̂_1/SE(β̂_1) = 1.9279/0.6871 = 2.806, z ~ N(0, 1), or Deviance = 9.722 ~ χ²_1.
The p-values are 0.0050 and 0.0018 respectively. We reject H_0 and conclude that there is a significant difference in the odds of smoking between patients with and without lung cancer.

The log-likelihood function and the deviance are
  l = ∑_i [ y_i ln(π_i/(1 − π_i)) + n_i ln(1 − π_i) + ln C(n_i, y_i) ],
  D = ∑_i 2{ y_i ln(y_i/μ̂_i) + (n_i − y_i) ln[(n_i − y_i)/(n_i − μ̂_i)] }.

> AIC=-2*sum(smoke*log(phat/(1-phat))+total*log(1-phat)+
+   log(choose(total,smoke)))+2*2
> AIC
[1] 10.89987
> Dev=2*sum(smoke*log(smoke/(total*phat))+
+   nonsmoke*log(nonsmoke/(total-total*phat)))
> Dev #so deviance is 0
[1] 0
> glm0=glm(y~1, family=binomial) # a null model with intercept
> summary(glm0)

Call:
glm(formula = y ~ 1, family = binomial)

Deviance Residuals:
     1       2
-2.168   2.241

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   1.8826     0.2869   6.562 5.31e-11 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 9.722  on 1  degrees of freedom
Residual deviance: 9.722  on 1  degrees of freedom
AIC: 18.62

Number of Fisher Scoring iterations: 5

> beta0=glm0$coeff
> phat0=c(exp(beta0)/(1+exp(beta0)),exp(beta0)/(1+exp(beta0)))
> phat0
(Intercept) (Intercept)
  0.8679245   0.8679245
> Dev0=2*sum(smoke*log(smoke/(total*phat0))+
+   nonsmoke*log(nonsmoke/(total-total*phat0)))
> Dev0
[1] 9.722027
> Dev0i=sqrt(2*(smoke*log(smoke/(total*phat0))+
+   nonsmoke*log(nonsmoke/(total-total*phat0))))
> Dev0i
(Intercept) (Intercept)
   2.167884    2.241072

Both tests refer to asymptotic results, not exact tests. The Wald test can be used to calculate a 100(1 − α)% confidence interval for β_j:
  β̂_j ± z_{1−α/2} SE(β̂_j).

If we change the outcome to cancer and perform the analysis again, we get the following result:

> cancer=c(3,60)
> nocancer=c(11,32)
> total=cancer+nocancer
> y=cbind(cancer,nocancer)
> smoke=factor(c(1,2))
> glm2=glm(y~smoke,family=binomial)
> summary(glm2) #beta1 the same

Call:
glm(formula = y ~ smoke, family = binomial)

Deviance Residuals:
[1]  0  0

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -1.2993     0.6513  -1.995  0.04605 *
smoke2        1.9279     0.6871   2.806  0.00502 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 9.722e+00  on 1  degrees of freedom
Residual deviance: 0  on 0  degrees of freedom  # numerically ~e-15
AIC: 11.64

Number of Fisher Scoring iterations: 3

> glm2$fitted #prop. of cancer for non-smokers and smokers
        1         2
0.2142857 0.6521739
> cancer/total
[1] 0.2142857 0.6521739

Note that β̂_1 is the same as in the previous model because
  exp(β_1) = exp(1.9279) = (π_11π_00)/(π_10π_01).
Both β_1's measure the association between cancer and smoking, and hence it does not matter which factor is treated as the outcome and which as the predictor.

Example: (Laundry Detergent) 1008 customers are cross-classified in an experiment comparing two detergents, a new product X and a standard product M, on three factors (Ries and Smith 1963). The table of y_{ijk}/n_{ijk}, the number of customers out of n_{ijk} who prefer the new product X, for the factorial design is:

                    Non-user of M (β_1)       Previous user of M (β_2)
                    Temperature               Temperature
  Water softness    Low (γ_1)   High (γ_2)    Low (γ_1)   High (γ_2)
  Hard (α_1)        68/110      42/72         37/89       24/67
  Medium (α_2)      66/116      33/56         47/102      23/70
  Soft (α_3)        63/116      29/56         57/106      19/48

The full ANOVA-type model is
  ln[p_{ijk}/(1 − p_{ijk})] = μ + α_i + β_j + γ_k + (αβ)_{ij} + (αγ)_{ik} + (βγ)_{jk} + (αβγ)_{ijk},
  i = 1, 2, 3; j, k = 1, 2,
where p_{ijk} is the probability of preferring X at softness i, user status j and temperature k. To test H_0: (αβγ)_{ijk} = 0, the change in deviance is referred to χ²_2; the p-value is large and the data are consistent with H_0. To test for the two-factor interactions, H_0: (αβ)_{ij} = (αγ)_{ik} = (βγ)_{jk} = 0, the change in deviance is referred to χ²_5; again the p-value is large and the data are consistent with H_0.

> X=c(68,42,37,24,66,33,47,23,63,29,57,19)
> T=c(110,72,89,67,116,56,102,70,116,56,106,48)
> M=T-X
> y=cbind(X,M)
> soft=factor(c(1,1,1,1,2,2,2,2,3,3,3,3))
> user=factor(c(1,1,2,2,1,1,2,2,1,1,2,2))
> temp=factor(c(1,2,1,2,1,2,1,2,1,2,1,2))
> dat=data.frame(X,T,soft,user,temp)
> glm2=glm(y~soft*user*temp,family=binomial) #a saturated model
> summary(glm2)

Call:
glm(formula = y ~ soft * user * temp, family = binomial)

Deviance Residuals:
[1] 0 0 0 0 0 0 0 0 0 0 0 0

Coefficients:
                  Estimate Std. Error z value Pr(>|z|)
(Intercept)         0.4818     0.1963   2.455   0.0141 *
soft2              -0.2042     0.2714  -0.752   0.4519
soft3              -0.3090     0.2707  -1.142   0.2536
user2              -0.8222     0.2912  -2.824   0.0047 **
temp2              -0.1454     0.3093  -0.470   0.6384
soft2:user2         0.3874     0.3992   0.970   0.3319
soft3:user2         0.8006     0.3968   2.018   0.0436 *
soft2:temp2         0.2287     0.4523   0.506   0.6131
soft3:temp2         0.0440     0.4494   0.098   0.9220
user2:temp2        -0.0974     0.4548  -0.214   0.8304
soft2:user2:temp2  -0.5434     0.6481  -0.839   0.4017
soft3:user2:temp2  -0.3753     0.6619  -0.567   0.5707
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 32.826  on 11  degrees of freedom  # 12 - 1
Residual deviance:  0.000  on  0  degrees of freedom  # numerically ~e-14
AIC: 81.7

Number of Fisher Scoring iterations: 3

> anova(glm2,test="Chisq")
Analysis of Deviance Table

Model: binomial, link: logit
Response: y
Terms added sequentially (first to last)

                Df Deviance Resid. Df Resid. Dev P(>|Chi|)
NULL                               11     32.826
soft             2   0.3957         9     32.430    0.8205
user             1  20.4640         8     11.966     6e-06
temp             1   3.7360         7      8.230    0.0532
soft:user        2                  5
soft:temp        2                  3
user:temp        1                  2
soft:user:temp   2                  0      0.000

To evaluate the product loyalty (user status) at different temperatures, the reduced model is
  ln[p_{ijk}/(1 − p_{ijk})] = μ + β_j + γ_k.
Given low temperature (γ_1 = 0), for non-users of M,
  p_{i11} = e^{μ̂}/(1 + e^{μ̂}) = e^{0.3822}/(1 + e^{0.3822}) = 0.5944,
and for previous users of M,
  p_{i21} = e^{μ̂+β̂_2}/(1 + e^{μ̂+β̂_2}) = e^{0.3822−0.5684}/(1 + e^{0.3822−0.5684}) = 0.4536 < 0.5944 = p_{i11}.
The model highlights significant product loyalty (the p-value of the user effect β_2 is 0.0000). The temperature effect, which is just marginally insignificant, is probably worth a mention in a final report. Since γ̂_2 = −0.2579 < 0 for hot water, the preference for X is stronger amongst those using cold water.

> glm3=glm(y~user+temp,family=binomial)
> summary(glm3)

Call:
glm(formula = y ~ user + temp, family = binomial)

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-1.1191  -0.5964  -0.0630   0.6013   1.7361

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   0.3822     0.1002   3.815 0.000136 ***
user2        -0.5684     0.1277  -4.451 8.53e-06 ***
temp2        -0.2579     0.1324  -1.948   0.0514 .
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 32.826  on 11  degrees of freedom
Residual deviance:  8.447  on  9  degrees of freedom  # 12 - 3
AIC: 72.2

Number of Fisher Scoring iterations: 3

> anova(glm3,test="Chisq")
Analysis of Deviance Table

Model: binomial, link: logit
Response: y
Terms added sequentially (first to last)

      Df Deviance Resid. Df Resid. Dev P(>|Chi|)
NULL                     11     32.826
user   1   20.579        10     12.247   5.7e-06
temp   1    3.800         9      8.447    0.0512

> beta3=glm3$coeff
> beta3
(Intercept)       user2       temp2
     0.3822     -0.5684     -0.2579
> p1=exp(beta3[1])/(1+exp(beta3[1]))
> p2=exp(beta3[1]+beta3[2])/(1+exp(beta3[1]+beta3[2]))
> c(p1,p2)
(Intercept) (Intercept)
     0.5944      0.4536
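As a further check, the reduced and saturated models can also be compared directly in a single likelihood ratio test. A one-line sketch, using glm2 and glm3 as fitted above; this tests all the dropped terms (the soft main effect and all interactions) jointly on 9 degrees of freedom:

## Sketch: reduced vs saturated model, one LR test on 9 df.
anova(glm3, glm2, test = "Chisq")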

5.4 Maximum Likelihood Estimation

Taking logs of the pmf, and omitting the constant involving the combinatorial terms, the log-likelihood function is
  l(β) = ln L(β) = ∑_{i=1}^n [ y_i ln(π_i) + (n_i − y_i) ln(1 − π_i) ].
To develop a Fisher scoring procedure for the ML estimates, we take the score function l′(β), with elements
  ∂l/∂β_r = ∑_{i=1}^n [ y_i/π_i − (n_i − y_i)/(1 − π_i) ] (∂π_i/∂η_i)(∂η_i/∂β_r)
          = ∑_{i=1}^n [ (y_i − n_iπ_i)/(π_i(1 − π_i)) ] π_i(1 − π_i) x_{ir}
          = ∑_{i=1}^n (y_i − n_iπ_i) x_{ir} = [X′(Y − μ)]_r,
since ∂π_i/∂η_i = π_i(1 − π_i) for the linear logistic model, and the information matrix E[−l″(β)], with elements
  E(−∂²l/∂β_r∂β_s) = ∑_{i=1}^n n_iπ_i(1 − π_i) x_{ir}x_{is} = [X′WX]_{rs},
where W is a diagonal matrix of weights
  w_ii = n_iπ_i(1 − π_i) = μ_i(n_i − μ_i)/n_i = 1/Var(z_i),
and the working dependent variable z_i is
  z_i = η_i + (y_i − μ_i) ∂η_i/∂μ_i = η_i + (y_i − μ_i) n_i/[μ_i(n_i − μ_i)],
since
  ∂η_i/∂μ_i = ∂/∂μ_i ln[μ_i/(n_i − μ_i)] = 1/μ_i + 1/(n_i − μ_i) = n_i/[μ_i(n_i − μ_i)]
and
  Var(z_i) = (∂η_i/∂μ_i)² Var(y_i) = {n_i/[μ_i(n_i − μ_i)]}² μ_i(n_i − μ_i)/n_i = n_i/[μ_i(n_i − μ_i)] = [μ_i(n_i − μ_i)/n_i]^{−1}.
Hence Var(z) = W^{−1}.

The procedure is equivalent to iteratively reweighted least squares (IRLS). Given a current estimate β̂, we calculate the linear predictor η̂ = Xβ̂, the fitted values μ̂_i = n_i logit^{−1}(η̂_i), the working dependent variable z and the weights W. Then we regress z on the covariates to obtain the updated weighted least squares estimate
  β̂ = (X′WX)^{−1} X′Wz.
The large-sample variance of the resulting estimate is given by
  Var(β̂) = (X′WX)^{−1} X′W Var(z) WX (X′WX)^{−1} = (X′WX)^{−1},
where Var(z) = W^{−1} and W is the weight matrix evaluated at the last iteration.

Example: (Smokers)

> smoke=c(32,60)
> nonsmoke=c(11,3)
> total=smoke+nonsmoke
> x=c(0,1)
> y=smoke
> n=2
> one=c(rep(1,n))
> X=cbind(one,x)
> beta=matrix(1,2,1)
> for (i in 1:10){
+ eta=X%*%beta
+ pi=exp(eta)/(1+exp(eta))
+ mu=total*pi
+ va=total*pi*(1-pi)      # binomial variances = IRLS weights
+ W=matrix(0,n,n)
+ for (j in 1:n) {W[j,j]=va[j]}
+ z=eta+(y-mu)/va          # working dependent variable
+ XWX=t(X)%*%W%*%X
+ XWXI=solve(XWX)
+ XWZ=t(X)%*%W%*%z
+ beta=XWXI%*%XWZ
+ VA=diag(XWXI)
+ se=sqrt(VA)
+ names(se)=NULL
+ result=c(i,beta[1],se[1],beta[2],se[2])
+ print(result)
+ }

The printed rows (iteration number, β̂_0, SE, β̂_1, SE) converge within a few iterations; from the fourth iteration onwards they equal

[1] 10.0000 1.0678 0.3495 1.9279 0.6871

> XWXI # covariance matrix
           one          x
one  0.1221591 -0.1221591
x   -0.1221591  0.4721591
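As a cross-check (glm itself uses IRLS), a minimal sketch: glm() run on the same grouped data should reproduce the estimates, standard errors and covariance matrix from the loop above.

## Sketch: compare the hand-coded IRLS with R's glm() on the same data.
fit <- glm(cbind(smoke, nonsmoke) ~ x, family = binomial)
summary(fit)$coefficients  # estimates and SEs match the loop above
vcov(fit)                  # matches XWXI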

5.5 Goodness of Fit Statistics

1. Deviance: a measure of discrepancy between observed and fitted values, defined as
  D = 2 ∑_{i=1}^n [ y_i ln(y_i/μ̂_i) + (n_i − y_i) ln((n_i − y_i)/(n_i − μ̂_i)) ] ~ χ²_{n−p}
as n_i → ∞, where n is the number of groups and p is the number of parameters, including the constant.

2. The likelihood ratio test: to compare two nested models ω_1 ⊂ ω_2, based on the difference between their deviances D(ω_1) and D(ω_2), including variables in X_1 and X, with p_1 and p elements respectively, where
  X = (X_1 X_2) and β = (β_1′, β_2′)′,
the null hypothesis is H_0: β_2 = 0 and the test statistic is
  D(ω_1) − D(ω_2) ~ χ²_{p_2}.
Likelihood ratio tests in GLMs are based on scaled deviances, obtained by dividing the deviance by a scale factor, which is one for binomial data.

3. Pearson's chi-squared: the sum of squared differences between the observed and fitted values y_i and μ̂_i, each divided by the variance of y_i, namely μ̂_i(n_i − μ̂_i)/n_i:
  X² = ∑_i w_ii (z_i − η̂_i)² = ∑_i n_i(y_i − μ̂_i)² / [μ̂_i(n_i − μ̂_i)]
     = ∑_i [ (y_i − μ̂_i)²/μ̂_i + ((n_i − y_i) − (n_i − μ̂_i))²/(n_i − μ̂_i) ] ~ χ²_{n−p}.
This statistic is the sum, over both successes and failures, of (observed − expected)²/expected. It is asymptotically equivalent to the deviance, or likelihood-ratio chi-squared, statistic.
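A sketch of both statistics in R, refitting the reduced detergent model from the earlier example for self-containedness; the deviance and Pearson X² are the sums of the squared deviance and Pearson residuals.

## Sketch: deviance and Pearson X^2 for a fitted binomial glm.
X <- c(68,42,37,24,66,33,47,23,63,29,57,19)
T <- c(110,72,89,67,116,56,102,70,116,56,106,48)
user <- factor(c(1,1,2,2,1,1,2,2,1,1,2,2))
temp <- factor(c(1,2,1,2,1,2,1,2,1,2,1,2))
fit <- glm(cbind(X, T - X) ~ user + temp, family = binomial)
D   <- sum(residuals(fit, type = "deviance")^2)  # equals deviance(fit)
X2  <- sum(residuals(fit, type = "pearson")^2)
c(deviance = D, pearson = X2, df = df.residual(fit),
  p.value = pchisq(D, df.residual(fit), lower.tail = FALSE))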

5.6 Exact tests for logistic models

If the n_i are not large enough to apply the asymptotic χ² tests, we construct small-sample exact tests for the hypotheses of interest. These tests should be considered whenever an observed binomial count in any cell is less than 5. (The CLT for the binomial distribution requires nπ ≥ 5 and n(1 − π) ≥ 5, i.e. n is large or π is close to 1/2.)

Consider the model
  θ_i = ln[π_i/(1 − π_i)] = ∑_{j=1}^p x_{ij} β_j = η_i,
where the x_{ij} are known constants. The likelihood of an observed binary sample y_1, ..., y_n is
  L = ∏_{i=1}^n π_i^{y_i}(1 − π_i)^{1−y_i} = ∏_{i=1}^n e^{η_i y_i}/(1 + e^{η_i})
    = exp[ ∑_{j=1}^p β_j ( ∑_{i=1}^n x_{ij} y_i ) ] / ∏_{i=1}^n (1 + e^{η_i}).
Let t_j = ∑_{i=1}^n x_{ij} y_i; then {t_1, ..., t_p} is sufficient for β_1, ..., β_p. The joint distribution of T_1, ..., T_p is obtained by summing over all binary sequences which generate a particular t_1, t_2, ..., t_p:
  Pr(T_1 = t_1, ..., T_p = t_p) = c(t_1, ..., t_p) exp( ∑_{j=1}^p β_j t_j ) / ∏_{i=1}^n (1 + e^{η_i}),   (1)
where c(t_1, ..., t_p) is a combinatorial coefficient, that is, the number of binary sequences (y_1, ..., y_n) that give the same set {t_1, ..., t_p}. Since T_1, ..., T_{p−1} are sufficient for β_1, ..., β_{p−1}, the conditional distribution of T_p given T_1, ..., T_{p−1} depends only on β_p:
  Pr(T_p = t_p | T_1 = t_1, ..., T_{p−1} = t_{p−1}) = c(t_1, ..., t_p) e^{β_p t_p} / ∑_u c(t_1, ..., t_{p−1}, u) e^{β_p u}.   (2)

5.6.1 Fisher's Exact Test for binomial data

Assume p = 2 and x_{ij} = 0, 1. The data are given by a 2×2 table:

            Group 1      Group 2      Total
  Success     Y_1          Y_2          S
  Failure   n_1 − Y_1    n_2 − Y_2    n − S
            n_1          n_2           n

To test H_0: β_1 = 0, we show that the conditional distribution reduces to a ratio of combinatorial terms. The probability of success is
  π_1 = e^{β_0}/(1 + e^{β_0})   (η_1 = β_0) in Group 1, and
  π_2 = e^{β_0+β_1}/(1 + e^{β_0+β_1})   (η_2 = β_0 + β_1) in Group 2.
Given n_1 and n_2, the likelihood, using (1), is
  L(y_1, y_2 | n_1, n_2) = C(n_1, y_1) C(n_2, y_2) e^{β_0(y_1+y_2) + β_1 y_2} / [ (1 + e^{β_0})^{n_1} (1 + e^{β_0+β_1})^{n_2} ],
where t_0 = S = Y_1 + Y_2 and t_1 = Y_2 are the sufficient statistics for β_0 and β_1,
  c(t_0, t_1) = C(n_1, y_1) C(n_2, y_2)
(choosing y_1 ones from the n_1 trials and y_2 ones from the n_2 trials, to give y_2 ones in group 2 out of y_1 + y_2 ones in total), and ∑_{j=0}^1 β_j t_j = β_0(Y_1 + Y_2) + β_1 Y_2. Using (2), with y_1 + y_2 = s and max(0, s − n_1) ≤ y_2 ≤ min(n_2, s), the conditional probability is
  Pr(Y_2 = y_2 | S = s) = C(n_1, s − y_2) C(n_2, y_2) e^{β_0 s + β_1 y_2} / ∑_{u=max(0,s−n_1)}^{min(n_2,s)} C(n_1, s − u) C(n_2, u) e^{β_0 s + β_1 u}
                       = C(n_1, s − y_2) C(n_2, y_2) e^{β_1 y_2} / ∑_u C(n_1, s − u) C(n_2, u) e^{β_1 u},
which, when β_1 = 0, equals
  C(n_1, s − y_2) C(n_2, y_2) / C(n, s),
the hypergeometric distribution, with
  E(Y_2 | S = s) = s n_2/n = s π̂_2   (3)
  Var(Y_2 | S = s) = s (n_1n_2/n²)(n − s)/(n − 1) = s π̂_2(1 − π̂_2)(n − s)/(n − 1),   (4)
where (n − s)/(n − 1) is the finite population correction factor for sampling s units from n without replacement. Hence an exact test of H_0: β_1 = 0 (i.e. π_1 = π_2) can be obtained by calculating the p-value based on Y_2. For moderate n_1 and n_2 we may instead compare
  z = [ Y_2 − s n_2/n − 1/2 ] / √( s (n_1n_2/n²)(n − s)/(n − 1) )
with the normal distribution.

Example: (Lung Cancer) Test H_0: β_1 = 0 vs H_1: β_1 > 0, where the table is

                No lung cancer   Lung cancer   Total
                  (Group 1)       (Group 2)
  Smokers             32              60          92
  Non-smokers         11               3          14
  Total               43              63         106

We have n_1 = 43, n_2 = 63, n = 106, Y_1 = 32, Y_2 = 60 and S = 92; given the margins, Y_2 determines the whole table. The exact p-value is
  Pr(Y_2 ≥ 60 | S = 92) = ∑_{u=60}^{min(n_2,s)} C(43, 92 − u) C(63, u) / C(106, 92)
                        = ∑_{u=60}^{63} C(43, 92 − u) C(63, u) / C(106, 92) = 0.0025.
With the normal approximation to the hypergeometric distribution,
  z_0 = [60 − 63(92)/106 − 1/2] / √( 92 × (43 × 63/106²) × (106 − 92)/(106 − 1) ) = 2.80.
The approximate p-value is Pr(Z ≥ 2.80) = 0.0026.
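A sketch of the same calculation in R: the exact tail probability comes directly from the hypergeometric pmf, and fisher.test() on the 2×2 table (one-sided, testing an odds ratio above one) should agree.

## Sketch: exact tail probability via dhyper(), and via fisher.test().
sum(dhyper(60:63, m = 63, n = 43, k = 92))  # Pr(Y2 >= 60 | S = 92)
tab <- matrix(c(60, 3, 32, 11), nrow = 2)   # rows: smokers/non-smokers
fisher.test(tab, alternative = "greater")$p.value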

5.6.2 Exact test for binary data

Suppose we have n independent observations (x_1, y_1), ..., (x_n, y_n) and a linear logistic model η_i = β_0 + β_1 x_i. The sufficient statistics for β_0 and β_1 are
  T_0 = ∑_{i=1}^n y_i and T_1 = ∑_{i=1}^n x_i y_i.
Given T_0 = t_0 (exactly t_0 of the y_i equal 1), the statistic T_1 is the sample total of a sample of size t_0 drawn from {x_1, ..., x_n} without replacement. If β_1 = 0, then the probability of success is constant over all n trials, regardless of x_i, and so all distinct samples of size t_0 drawn from {x_1, ..., x_n} are equally likely. Let
  x̄ = (1/n) ∑_{i=1}^n x_i and v = (1/n) ∑_{i=1}^n (x_i − x̄)².
If β_1 = 0, then
  E(T_1 | T_0 = t_0) = t_0 x̄ and Var(T_1 | T_0 = t_0) = t_0 v (n − t_0)/(n − 1).
We can use the normal approximation to the conditional distribution of T_1 given T_0 = t_0 to calculate the p-value for testing H_0: β_1 = 0, or use the exact permutation test.

In the case where {x_1, ..., x_n} contains n_1 zeros and n_2 ones as indicators of group 2, we have t_0 = s, t_1 = y_2 and x̄ = (1/n)∑ x_i = n_2/n. Then
  E(T_1 | T_0 = t_0) = t_0 x̄ = s n_2/n, and
  Var(T_1 | T_0 = t_0) = t_0 v (n − t_0)/(n − 1) = s (n_1n_2/n²)(n − s)/(n − 1),
which are the same as (3) and (4), since
  v = (1/n) ∑ (x_i − x̄)² = (1/n)[ n_1(0 − n_2/n)² + n_2(1 − n_2/n)² ] = (1/n³)(n_1n_2² + n_2n_1²) = (n_1n_2/n³)(n_1 + n_2) = n_1n_2/n².

Example: Suppose we have n = 7 trials with x_i = i, and we are given t_0 = 3, i.e. we observe 3 successes. The sample moments of x = {1, 2, ..., 7} are
  x̄ = 4 and v = (1/7) ∑_{i=1}^7 (i − 4)² = 4.
If β_1 = 0, there are C(7, 3) = 35 equally likely subsets of size 3 which can be drawn. If we observe that the sum of the x_i for the 3 successes is t_1 = 16, then the exact p-value is
  Pr(T_1 ≥ 16 | T_0 = 3, β_1 = 0) = 4/35 = 0.1143,
as there are 4 subsets giving T_1 ≥ 16, namely (5,6,7), (4,6,7), (4,5,7), (3,6,7), with sums 18, 17, 16, 16 respectively. Alternatively, if β_1 = 0,
  E(T_1 | T_0 = 3) = t_0 x̄ = 3 × 4 = 12 and Var(T_1 | T_0 = 3) = t_0 v (n − t_0)/(n − 1) = 3(4)(4)/6 = 8.
The approximate p-value is
  Pr(T_1 ≥ 16 | T_0 = 3, β_1 = 0) ≈ Pr( Z ≥ (16 − 12 − 1/2)/√8 ) = Pr(Z ≥ 1.237) = 0.108.
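The exact permutation distribution is small enough here to enumerate; a one-line sketch reproducing the worked example:

## Sketch: exact permutation distribution of T1 given T0 = 3, x = 1:7.
x  <- 1:7
t1 <- combn(x, 3, sum)  # all choose(7,3) = 35 subset sums
mean(t1 >= 16)          # exact p-value = 4/35 = 0.1143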

5.7 Regression Diagnostics

1. Pearson residuals:
  r_pi = (y_i − μ̂_i)/√( μ̂_i(n_i − μ̂_i)/n_i ) ≈ N(0, 1).
Observations with |r_pi| > 2 indicate lack of fit.

2. Deviance residuals:
  r_di = sign(y_i − μ̂_i) √( 2[ y_i ln(y_i/μ̂_i) + (n_i − y_i) ln((n_i − y_i)/(n_i − μ̂_i)) ] ).
Observations with |r_di| > 2 indicate lack of fit. Note that Deviance = ∑_i r²_di.

3. Leverage: Pregibon (1981) developed a weighted hat matrix
  H = W^{1/2} X (X′WX)^{−1} X′ W^{1/2},
where W is the diagonal matrix with entries w_ii = μ̂_i(n_i − μ̂_i)/n_i, evaluated at the MLE. Then, to a first-order approximation, the variance of the residual is
  Var(y_i − μ̂_i) ≈ (1 − h_ii) Var(y_i),
where h_ii is the leverage, the i-th diagonal element of H (the analogue of e = (I − H)Y in ordinary least squares). The studentized residual is
  r_si = r_pi/√(1 − h_ii) = (y_i − μ̂_i)/√( (1 − h_ii) μ̂_i(n_i − μ̂_i)/n_i ).

4. Cook's distance: comparing β̂ with β̂_(i), the estimate with observation i deleted, and to avoid iteration, Pregibon (1981) proposed
  D_i = h_ii r²_si/(1 − h_ii) = h_ii (y_i − μ̂_i)² / [ (1 − h_ii)² μ̂_i(n_i − μ̂_i)/n_i ]
as a one-step approximation to Cook's distance, based on doing one iteration of the IRLS algorithm towards β̂_(i), starting from the complete-data estimate β̂. This is compared with
  D_i = h_ii r²_i / [ (1 − h_ii)² p σ̂² ]
for normal data.
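A sketch of these diagnostics in R, using the reduced detergent model as the illustration. Note that R's built-in cooks.distance() divides by the number of parameters p, so the notes' one-step version is computed manually here.

## Sketch: Pearson and deviance residuals, leverages, and the
## one-step Cook's distance of the notes, for a binomial glm.
X <- c(68,42,37,24,66,33,47,23,63,29,57,19)
T <- c(110,72,89,67,116,56,102,70,116,56,106,48)
user <- factor(c(1,1,2,2,1,1,2,2,1,1,2,2))
temp <- factor(c(1,2,1,2,1,2,1,2,1,2,1,2))
fit <- glm(cbind(X, T - X) ~ user + temp, family = binomial)
rp <- residuals(fit, type = "pearson")   # Pearson residuals
rd <- residuals(fit, type = "deviance")  # deviance residuals
h  <- hatvalues(fit)                     # leverages (diag of Pregibon's H)
rs <- rp / sqrt(1 - h)                   # studentized residuals
D  <- h * rs^2 / (1 - h)                 # one-step Cook's distance (notes' form)
round(cbind(rp, rd, h, rs, D), 3)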


More information

Poisson Regression. The Training Data

Poisson Regression. The Training Data The Training Data Poisson Regression Office workers at a large insurance company are randomly assigned to one of 3 computer use training programmes, and their number of calls to IT support during the following

More information

Two Hours. Mathematical formula books and statistical tables are to be provided THE UNIVERSITY OF MANCHESTER. 26 May :00 16:00

Two Hours. Mathematical formula books and statistical tables are to be provided THE UNIVERSITY OF MANCHESTER. 26 May :00 16:00 Two Hours MATH38052 Mathematical formula books and statistical tables are to be provided THE UNIVERSITY OF MANCHESTER GENERALISED LINEAR MODELS 26 May 2016 14:00 16:00 Answer ALL TWO questions in Section

More information

Generalized Linear Models Introduction

Generalized Linear Models Introduction Generalized Linear Models Introduction Statistics 135 Autumn 2005 Copyright c 2005 by Mark E. Irwin Generalized Linear Models For many problems, standard linear regression approaches don t work. Sometimes,

More information

Parametric Modelling of Over-dispersed Count Data. Part III / MMath (Applied Statistics) 1

Parametric Modelling of Over-dispersed Count Data. Part III / MMath (Applied Statistics) 1 Parametric Modelling of Over-dispersed Count Data Part III / MMath (Applied Statistics) 1 Introduction Poisson regression is the de facto approach for handling count data What happens then when Poisson

More information

Chapter 1 Statistical Inference

Chapter 1 Statistical Inference Chapter 1 Statistical Inference causal inference To infer causality, you need a randomized experiment (or a huge observational study and lots of outside information). inference to populations Generalizations

More information

LOGISTIC REGRESSION Joseph M. Hilbe

LOGISTIC REGRESSION Joseph M. Hilbe LOGISTIC REGRESSION Joseph M. Hilbe Arizona State University Logistic regression is the most common method used to model binary response data. When the response is binary, it typically takes the form of

More information

Lecture 14: Introduction to Poisson Regression

Lecture 14: Introduction to Poisson Regression Lecture 14: Introduction to Poisson Regression Ani Manichaikul amanicha@jhsph.edu 8 May 2007 1 / 52 Overview Modelling counts Contingency tables Poisson regression models 2 / 52 Modelling counts I Why

More information

Modelling counts. Lecture 14: Introduction to Poisson Regression. Overview

Modelling counts. Lecture 14: Introduction to Poisson Regression. Overview Modelling counts I Lecture 14: Introduction to Poisson Regression Ani Manichaikul amanicha@jhsph.edu Why count data? Number of traffic accidents per day Mortality counts in a given neighborhood, per week

More information

Various Issues in Fitting Contingency Tables

Various Issues in Fitting Contingency Tables Various Issues in Fitting Contingency Tables Statistics 149 Spring 2006 Copyright 2006 by Mark E. Irwin Complete Tables with Zero Entries In contingency tables, it is possible to have zero entries in a

More information

Logistic Regression 21/05

Logistic Regression 21/05 Logistic Regression 21/05 Recall that we are trying to solve a classification problem in which features x i can be continuous or discrete (coded as 0/1) and the response y is discrete (0/1). Logistic regression

More information

Outline of GLMs. Definitions

Outline of GLMs. Definitions Outline of GLMs Definitions This is a short outline of GLM details, adapted from the book Nonparametric Regression and Generalized Linear Models, by Green and Silverman. The responses Y i have density

More information

Matched Pair Data. Stat 557 Heike Hofmann

Matched Pair Data. Stat 557 Heike Hofmann Matched Pair Data Stat 557 Heike Hofmann Outline Marginal Homogeneity - review Binary Response with covariates Ordinal response Symmetric Models Subject-specific vs Marginal Model conditional logistic

More information

36-463/663: Multilevel & Hierarchical Models

36-463/663: Multilevel & Hierarchical Models 36-463/663: Multilevel & Hierarchical Models (P)review: in-class midterm Brian Junker 132E Baker Hall brian@stat.cmu.edu 1 In-class midterm Closed book, closed notes, closed electronics (otherwise I have

More information

simple if it completely specifies the density of x

simple if it completely specifies the density of x 3. Hypothesis Testing Pure significance tests Data x = (x 1,..., x n ) from f(x, θ) Hypothesis H 0 : restricts f(x, θ) Are the data consistent with H 0? H 0 is called the null hypothesis simple if it completely

More information

Chapter 12.8 Logistic Regression

Chapter 12.8 Logistic Regression Chapter 12.8 Logistic Regression Logistic regression is an example of a large class of regression models called generalized linear models (GLM) An Observational Case Study: The Donner Party (Gayson, D.K.,

More information

Multinomial Logistic Regression Models

Multinomial Logistic Regression Models Stat 544, Lecture 19 1 Multinomial Logistic Regression Models Polytomous responses. Logistic regression can be extended to handle responses that are polytomous, i.e. taking r>2 categories. (Note: The word

More information

Loglinear models. STAT 526 Professor Olga Vitek

Loglinear models. STAT 526 Professor Olga Vitek Loglinear models STAT 526 Professor Olga Vitek April 19, 2011 8 Can Use Poisson Likelihood To Model Both Poisson and Multinomial Counts 8-1 Recall: Poisson Distribution Probability distribution: Y - number

More information

COMPLEMENTARY LOG-LOG MODEL

COMPLEMENTARY LOG-LOG MODEL COMPLEMENTARY LOG-LOG MODEL Under the assumption of binary response, there are two alternatives to logit model: probit model and complementary-log-log model. They all follow the same form π ( x) =Φ ( α

More information

Chapter 4: Generalized Linear Models-II

Chapter 4: Generalized Linear Models-II : Generalized Linear Models-II Dipankar Bandyopadhyay Department of Biostatistics, Virginia Commonwealth University BIOS 625: Categorical Data & GLM [Acknowledgements to Tim Hanson and Haitao Chu] D. Bandyopadhyay

More information

Exercise 5.4 Solution

Exercise 5.4 Solution Exercise 5.4 Solution Niels Richard Hansen University of Copenhagen May 7, 2010 1 5.4(a) > leukemia

More information