Stat 579: Generalized Linear Models and Extensions


Stat 579: Generalized Linear Models and Extensions
Yan Lu
Jan 2018, week 3
1 / 67

Hypothesis tests
- Likelihood ratio tests
- Wald tests
- Score tests
2 / 67

Generalized likelihood ratio tests
Let $Y = (Y_1, Y_2, \ldots, Y_n)$, where $Y_1, Y_2, \ldots, Y_n$ have joint pdf $f(y; \theta)$ for $\theta \in \Omega$, and consider the hypothesis
$$H_0: \theta \in \Omega_0 \quad \text{vs.} \quad H_a: \theta \in \Omega - \Omega_0$$
The generalized likelihood ratio (GLR) is defined by
$$\lambda(y) = \frac{\max_{\theta \in \Omega_0} f(y; \theta)}{\max_{\theta \in \Omega} f(y; \theta)} = \frac{f(y; \hat\theta_0)}{f(y; \hat\theta)}$$
- $\hat\theta$ denotes the usual MLE of $\theta$
- $\hat\theta_0$ denotes the MLE under the restriction that $H_0$ is true.
If $y \sim f(y; \theta_1, \ldots, \theta_k)$, then under $H_0: (\theta_1, \theta_2, \ldots, \theta_r) = (\theta_{10}, \theta_{20}, \ldots, \theta_{r0})$, $r < k$, approximately for large $n$,
$$-2\log\lambda(y) \sim \chi^2(r)$$
An appropriate size-$\alpha$ test is to reject $H_0$ if $-2\log\lambda(y) \ge \chi^2_{1-\alpha}(r)$.
3 / 67

Example
For a binary outcome, if the hypothesis is $H_0: p = p_0$ vs. $H_a: p \ne p_0$, then
$$l(\hat\mu_0) = \text{log-likelihood} = \left(\sum_{i=1}^n y_i\right)\log(p_0) + \left(n - \sum_{i=1}^n y_i\right)\log(1 - p_0)$$
$$l(\hat\mu) = \text{log-likelihood} = \left(\sum_{i=1}^n y_i\right)\log(\hat p) + \left(n - \sum_{i=1}^n y_i\right)\log(1 - \hat p)$$
$$\lambda(y) = \exp(l(\hat\mu_0))/\exp(l(\hat\mu)), \qquad -2\log\lambda(y) \sim \chi^2_1$$
4 / 67
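As a quick illustration, here is a minimal R sketch of this binomial LRT computed by hand; the data vector y and the value p0 below are hypothetical, not from the course data.

y  <- c(1, 0, 0, 1, 1, 0, 1, 1, 0, 1)    # assumed 0/1 outcomes
p0 <- 0.5
n  <- length(y)
phat <- mean(y)                           # unrestricted MLE of p
loglik <- function(p) sum(y) * log(p) + (n - sum(y)) * log(1 - p)
lrt <- -2 * (loglik(p0) - loglik(phat))   # -2 log lambda(y)
pchisq(lrt, df = 1, lower.tail = FALSE)   # approximate p-value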

Measuring goodness of fit: the saturated model
$$l(y; \phi, \mu) = \sum_{j=1}^n \left\{ \frac{y_j\,\theta_j(\mu) - b(\theta_j(\mu))}{a(\phi)} + c(y_j, \phi) \right\}$$
Fit the model by ML, and let $\hat\mu$ be the MLE of $\mu$ under
$$g(\mu_i) = x_i'\beta = \beta_0 + x_{i1}\beta_1 + \cdots + x_{i(p-1)}\beta_{p-1}$$
The maximized value of the log-likelihood is
$$l(y; \phi, \hat\mu) = \sum_{j=1}^n \left\{ \frac{y_j\,\theta_j(\hat\mu) - b(\theta_j(\hat\mu))}{a(\phi)} + c(y_j, \phi) \right\}$$
Now fit the alternative model
$$g(\mu_i) = x_i'\beta + \delta_i'\tau = \beta_0 + x_{i1}\beta_1 + \cdots + x_{i(p-1)}\beta_{p-1} + \tau_0 + \delta_{i1}\tau_1 + \cdots + \delta_{i(r-1)}\tau_{r-1}$$
where for now let $r = n - p$. We have $n$ observations and $n$ regression parameters.
5 / 67

$$g(\mu_i) = \beta_0 + x_{i1}\beta_1 + \cdots + x_{i(p-1)}\beta_{p-1} + \tau_0 + \delta_{i1}\tau_1 + \cdots + \delta_{i(r-1)}\tau_{r-1}$$
where for now $r = n - p$; we have $n$ observations and $n$ regression parameters. If there are no linear dependencies among the predictors then this is the so-called saturated model, which places no constraints on $g(\mu_i)$ and consequently no constraints on $\mu_i$.
Let $\tilde\mu$ be the MLE of $\mu$ for the saturated model; we have $\tilde\mu = y$. The score equation under the saturated model is
$$X'W(y - \mu) = 0$$
where $X$ is the $n \times n$ extended design matrix with rows $[x_i', \delta_i']$. By computation, $X$ is invertible, as is $W$, so
$$X'W(y - \tilde\mu) = 0 \;\Rightarrow\; y - \tilde\mu = 0 \;\Rightarrow\; \tilde\mu = y$$
6 / 67

The likelihood ratio test (LRT) statistic is the ratio of the likelihood at the hypothesized parameter values (reduced model) to the likelihood of the data (saturated model) at the MLE(s):
$$\lambda(y) = \frac{\text{likelihood of reduced model}}{\text{likelihood of saturated model}}$$
$$g(\mu_i) = \beta_0 + x_{i1}\beta_1 + \cdots + x_{i(p-1)}\beta_{p-1} + \tau_0 + \delta_{i1}\tau_1 + \cdots + \delta_{i(r-1)}\tau_{r-1}$$
The likelihood ratio statistic for testing $H_0: \tau = 0$ is
$$-2\log\lambda(y) = 2[\,l(y; \phi, y) - l(y; \phi, \hat\mu)\,]$$
Because the alternative model is saturated, this is also viewed as a measure of how well the null model $g(\mu_i) = x_i'\beta$ fits the data.
7 / 67

Deviance
$$f(y_j \mid \theta_j, \phi) = \exp\left\{ \frac{y_j\theta_j - b(\theta_j)}{a(\phi)} + c(y_j, \phi) \right\}$$
Let $\hat\mu$ be the MLE under $H_0$, recall that the saturated-model MLE is $y$, and let $a_j(\phi) = \phi/w_j$. Then
$$-2\log\lambda(y) = \frac{1}{\phi} \sum_{j=1}^n 2w_j\left[ (\theta_j(y) - \theta_j(\hat\mu))\,y_j - \left(b(\theta_j(y)) - b(\theta_j(\hat\mu))\right) \right] = \frac{1}{\phi}\,D(y, \hat\mu)$$
- $D(y, \hat\mu)$ is called the deviance
- $\frac{1}{\phi}\,D(y, \hat\mu)$ is called the scaled deviance, usually denoted $D^*(y, \hat\mu)$
8 / 67

Example: Normal distribution deviance
For the normal distribution,
$$f(y \mid \mu, \sigma^2) = \exp\left\{ \frac{y\mu - 0.5\mu^2}{\sigma^2} + c(y, \sigma) \right\}$$
$\theta = \mu$, $b(\theta) = 0.5\mu^2$, $a(\phi) = \sigma^2/w_j$, $w_j = 1$, $\phi = \sigma^2$.
$$\text{Deviance} = \sum_{j=1}^n 2\left\{ (y_j - \hat\mu_j)y_j - \left(\tfrac{1}{2}y_j^2 - \tfrac{1}{2}\hat\mu_j^2\right) \right\}
= \sum_{j=1}^n 2\left\{ y_j^2 - \hat\mu_j y_j - \tfrac{1}{2}y_j^2 + \tfrac{1}{2}\hat\mu_j^2 \right\}
= \sum_{j=1}^n \left\{ y_j^2 - 2\hat\mu_j y_j + \hat\mu_j^2 \right\}
= \sum_{j=1}^n (y_j - \hat\mu_j)^2$$
= the residual sum of squares.
9 / 67
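A quick check of this identity in R: for a Gaussian GLM, deviance() returns exactly the residual sum of squares. This is a minimal sketch using the built-in cars data, not the course data.

fit <- glm(dist ~ speed, data = cars, family = gaussian)
c(deviance = deviance(fit),
  rss = sum((cars$dist - fitted(fit))^2))   # the two numbers agree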

Distribution and deviance

Distribution   Deviance
Binomial       $2\sum_j \left\{ y_j\log\left(\frac{y_j}{n_j\hat\mu_j}\right) + (n_j - y_j)\log\left(\frac{n_j - y_j}{n_j - n_j\hat\mu_j}\right) \right\}$
Poisson        $2\sum_j w_j\left\{ y_j\log\left(\frac{y_j}{\hat\mu_j}\right) - (y_j - \hat\mu_j) \right\}$
Normal         $\sum_j w_j (y_j - \hat\mu_j)^2$

- the larger the deviance, the poorer the fit of the model; large values of $D(y; \hat\mu)$ suggest a general lack of fit
- if the model fits perfectly, $D(y; \hat\mu) = 0$
10 / 67

Remarks:
1. The standard Poisson and binomial models have $\phi = 1$, so deviance = scaled deviance.
2. In certain settings, $D^*(y, \hat\mu) \sim \chi^2_{n-p}$, where $n - p$ is the difference in the number of parameters between the null and saturated models.
- For the normal model, this result is exact, but it is of not much practical use since we don't know $\sigma^2$.
- For the binomial model, $y_j \sim \text{Bin}(n_j, \mu_j)$, $j = 1, 2, \ldots, n$: this assumes the $n_j$ are large and the number of binomial observations $n$ is fixed.
- For the Poisson model, $y_j \sim \text{Poisson}(\mu_j)$, $j = 1, 2, \ldots, n$: this requires the $\mu_j$ to be large and $n$ fixed.
- Under suitable conditions for the binomial and Poisson models, this result can be used to test the adequacy of the model via the p-value
$$\text{p-value} = P(\chi^2_{n-p} > D_{obs})$$
where $D_{obs}$ is the observed value of the scaled deviance (which equals the raw deviance here, since $\phi = 1$).
11 / 67

3. Even if the approximation breaks down, we can show that $E(D^*(y, \hat\mu)) \approx n - p$. Since large values of $D^*(y, \hat\mu)$ suggest lack of fit, many researchers recommend comparing $D^*(y, \hat\mu)$ to $n - p$ to get a rough idea of lack of fit (see the sketch below):
- $\frac{D^*(y, \hat\mu)}{n - p} < 1$: no evidence of lack of fit
- $\frac{D^*(y, \hat\mu)}{n - p} \gg 1$: some suggestion of lack of fit
However, there is no accepted cutoff for how much greater than 1 the scaled deviance ratio must be to indicate lack of fit.
12 / 67
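In R, this rough check is one line from a fitted GLM. A minimal sketch, using the built-in InsectSprays data rather than the course data:

fit <- glm(count ~ spray, data = InsectSprays, family = poisson)
deviance(fit) / df.residual(fit)   # values much greater than 1 suggest lack of fit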

4. The scaled deviance $D^*$ provides information on whether the model fits the data, while tests on regression coefficients assess the significance of effects assuming the model fits.
5. An alternative GOF measure is the generalized Pearson statistic
$$X^2 = \sum_{j=1}^n \frac{w_j (y_j - \hat\mu_j)^2}{V(\hat\mu_j)}$$
and the scaled Pearson statistic $X^2/\phi$. These are used analogously to the deviance and scaled deviance.
13 / 67

Examples of Pearson statistics
For the Poisson distribution, $y \sim \text{Poisson}(\mu)$:
$$f(y) = P(Y = y) = e^{-\mu}\mu^y/y! = \exp\{ y\log\mu - \mu - \log y! \}$$
$\theta = \log\mu$, $\phi = 1$, $w_j = 1$, $b(\theta) = e^\theta$, $b'(\theta) = e^\theta = \mu$, so $V(y) = E(y)$.
$$X^2 = \sum_{j=1}^n \frac{w_j (y_j - \hat\mu_j)^2}{V(\hat\mu_j)} = \sum_{j=1}^n \frac{(y_j - \hat\mu_j)^2}{\hat\mu_j}$$
$X^2$ reduces to the usual Pearson statistic.
14 / 67
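From a fitted Poisson GLM this statistic is just the sum of squared Pearson residuals; a minimal sketch, again on the built-in InsectSprays data:

fit <- glm(count ~ spray, data = InsectSprays, family = poisson)
X2 <- sum(residuals(fit, type = "pearson")^2)   # sum of (y - muhat)^2 / muhat
c(X2 = X2, df = df.residual(fit))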

Comparing models
Model (1): smaller model (reduced model), $g(\mu_i) = x_i'\beta$
Model (2): larger model (full model), $g(\mu_i) = x_i'\beta + \delta_i'\tau$
The alternative larger model is not necessarily the saturated model, i.e., $p + r < n$: the number of parameters in Model (2) can be less than the number of observations $n$. Assume $\phi$ is fixed. The likelihood ratio test for comparing (1) and (2) is equivalent to testing $H_0: \tau = 0$:
$$D^*(\hat\mu_2, \hat\mu_1) = D^*(y, \hat\mu_1) - D^*(y, \hat\mu_2) \sim \chi^2_r$$
where $\hat\mu_1$ and $\hat\mu_2$ are the MLEs under models (1) and (2) respectively, and $r$ is the number of parameters in $\tau$, that is, the residual df of the smaller model minus the residual df of the larger model.
15 / 67

Wald tests
The Wald test statistic is a function of the difference between the MLE and the hypothesized value, normalized by an estimate of the standard deviation of the MLE. In the binary example, for large $n$,
$$W = \frac{(\hat p - p_0)^2}{\hat p(1 - \hat p)/n} \sim \chi^2_1$$
In general,
$$W = \left\{ L\left[\begin{pmatrix} \hat\beta \\ \hat\tau \end{pmatrix} - \begin{pmatrix} \beta \\ \tau \end{pmatrix}\right] \right\}' \left\{ L\, I^{-1}(\hat\beta, \hat\tau)\, L' \right\}^{-1} \left\{ L\left[\begin{pmatrix} \hat\beta \\ \hat\tau \end{pmatrix} - \begin{pmatrix} \beta \\ \tau \end{pmatrix}\right] \right\} \sim \chi^2_l$$
where $l = \text{rank}(L)$.
16 / 67

For example:
- Testing $H_0: \beta_i = 0$, i.e., $H_0: L'\beta = 0$ with $L = [0, 0, \ldots, 1, 0, \ldots, 0]$, the $i$th element being 1.
- Testing $H_0: \beta_i = \beta_j$, i.e., $H_0: \beta_i - \beta_j = 0$, or $H_0: L'\beta = 0$ with $L = [0, 0, \ldots, 1, 0, \ldots, -1, 0, \ldots, 0]$, the $i$th and $j$th elements being 1 and -1 respectively.
- Several linear restrictions, e.g. $H_0: \beta_2 + \beta_3 = 1,\ \beta_4 + \beta_6 = 0,\ \beta_5 + \beta_6 = 0$:
$$L = \begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix}, \qquad \beta = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_6 \end{pmatrix}, \qquad c = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$$
$$L\beta = c, \qquad \text{rank}(L) = 3$$
17 / 67

Score tests
If the MLE equals the hypothesized value $p_0$, then $p_0$ would maximize the likelihood and $U(p_0) = 0$. The score statistic measures how far from zero the score function is when evaluated at the null hypothesis. The test statistic for the binary outcome example is
$$S = U(p_0)^2 / I(p_0), \qquad S \sim \chi^2_1$$
The LRT, Wald, and score tests are asymptotically equivalent.
18 / 67
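A minimal R sketch of this score test with a hypothetical 0/1 vector y: for binary data the score function and information at p0 have closed forms, and prop.test() with correct = FALSE reproduces the same chi-square statistic.

y  <- c(1, 0, 0, 1, 1, 0, 1, 1, 0, 1)        # assumed data
n  <- length(y); p0 <- 0.5
U  <- (sum(y) - n * p0) / (p0 * (1 - p0))    # score function at p0
I  <- n / (p0 * (1 - p0))                    # Fisher information at p0
S  <- U^2 / I                                # = n * (mean(y) - p0)^2 / (p0 * (1 - p0))
c(S = S, p.value = pchisq(S, 1, lower.tail = FALSE))
# prop.test(sum(y), n, p = p0, correct = FALSE) gives the same chi-square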

Example: logistic regression with admission data
$$\text{logit}(\mu_i) = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_{p-1} x_{i(p-1)} = x_i'\beta$$
$$\mu_i = E(y_i) = \frac{\exp(x_i'\beta)}{1 + \exp(x_i'\beta)}$$
$$l = \sum_{i=1}^n y_i \log\left( \frac{\exp(x_i'\beta)}{1 + \exp(x_i'\beta)} \right) + \sum_{i=1}^n (1 - y_i) \log\left( 1 - \frac{\exp(x_i'\beta)}{1 + \exp(x_i'\beta)} \right)$$
The $p$ score equations cannot be solved analytically. It is common to use a numerical algorithm, such as the Newton-Raphson algorithm, to obtain the MLEs. The information matrix $I$ is the $p \times p$ matrix built from the second partial derivatives of the log-likelihood with respect to the parameters; the inverse of the information matrix is the covariance matrix for $\hat\beta$.
19 / 67
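To make the Newton-Raphson step concrete, here is a minimal, self-contained sketch of the iteration for logistic regression (the function name and defaults are illustrative, not the course's code); for the canonical logit link, Newton-Raphson and Fisher scoring coincide.

irls_logit <- function(X, y, tol = 1e-8, maxit = 25) {
  # X: n x p design matrix (including an intercept column); y: 0/1 response
  beta <- rep(0, ncol(X))
  for (it in seq_len(maxit)) {
    mu    <- 1 / (1 + exp(-drop(X %*% beta)))  # fitted probabilities
    W     <- mu * (1 - mu)                     # working weights = Var(y_i)
    score <- t(X) %*% (y - mu)                 # score vector U(beta)
    info  <- t(X) %*% (X * W)                  # Fisher information X'WX
    step  <- solve(info, score)                # Newton-Raphson update
    beta  <- beta + drop(step)
    if (max(abs(step)) < tol) break
  }
  list(coef = beta, vcov = solve(info))        # inverse information = Var(beta-hat)
}

The coefficients agree with glm(y ~ X - 1, family = binomial) up to the convergence tolerance.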

Testing a single logistic regression coefficient
To test a single logistic regression coefficient, we use the Wald test:
$$\frac{\hat\beta_j - \beta_{j0}}{\hat{se}(\hat\beta_j)} \sim N(0, 1)$$
$\hat{se}(\hat\beta_j)$ is the square root of the corresponding diagonal element of the inverse of the estimated information matrix.
- This value is given to you in the R output for $\beta_{j0} = 0$.
As in linear regression, this test is conditional on all other coefficients being in the model.
20 / 67

Fitting a glm in R, we have the following results:

myfit0 <- glm(admit ~ gpa, data = ex.data, family = "binomial")
summary(myfit0)
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -4.3576     1.0353  -4.209 2.57e-05 ***
## gpa           1.0511     0.2989   3.517 0.000437 ***

The fitted model is $\text{logit}(\hat\mu_i) = -4.3576 + 1.0511\,\text{gpa}_i$. The column labelled z value is the Wald test statistic, $1.0511/0.2989 = 3.517$; since the p-value is far below 0.05, reject $H_0: \beta_1 = 0$ and conclude that GPA has a significant effect on the log odds of admission.
21 / 67

Confidence intervals for the coefficients and the odds ratios
$$\text{logit}(\mu_i) = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_{p-1} x_{i(p-1)} = x_i'\beta$$
A $(1 - \alpha) \times 100\%$ confidence interval for $\beta_j$, $j = 0, 1, \ldots, p-1$, can be calculated as
$$\hat\beta_j \pm Z_{1-\alpha/2}\, \hat{se}(\hat\beta_j)$$
The $(1 - \alpha) \times 100\%$ confidence interval for the odds ratio over a one-unit change in $x_j$ is
$$\left[ \exp\!\big(\hat\beta_j - Z_{1-\alpha/2}\, \hat{se}(\hat\beta_j)\big),\ \exp\!\big(\hat\beta_j + Z_{1-\alpha/2}\, \hat{se}(\hat\beta_j)\big) \right]$$
22 / 67
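These Wald intervals are easy to reproduce from a fitted model; a minimal sketch at the 95% level, assuming myfit is the fitted logistic regression used elsewhere in these notes:

est <- coef(myfit)
se  <- sqrt(diag(vcov(myfit)))                 # SEs from the inverse information
ci  <- cbind(est - qnorm(0.975) * se, est + qnorm(0.975) * se)
exp(ci)                                        # odds-ratio scale
# confint.default(myfit) returns the same Wald intervals;
# confint(myfit) instead profiles the likelihood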

Example
Fit admission status with gre, gpa and rank:

## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.989979   1.139951  -3.500 0.000465 ***
## gre          0.002264   0.001094   2.070 0.038465 *
## gpa          0.804038   0.331819   2.423 0.015388 *
## rank2       -0.675443   0.316490  -2.134 0.032829 *
## rank3       -1.340204   0.345306  -3.881 0.000104 ***
## rank4       -1.551464   0.417832  -3.713 0.000205 ***

The odds ratio for a one-unit change in gpa is $\exp(0.8040) = 2.2345$. The 95% CI of the odds ratio for a one-unit change in gpa is
$$[\exp(0.8040 - 1.96 \times 0.3318),\ \exp(0.8040 + 1.96 \times 0.3318)] = [e^{0.1537}, e^{1.4544}] = [1.1661, 4.2817]$$
23 / 67

exp(cbind(OR = coef(myfit), confint(myfit)))
## Waiting for profiling to be done...
##                    OR       2.5 %    97.5 %
## (Intercept) 0.0185001 0.001889165 0.1665354
## gre         1.0022670 1.000137602 1.0044457
## gpa         2.2345448 1.173858216 4.3238349
## rank2       0.5089310 0.272289674 0.9448343
## rank3       0.2617923 0.131641717 0.5115181
## rank4       0.2119375 0.090715546 0.4706961
24 / 67

Testing a single logistic regression variable using the LRT
$$\text{logit}(\mu_i) = \beta_0 + \beta_1\,\text{gre}_i + \beta_2\,\text{gpa}_i + \beta_3 x_{2i} + \beta_4 x_{3i} + \beta_5 x_{4i}$$
$$x_2 = \begin{cases} 1 & \text{if rank 2} \\ 0 & \text{otherwise} \end{cases} \qquad x_3 = \begin{cases} 1 & \text{if rank 3} \\ 0 & \text{otherwise} \end{cases} \qquad x_4 = \begin{cases} 1 & \text{if rank 4} \\ 0 & \text{otherwise} \end{cases}$$
We want to test the effect of the variable rank, i.e. $H_0: \beta_3 = \beta_4 = \beta_5 = 0$. The model under the null hypothesis is
$$\text{logit}(\mu_i) = \beta_0 + \beta_1\,\text{gre}_i + \beta_2\,\text{gpa}_i$$
$$-2\log\lambda(y) = -2\,\big(l(\hat\beta_{H_0}) - l(\hat\beta_{H_a})\big)$$
We need to know both $l(\hat\beta_{H_0})$ and $l(\hat\beta_{H_a})$, so we fit two models:
- the full model with gre, gpa and rank
- the reduced model under $H_0$, with only gre and gpa.
25 / 67

Then $l(\hat\beta_{H_0})$ is the log-likelihood from the model under $H_0$, and $l(\hat\beta_{H_a})$ is the log-likelihood from the full model.
$$-2\log\lambda(y) \sim \chi^2_3$$
26 / 67

Reduced model with gre and gpa:

myfit2 <- glm(admit ~ gre + gpa, data = ex.data, family = "binomial")
summary(myfit2)
##
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept) -4.949378   1.075093  -4.604 4.15e-06 ***
## gre          0.002691   0.001057   2.544   0.0109 *
## gpa          0.754687   0.319586   2.361   0.0182 *
## ---
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 499.98 on 399 degrees of freedom
## Residual deviance: 480.34 on 397 degrees of freedom
## AIC: 486.34
27 / 67

Full model with gre, gpa and rank:

myfit <- glm(admit ~ gre + gpa + rank, data = ex.data, family = "binomial")
summary(myfit)
##
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.989979   1.139951  -3.500 0.000465 ***
## gre          0.002264   0.001094   2.070 0.038465 *
## gpa          0.804038   0.331819   2.423 0.015388 *
## rank2       -0.675443   0.316490  -2.134 0.032829 *
## rank3       -1.340204   0.345306  -3.881 0.000104 ***
## rank4       -1.551464   0.417832  -3.713 0.000205 ***
##     Null deviance: 499.98 on 399 degrees of freedom
## Residual deviance: 458.52 on 394 degrees of freedom
## AIC: 470.52
28 / 67

Compare the two models: $-2\log\lambda$ (relative to the saturated model) is listed as the residual deviance in the output of summary(). For the full model, $-2\log\lambda = 458.52$; for the reduced model, $-2\log\lambda = 480.34$. The deviance difference is $480.34 - 458.52 = 21.826 > 7.81 = \chi^2_{0.95}(3)$, so we reject the null hypothesis and conclude that the reduced model is not adequate.

anova(myfit, myfit2)
## Analysis of Deviance Table
## Model 1: admit ~ gre + gpa + rank
## Model 2: admit ~ gre + gpa
##   Resid. Df Resid. Dev Df Deviance
## 1       394     458.52
## 2       397     480.34 -3  -21.826
qchisq(0.95, 3)
## [1] 7.814728
pchisq(21.826, 3, lower.tail = FALSE)
## [1] 7.09e-05
29 / 67

Testing groups of variables using the LRT
Suppose instead of testing just one variable, we want to test a group of variables. This follows naturally from the likelihood ratio test. Let's look at it by example:
$$\text{logit}(\mu_i) = \beta_0 + \beta_1\,\text{gre}_i + \beta_2\,\text{gpa}_i + \beta_3 x_{2i} + \beta_4 x_{3i} + \beta_5 x_{4i}$$
We want to test $H_0: \beta_1 = \beta_2 = \beta_3 = \beta_4 = \beta_5 = 0$ versus the full model.
30 / 67

Reduced model: intercept-only model

myfit0 <- glm(admit ~ 1, data = ex.data, family = "binomial")
summary(myfit0)
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -0.7653     0.1074  -7.125 1.04e-12 ***
## (Dispersion parameter for binomial family taken to be 1)
##     Null deviance: 499.98 on 399 degrees of freedom
## Residual deviance: 499.98 on 399 degrees of freedom
## AIC: 501.98

Notice that the null deviance and the residual deviance are the same, since we didn't use any x information in the modeling.
31 / 67

Compare the intercept-only model with the full model:

anova(myfit0, myfit, test = "Chisq")
## Analysis of Deviance Table
##
## Model 1: admit ~ 1
## Model 2: admit ~ gre + gpa + rank
##   Resid. Df Resid. Dev Df Deviance  Pr(>Chi)
## 1       399     499.98
## 2       394     458.52  5   41.459 7.578e-08 ***

Reject the reduced model in favor of the full model; df = 5.
32 / 67

Model selection

upper <- formula(~ gre + gpa + rank, data = ex.data)
model.aic = step(myfit0, scope = list(lower = ~., upper = upper))
## Start:  AIC=501.98
## admit ~ 1
##
##        Df Deviance    AIC
## + rank
## + gre
## + gpa
## <none>

The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model relative to each of the other models, and thus provides a means for model selection.
33 / 67

## Step:  AIC=472.88
## admit ~ rank + gpa
##
##        Df Deviance    AIC
## + gre
## <none>
## - gpa
## - rank
##
## Step:  AIC=470.52
## admit ~ rank + gpa + gre
##
##        Df Deviance    AIC
## <none>
## - gre
## - gpa
## - rank
34 / 67

The smallest AIC is 470.52, with variables rank, gpa and gre. The second smallest is AIC = 472.88, with variables rank and gpa. By model comparison for these two models, we choose the full model with rank, gpa and gre.
35 / 67

Wald test

# test that the coefficient for rank=2 is equal to the coefficient for rank=3
l <- cbind(0, 0, 0, 1, -1, 0)
wald.test(b = coef(myfit), Sigma = vcov(myfit), L = l)   # wald.test() is from the aod package
## Wald test:
## ----------
##
## Chi-squared test:
## X2 = 5.5, df = 1, P(> X2) = 0.019

Since the p-value for the test is 0.019, conclude that the coefficient for rank=2 is not equal to the coefficient for rank=3: there is a significant difference between the effects on the log odds of admission for rank 2 and rank 3 university applicants.
36 / 67

Assessment of model fit
- Model selection
- Residuals: can be useful for identifying potential outliers (observations not well fit by the model) or misspecified models. Residuals are not very useful in logistic regression.
  - Raw residuals
  - Deviance residuals
  - Pearson residuals
- Influence
  - Cook's distance: measures the influence of case $i$ on all of the fitted values $\hat g_i$
  - Leverage
- Prediction
37 / 67

Residuals
1. Raw residuals: $y_j - \hat\mu_j$; these are called response residuals for GLMs. Since the variance of the response is not constant for most GLMs, we need some modification.
2. Deviance residuals $d_j$: the deviance residual for the $j$th observation is the signed square root of the contribution of the $j$th case to the sum for the model deviance,
$$d_j = \text{sign}(y_j - \hat\mu_j)\sqrt{d_j^2}, \qquad D(y, \hat\mu) = \sum_{j=1}^n d_j^2$$
- useful for determining if individual points are not well fit by the model
- you can get the deviance residuals using the function residuals() in R
38 / 67

3. Pearson residuals $\Gamma_j$:
$$\Gamma_j = \frac{\sqrt{w_j}}{\sqrt{V(\hat\mu_j)}}\,(y_j - \hat\mu_j), \qquad X^2 = \sum_{j=1}^n \Gamma_j^2$$
Example: for the Poisson distribution, $y \sim \text{Poisson}(\mu)$,
$$f(y) = P(Y = y) = e^{-\mu}\mu^y/y! = \exp\{y\log\mu - \mu - \log y!\}$$
$\theta = \log\mu$, $\phi = 1$, $w_j = 1$, $b(\theta) = e^\theta$, $b'(\theta) = e^\theta = \mu$, $V(\mu_j) = \mu_j$, $V(\hat\mu_j) = e^{\hat\theta_j}$.
Pearson residual: $\Gamma_j = (y_j - \hat\mu_j)/\sqrt{\hat\mu_j}$.
Recall that the deviance for the Poisson is
$$2\sum_j w_j\left\{ y_j\log\left(\frac{y_j}{\hat\mu_j}\right) - (y_j - \hat\mu_j) \right\}$$
so
$$d_j = \text{sign}(y_j - \hat\mu_j)\sqrt{2\left\{ y_j\log\left(\frac{y_j}{\hat\mu_j}\right) - (y_j - \hat\mu_j) \right\}}$$
39 / 67

Example: logistic regression
$$\log\frac{\mu_i}{1 - \mu_i} = \hat\beta_0 + \hat\beta_1 x_{i1} + \hat\beta_2 x_{i2}$$
$\hat\mu_i$: fitted probabilities
- Raw residual: $y_i - \hat\mu_i$
- Pearson residuals: $\Gamma_i = \dfrac{y_i - \hat\mu_i}{\sqrt{\hat\mu_i(1 - \hat\mu_i)}}$; this is based on the idea of subtracting off the mean and dividing by the standard deviation. If we replace $\hat\mu_i$ by $\mu_i$, then $\Gamma_i$ has mean 0 and variance 1.
- Deviance residuals: based on the contribution of each point to the likelihood. For logistic regression, $l = \sum_{i=1}^n \{ y_i\log\hat\mu_i + (1 - y_i)\log(1 - \hat\mu_i) \}$, so
$$d_j = \text{sign}(y_j - \hat\mu_j)\sqrt{-2\left\{ y_j\log\hat\mu_j + (1 - y_j)\log(1 - \hat\mu_j) \right\}}$$
- if $y_i = 1$, $\text{sign}(y_i - \hat\mu_i) = 1$; if $y_i = 0$, $\text{sign}(y_i - \hat\mu_i) = -1$
40 / 67

Each of these types of residuals can be squared and added together to create an RSS-like (residual sum of squares) statistic:
- Deviance: $D = \sum_{i=1}^n d_i^2$
- Pearson statistic: $X^2 = \sum_{i=1}^n \Gamma_i^2$
41 / 67
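In R both quantities fall out of residuals(); a minimal sketch, assuming myfit is the fitted logistic regression from earlier:

r.dev  <- residuals(myfit, type = "deviance")
r.pear <- residuals(myfit, type = "pearson")
c(D = sum(r.dev^2),        # equals deviance(myfit)
  X2 = sum(r.pear^2))      # generalized Pearson statistic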

4. Scaled Pearson and deviance residuals
$$\frac{\Gamma_j}{\sqrt\phi} = \frac{y_j - \hat\mu_j}{\sqrt{V(\hat\mu_j)\,\phi/w_j}}$$
Recall $\text{Var}(y) = b''(\theta)\,\phi/w = V(\mu)\,\phi/w$, so the scaled Pearson residual centers and scales $y_j$ by its estimated mean and standard deviation. Hence, the scaled Pearson residuals are standardized. Both the scaled Pearson residuals $\Gamma_j/\sqrt\phi$ and the scaled deviance residuals $d_j/\sqrt\phi$ have approximately mean 0 and variance 1.
42 / 67

$$D^*(y; \hat\mu) = \frac{1}{\phi}\,D(y, \hat\mu) = \frac{1}{\phi}\sum_j d_j^2, \qquad E(D^*(y; \hat\mu)) \approx n - p$$
$$\text{Var}\left(\frac{d_j}{\sqrt\phi}\right) \approx \frac{n - p}{n} = 1 - p/n$$
On average, this is less than 1, but not by much if $p$ is small relative to $n$.
43 / 67

5. Standardized Pearson and deviance residuals
$$\Gamma_{pj} = \frac{\Gamma_j}{\sqrt{\phi(1 - h_{jj})}}, \qquad \Gamma_{Dj} = \frac{d_j}{\sqrt{\phi(1 - h_{jj})}}$$
This adjusts the scaled residuals to have mean 0 and variance 1; $h_{jj}$ is the $j$th case leverage, defined as the $j$th diagonal element of the hat matrix
$$H = W^{1/2}X(X'WX)^{-1}X'W^{1/2}$$
- $W^{1/2}$ is the diagonal matrix with diagonal elements $\sqrt{w_{ii}}$
- note that $\hat\mu \approx Hy$
Generally speaking, the standardized deviance residuals tend to be preferable because they are more symmetric than the standardized Pearson residuals, but both are commonly used.
44 / 67
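R computes these directly; a minimal sketch, again assuming myfit from earlier:

rs.dev  <- rstandard(myfit)                      # standardized deviance residuals
rs.pear <- rstandard(myfit, type = "pearson")    # standardized Pearson residuals
summary(rs.dev)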

6. Studentized deleted residuals
Recall that in linear regression there were a number of diagnostic measures based on the idea of leaving observation $i$ out, refitting the model, and seeing how various things change (residuals, coefficient estimates, fitted values). The same idea can be extended to generalized linear models:
$$\Gamma_{pj} = \frac{\Gamma_j}{\sqrt{\phi_{(-j)}(1 - h_{jj})}}, \qquad \Gamma_{Dj} = \frac{d_j}{\sqrt{\phi_{(-j)}(1 - h_{jj})}}$$
Studentized residuals less than -2 or greater than +2 deserve closer inspection.
45 / 67
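A sketch of the corresponding one-liner in R (rstudent() uses an approximation rather than literally refitting n times):

rst <- rstudent(myfit)         # approximate studentized deleted residuals
which(abs(rst) > 2)            # cases deserving closer inspection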

7. Outliers
A primary use of residuals is in detecting outliers: observations whose values deviate from the expected range and produce extremely large residuals.
What is an outlier for 0/1 data?
- It is difficult to claim that seeing either a 1 or a 0 constitutes an outlier.
- Too many 0s or 1s in situations where we would not expect them (for example, too many 1s among cases that we think have a small $p_i$) usually suggests a lack of fit.
- Perfectly reasonable observations can have unusually large residuals.
46 / 67

An observation is influential if removing it substantially changes the estimated coefficients or fitted probabilities.
An observation with an extreme value on a predictor variable is called a point with high leverage. Leverage is a measure of how far an independent variable deviates from its mean; in fact, the leverage indicates the geometric extremeness of an observation in the multi-dimensional covariate space.
- These leverage points can have an unusually large effect on the estimates of the logistic regression coefficients.
- Leverages greater than $2\bar h$ or $3\bar h$ cause concern, where $\bar h = p/n$.
47 / 67

plot(hatvalues(myfit))
[Index plot of hatvalues(myfit): leverage versus observation index]
48 / 67

> highleverage <- which(hatvalues(myfit) > .045)   # 0.045 = 3*p/n = 3*6/400
> hatvalues(myfit)[highleverage]
> ex.data[373,]
    admit gre gpa rank
> myfit$fit[373]
> mgre
> mgpa
49 / 67

8. Cook's distance
If $\hat\beta$ is the MLE of $\beta$ under the model $g(\mu_i) = x_i'\beta$ and $\hat\beta_{(-j)}$ is the MLE based on the data but holding out the $j$th observation, then Cook's distance for case $j$ is
$$c_j = \frac{1}{p}\,(\hat\beta - \hat\beta_{(-j)})'\,[\widehat{\text{Var}}(\hat\beta)]^{-1}\,(\hat\beta - \hat\beta_{(-j)}) = \frac{1}{p}\,(\hat\beta - \hat\beta_{(-j)})'\,X'\hat W X\,(\hat\beta - \hat\beta_{(-j)})$$
Some packages don't scale $c_j$ by $p$.
50 / 67

plot(cooks.distance(myfit))
[Index plot of cooks.distance(myfit): Cook's distance versus observation index]
51 / 67

> max(cooks.distance(myfit))
[1]
> highcook <- which((cooks.distance(myfit)) > .05)   # 0.05 is simply a very small critical value of the F distribution
> cooks.distance(myfit)[highcook]
named numeric(0)
52 / 67

Comments:
- In a binomial setup where all the $n_i$ are big, the standardized deviance residuals should be close to Gaussian. The normal probability plot can be used to check this.
- In a Poisson setup where the counts are big, the standardized deviance residuals should be close to Gaussian. The normal probability plot can be used to check this.
- In a binomial setup where the $x_i$ (numbers of successes) are very small in some of the groups, numerical problems sometimes occur in the estimation. This is often seen in very large standard errors of the parameter estimates.
53 / 67

Residuals are less informative for logistic regression than they are for linear regression:
- yes/no (1 or 0) outcomes contain less information than continuous ones
- the fact that the adjusted response depends on the fit hampers our ability to use residuals as external checks on the model
- we are making fewer distributional assumptions in logistic regression, so there is no need to inspect residuals for, say, skewness or non-constant variance
Issues of outliers and influential observations are just as relevant for logistic regression and GLMs as they are for linear regression. If influential observations are present, it may or may not be appropriate to change the model, but you should at least understand why some observations are so influential.
54 / 67

Prediction
Fitted probabilities:

### prediction, fitted probabilities
myfit$fit[1:20]   # fitted probabilities
##
##
##
##
##
##
55 / 67

Predicted probabilities:

mgre <- tapply(ex.data$gre, ex.data$rank, mean)   # mean of gre by rank
mgpa <- tapply(ex.data$gpa, ex.data$rank, mean)   # mean of gpa by rank
newdata1 <- with(ex.data, data.frame(gre = mgre, gpa = mgpa, rank = factor(1:4)))
newdata1
##   gre gpa rank
## 1          1
## 2          2
## 3          3
## 4          4
56 / 67

newdata1$rankp <- predict(myfit, newdata = newdata1, type = "response")
newdata1
##   gre gpa rank rankp
## 1          1
## 2          2
## 3          3
## 4          4

The first row of rankp gives the predicted probability of being accepted into a graduate program for students from the highest-prestige undergraduate institutions (rank = 1), with gre and gpa set to their rank-1 means.
57 / 67

Translate the estimated probabilities into a predicted outcome
1. Use 0.5 as a cutoff:
- if $\hat\mu_i$ for a new observation is greater than 0.5, its predicted outcome is $y = 1$;
- if $\hat\mu_i$ for a new observation is less than or equal to 0.5, its predicted outcome is $y = 0$.
This approach is reasonable when (a) it is equally likely in the population of interest that the outcomes 0 and 1 will occur, and (b) the costs of incorrectly predicting 0 and 1 are approximately the same.
58 / 67

2. Find the best cutoff for the data set on which the logistic regression model is based:
- evaluate different cutoff values, and for each cutoff value calculate the proportion of observations that are incorrectly predicted;
- select the cutoff value that minimizes the proportion of incorrectly predicted outcomes.
This approach is reasonable when (a) the data set is a random sample from the population of interest, and (b) the costs of incorrectly predicting 0 and 1 are the same.
59 / 67

Example:
$$\text{logit}(\mu_i) = \beta_0 + \beta_1\,\text{gre}_i + \beta_2\,\text{gpa}_i + \beta_3 x_{2i} + \beta_4 x_{3i} + \beta_5 x_{4i}$$
If we use the cutoff of 0.5, we get the following results:

> table(fitted(myfit)>.5, ex.data$admit)
          0   1
  FALSE 254  97
  TRUE   19  30
> t1 <- table(fitted(myfit)>.5, ex.data$admit)
> (t1[2,1]+t1[1,2])/sum(t1)
[1] 0.29

Recall that 1 means admission, 0 no admission. We misclassify people (97+19)/400 = 29% of the time.
60 / 67

Instead, let's try finding a classification rule that minimizes misclassification in our data set.

> for(p in seq(.35,.9,.05))
+ {t1 <- table(fitted(myfit)>p, ex.data$admit)
+  cat(p, (t1[2,1]+t1[1,2])/sum(t1), "\n")}
Error in t1[2, 1] : subscript out of bounds
> max(fitted(myfit))
[1]

(The loop fails once the cutoff exceeds the largest fitted probability, because the table then has only one row.) It looks like we can't do much better than 29%.
61 / 67

Receiver operating characteristic (ROC) curve
The ROC curve is a plot of sensitivity against 1 - specificity. It is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.
- The true positive rate is also known as sensitivity.
- The false positive rate is also known as the fall-out or probability of false alarm, and can be calculated as 1 - specificity.
The ROC curve is thus the sensitivity as a function of the fall-out.
62 / 67

# ROC curve
p1 <- matrix(0, nrow = 12, ncol = 3)
i <- 1
for(p in seq(0.15, .7, .05)){
  t1 <- table(fitted(myfit)>p, ex.data$admit)
  p1[i,] <- c(p, (t1[2,2])/sum(t1[,2]), (t1[1,1])/sum(t1[,1]))
  i <- i+1
}
plot(1-p1[,3], p1[,2], type = "o",
     xlab = "1 - specificity/false positive rate",
     ylab = "sensitivity/true positive rate")
# p1[,2] true positive rate
# p1[,3] true negative rate
# 1-p1[,3] false positive rate
63 / 67
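The same curve and its area can be obtained with the pROC package; a minimal sketch, assuming pROC is installed and myfit is the fitted model:

library(pROC)
roc.obj <- roc(ex.data$admit, fitted(myfit))   # response, then predictor
plot(1 - roc.obj$specificities, roc.obj$sensitivities, type = "l",
     xlab = "1 - specificity/false positive rate",
     ylab = "sensitivity/true positive rate")
auc(roc.obj)                                   # area under the ROC curve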

[ROC curve: sensitivity/true positive rate versus 1 - specificity/false positive rate]
64 / 67

Comments:
The area under the ROC curve can give us insight into the predictive ability of the model. If it is equal to 0.5 (the ROC curve coincides with the 45-degree diagonal), the model can be thought of as predicting at random. Values close to 1 indicate that the model has good predictive ability. The curve can also be thought of as a plot of the power as a function of the Type I error of the decision rule (when the performance is calculated from just a sample of the population, these can be thought of as estimators of those quantities).
65 / 67

Somers' Dxy rank correlation
A similar measure is Somers' Dxy rank correlation between the predicted probabilities and the observed outcomes. It is given by
$$D_{xy} = 2(c - 0.5)$$
where $c$ is the area under the ROC curve. When $D_{xy} = 0$, the model is making random predictions; when $D_{xy} = 1$, the model discriminates perfectly.
66 / 67

> library(Hmisc)
> somers2(fitted(myfit), ex.data$admit)
        C       Dxy         n   Missing
                          400         0

The area under the ROC curve is the reported C, and $D_{xy} = 2(C - 0.5)$.
67 / 67
