Stat 579: Generalized Linear Models and Extensions


Stat 579: Generalized Linear Models and Extensions
Yan Lu
January 2018, week 3
1 / 67

Hypothesis tests
- Likelihood ratio tests
- Wald tests
- Score tests

Generalized likelihood ratio tests

Let $Y = (Y_1, Y_2, \dots, Y_n)$, where $Y_1, Y_2, \dots, Y_n$ have joint pdf $f(y; \theta)$ for $\theta \in \Omega$, and consider the hypothesis

$$H_0: \theta \in \Omega_0 \quad \text{vs.} \quad H_a: \theta \in \Omega - \Omega_0$$

The generalized likelihood ratio (GLR) is defined by

$$\lambda(y) = \frac{\max_{\theta \in \Omega_0} f(y; \theta)}{\max_{\theta \in \Omega} f(y; \theta)} = \frac{f(y; \hat\theta_0)}{f(y; \hat\theta)}$$

- $\hat\theta$ denotes the usual MLE of $\theta$
- $\hat\theta_0$ denotes the MLE under the restriction that $H_0$ is true

If $y \sim f(y; \theta_1, \dots, \theta_k)$, then under $H_0: (\theta_1, \theta_2, \dots, \theta_r) = (\theta_{10}, \theta_{20}, \dots, \theta_{r0})$, $r < k$, approximately for large $n$,

$$-2\log\lambda(y) \sim \chi^2(r)$$

An appropriate size-$\alpha$ test rejects $H_0$ if $-2\log\lambda(y) \geq \chi^2_{1-\alpha}(r)$.

Example

For a binary outcome, if the hypothesis is $H_0: p = p_0$ vs. $H_a: p \neq p_0$, then

$$\ell(\hat\mu_0) = \left(\sum_{i=1}^n y_i\right)\log(p_0) + \left(n - \sum_{i=1}^n y_i\right)\log(1 - p_0)$$

$$\ell(\hat\mu) = \left(\sum_{i=1}^n y_i\right)\log(\hat p) + \left(n - \sum_{i=1}^n y_i\right)\log(1 - \hat p)$$

$$\lambda(y) = \exp(\ell(\hat\mu_0))/\exp(\ell(\hat\mu)), \qquad -2\log\lambda(y) \sim \chi^2_1$$
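This binomial LRT can be sketched numerically (a Python illustration with hypothetical data, 127 successes in 400 trials, testing $H_0: p = 0.5$; these counts are not from the lecture):

```python
import math

def binom_loglik(p, y_sum, n):
    # log-likelihood of n Bernoulli trials with y_sum successes
    return y_sum * math.log(p) + (n - y_sum) * math.log(1 - p)

n, y_sum = 400, 127          # hypothetical data, not from the lecture
p_hat = y_sum / n            # unrestricted MLE
p0 = 0.5                     # hypothesized value under H0
lrt = -2 * (binom_loglik(p0, y_sum, n) - binom_loglik(p_hat, y_sum, n))
print(round(lrt, 2))         # well above the chi-square(1) cutoff 3.84
```

The statistic is compared to a $\chi^2_1$ reference distribution, exactly as in the general GLR recipe above.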

Measuring goodness of fit: the saturated model

$$\ell(y; \phi, \mu) = \sum_{j=1}^n \left\{ \frac{y_j \theta_j(\mu) - b(\theta_j(\mu))}{a(\phi)} + c(y_j, \phi) \right\}$$

Fit the model by ML with $g(\mu_i) = x_i'\beta = \beta_0 + x_{i1}\beta_1 + \dots + x_{i,p-1}\beta_{p-1}$, and let $\hat\mu$ be the MLE of $\mu$. The maximized value of the log-likelihood is

$$\ell(y; \phi, \hat\mu) = \sum_{j=1}^n \left\{ \frac{y_j \theta_j(\hat\mu) - b(\theta_j(\hat\mu))}{a(\phi)} + c(y_j, \phi) \right\}$$

Now fit the alternative model

$$g(\mu_i) = x_i'\beta + \delta_i'\tau = \beta_0 + x_{i1}\beta_1 + \dots + x_{i,p-1}\beta_{p-1} + \tau_0 + \delta_{i1}\tau_1 + \dots + \delta_{i,r-1}\tau_{r-1}$$

where for now let $r = n - p$, so we have $n$ observations and $n$ regression parameters.

$$g(\mu_i) = \beta_0 + x_{i1}\beta_1 + \dots + x_{i,p-1}\beta_{p-1} + \tau_0 + \delta_{i1}\tau_1 + \dots + \delta_{i,r-1}\tau_{r-1}$$

where for now $r = n - p$, so we have $n$ observations and $n$ regression parameters. If there are no linear dependencies among the predictors, this is the so-called saturated model, which places no constraints on $g(\mu_i)$ and consequently no constraints on $\mu_i$. Let $\tilde\mu$ be the MLE of $\mu$ for the saturated model; we have $\tilde\mu = y$. The score equation under the saturated model is

$$X'W(y - \mu) = 0$$

where $X$ is the $n \times n$ extended design matrix with rows $[x_i', \delta_i']$. By computation, $X$ is invertible, as is $W$, so

$$X'W(y - \mu) = 0 \implies y - \mu = 0 \implies \tilde\mu = y$$

The likelihood ratio test (LRT) statistic is the ratio of the likelihood at the hypothesized parameter values (reduced model) to the likelihood of the data (saturated model) at the MLE(s):

$$\lambda(y) = \frac{\text{likelihood of reduced model}}{\text{likelihood of saturated model}}$$

With the saturated model $g(\mu_i) = \beta_0 + x_{i1}\beta_1 + \dots + x_{i,p-1}\beta_{p-1} + \tau_0 + \delta_{i1}\tau_1 + \dots + \delta_{i,r-1}\tau_{r-1}$, the likelihood ratio statistic for testing $H_0: \tau = 0$ is

$$-2\log\lambda(y) = 2[\ell(y; \phi, y) - \ell(y; \phi, \hat\mu)]$$

Because the alternative model is saturated, this can also be viewed as a measure of how well the null model $g(\mu_i) = x_i'\beta$ fits the data.

Deviance

$$f(y_j \mid \theta_j, \phi) = \exp\left\{ \frac{y_j\theta_j - b(\theta_j)}{a(\phi)} + c(y_j, \phi) \right\}$$

Let $\hat\mu$ be the MLE under $H_0$, let $\tilde\mu = y$ be the MLE for the saturated model, and let $a_j(\phi) = \phi/w_j$. Then

$$-2\log\lambda(y) = \frac{1}{\phi}\sum_{j=1}^n 2w_j\left[(\theta_j(y) - \theta_j(\hat\mu))\,y_j - (b(\theta_j(y)) - b(\theta_j(\hat\mu)))\right] = \frac{1}{\phi}D(y, \hat\mu)$$

- $D(y, \hat\mu)$ is called the deviance
- $\frac{1}{\phi}D(y, \hat\mu)$ is called the scaled deviance, usually denoted $D^*(y, \hat\mu)$

Example: normal distribution deviance

For the normal distribution,

$$f(y \mid \mu, \sigma^2) = \exp\left\{ \frac{y\mu - 0.5\mu^2}{\sigma^2} + c(y, \sigma) \right\}$$

with $\theta = \mu$, $b(\theta) = 0.5\mu^2$, $a(\phi) = \sigma^2/w_j$, $w_j = 1$, $\phi = \sigma^2$. Then

$$\text{Deviance} = \sum_{j=1}^n 2\left\{ (y_j - \hat\mu_j)y_j - \left(\tfrac{1}{2}y_j^2 - \tfrac{1}{2}\hat\mu_j^2\right) \right\} = \sum_{j=1}^n \left\{ y_j^2 - 2\hat\mu_j y_j + \hat\mu_j^2 \right\} = \sum_{j=1}^n (y_j - \hat\mu_j)^2$$

which is the residual sum of squares.

Distribution and deviance

- Binomial: $D = 2\sum_j \left[ y_j\log\left(\dfrac{y_j}{n_j\hat\mu_j}\right) + (n_j - y_j)\log\left(\dfrac{n_j - y_j}{n_j - n_j\hat\mu_j}\right) \right]$
- Poisson: $D = 2\sum_j w_j\left\{ y_j\log\left(\dfrac{y_j}{\hat\mu_j}\right) - (y_j - \hat\mu_j) \right\}$
- Normal: $D = \sum_j w_j (y_j - \hat\mu_j)^2$

- The larger the deviance, the poorer the fit of the model; large values of $D(y; \hat\mu)$ suggest a general lack of fit.
- If the model fits perfectly, $D(y; \hat\mu) = 0$.
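The Poisson entry of the table can be sketched as a small function (a Python illustration with made-up counts and fitted means; the convention $y\log(y/\mu) = 0$ when $y = 0$ follows from the limit):

```python
import math

def poisson_deviance(y, mu, w=None):
    # D = 2 * sum_j w_j * [ y_j*log(y_j/mu_j) - (y_j - mu_j) ]
    w = w or [1.0] * len(y)
    d = 0.0
    for yj, mj, wj in zip(y, mu, w):
        term = -(yj - mj)
        if yj > 0:                      # y*log(y/mu) -> 0 as y -> 0
            term += yj * math.log(yj / mj)
        d += 2 * wj * term
    return d

y  = [2, 0, 5, 3]            # hypothetical counts
mu = [2.5, 0.5, 4.0, 2.0]    # hypothetical fitted means
dev = poisson_deviance(y, mu)
print(round(dev, 3))
```

A quick sanity check of the second bullet: a perfect fit ($\hat\mu = y$) gives deviance 0.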

Remarks:

1. The standard Poisson and binomial models have $\phi = 1$, so deviance = scaled deviance.

2. In certain settings, $D^*(y, \hat\mu) \sim \chi^2_{n-p}$, where $n - p$ is the difference in the number of parameters between the null and saturated models.
- For the normal, this result is exact, but of not much practical use, since we don't know $\sigma^2$.
- For the binomial model, $y_j \sim \text{Bin}(n_j, \mu_j)$, $j = 1, 2, \dots, n$: this assumes the $n_j$ are large and the number of binomial observations $n$ is fixed.
- For the Poisson model, $y_j \sim \text{Poisson}(\mu_j)$, $j = 1, 2, \dots, n$: this requires the $\mu_j$ to be large and $n$ fixed.
- Under suitable conditions for the binomial and Poisson models, this result can be used to test the adequacy of the model using the p-value

$$\text{p-value} = P(\chi^2_{n-p} > D_{obs})$$

where $D_{obs}$ is the observed value of the scaled deviance (equal to the raw deviance here, since $\phi = 1$).

3. Even if the approximation breaks down, we can show that $E(D^*(y, \hat\mu)) \approx n - p$. Since large values of $D^*(y, \hat\mu)$ suggest lack of fit, many researchers recommend comparing $D^*(y, \hat\mu)$ to $n - p$ for a rough idea of lack of fit:

- $\dfrac{D^*(y, \hat\mu)}{n - p} < 1$: no evidence of lack of fit
- $\dfrac{D^*(y, \hat\mu)}{n - p} \gg 1$: some suggestion of lack of fit

However, there is no accepted cutoff for how much greater than 1 this ratio must be to indicate lack of fit.

4. The scaled deviance $D^*$ provides information on whether the model fits the data, while tests on regression coefficients assess the significance of effects assuming the model fits.

5. An alternative goodness-of-fit measure is the generalized Pearson statistic

$$X^2 = \sum_{j=1}^n \frac{w_j (y_j - \hat\mu_j)^2}{V(\hat\mu_j)}$$

and the scaled Pearson statistic $X^{2*} = X^2/\phi$. These are used analogously to the deviance and scaled deviance.

Examples of Pearson statistics

For the Poisson distribution, $y \sim \text{Poisson}(\mu)$,

$$f(y) = P(Y = y) = e^{-\mu}\mu^y/y! = \exp\{y\log\mu - \mu - \log y!\}$$

$\theta = \log\mu$, $\phi = 1$, $w_j = 1$, $b(\theta) = \mu = e^\theta$, $b''(\theta) = e^\theta = \mu$, so $V(y) = E(y)$.

$$X^2 = \sum_{j=1}^n \frac{w_j(y_j - \hat\mu_j)^2}{V(\hat\mu_j)} = \sum_{j=1}^n \frac{(y_j - \hat\mu_j)^2}{\hat\mu_j}$$

$X^2$ reduces to the usual Pearson statistic.

Comparing models

- Model (1): smaller model (reduced model), $g(\mu_i) = x_i'\beta$
- Model (2): larger model (full model), $g(\mu_i) = x_i'\beta + \delta_i'\tau$

The alternative larger model is not necessarily the saturated model; that is, with $p + r < n$, the number of parameters in Model (2) can be less than the number of observations $n$. Assuming $\phi$ is fixed, the likelihood ratio test for comparing (1) and (2) is equivalent to testing $H_0: \tau = 0$:

$$D^*(\hat\mu_2, \hat\mu_1) = D^*(y, \hat\mu_1) - D^*(y, \hat\mu_2) \sim \chi^2_r$$

where $\hat\mu_1$ and $\hat\mu_2$ are the MLEs under models (1) and (2) respectively, and $r$ is the number of parameters in $\tau$, that is, the df of the smaller model minus the df of the larger model.

Wald tests

The Wald test statistic is a function of the difference between the MLE and the hypothesized value, normalized by an estimate of the standard deviation of the MLE. For the binary example, for large $n$,

$$W = \frac{(\hat p - p_0)^2}{\hat p(1 - \hat p)/n} \sim \chi^2_1$$

In general,

$$W = \left\{ L\left[\begin{pmatrix}\hat\beta\\\hat\tau\end{pmatrix} - \begin{pmatrix}\beta\\\tau\end{pmatrix}\right] \right\}' \left\{ L\, I^{-1}(\hat\beta, \hat\tau)\, L' \right\}^{-1} \left\{ L\left[\begin{pmatrix}\hat\beta\\\hat\tau\end{pmatrix} - \begin{pmatrix}\beta\\\tau\end{pmatrix}\right] \right\} \sim \chi^2_l$$

where $l = \text{rank}(L)$.

For example:

- Testing $H_0: \beta_i = 0$, i.e. $H_0: L\beta = 0$ with $L = [0, 0, \dots, 1, 0, \dots, 0]$, where the $i$th element is 1.
- Testing $H_0: \beta_i = \beta_j$, i.e. $H_0: \beta_i - \beta_j = 0$ or $H_0: L\beta = 0$ with $L = [0, 0, \dots, 1, 0, \dots, -1, 0, \dots, 0]$, where the $i$th and $j$th elements are 1 and -1 respectively.
- Several linear restrictions, e.g. $H_0: \beta_2 + \beta_3 = 1,\ \beta_4 + \beta_6 = 0,\ \beta_5 + \beta_6 = 0$:

$$L = \begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix}, \quad \beta = \begin{pmatrix}\beta_1\\\beta_2\\\vdots\\\beta_6\end{pmatrix}, \quad c = \begin{pmatrix}1\\0\\0\end{pmatrix}$$

so that $L\beta = c$ and $\text{rank}(L) = 3$.

Score tests

If the MLE equals the hypothesized value $p_0$, then $p_0$ would maximize the likelihood and $U(p_0) = 0$. The score statistic measures how far from zero the score function is when evaluated at the null hypothesis. The test statistic for the binary outcome example is

$$S = U(p_0)^2 / I(p_0), \qquad S \sim \chi^2_1$$

The LRT, Wald, and score tests are asymptotically equivalent.
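The three tests can be compared numerically on one binary example (a Python sketch with hypothetical data, 127 successes in 400 trials, $H_0: p = 0.5$; for Bernoulli data $U(p_0) = \sum y_i/p_0 - (n - \sum y_i)/(1 - p_0)$ and $I(p_0) = n/(p_0(1 - p_0))$):

```python
n, y_sum, p0 = 400, 127, 0.5   # hypothetical data, not from the lecture
p_hat = y_sum / n

# Wald: squared distance from p0, normalized by the estimated var of the MLE
wald = (p_hat - p0) ** 2 / (p_hat * (1 - p_hat) / n)

# Score: squared score function at p0, normalized by the information at p0
score_U = y_sum / p0 - (n - y_sum) / (1 - p0)
info_I = n / (p0 * (1 - p0))
score = score_U ** 2 / info_I

print(round(wald, 2), round(score, 2))   # both referred to chi-square(1)
```

The two statistics differ in finite samples (here roughly 61.5 vs. 53.3) but lead to the same conclusion, illustrating the asymptotic equivalence noted above.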

Example: logistic regression with admission data

$$\text{logit}(\mu_i) = \beta_0 + \beta_1 x_{i1} + \dots + \beta_{p-1}x_{i,p-1} = x_i'\beta, \qquad \mu_i = E(y_i) = \frac{\exp(x_i'\beta)}{1 + \exp(x_i'\beta)}$$

$$\ell = \sum_{i=1}^n y_i \log\left(\frac{\exp(x_i'\beta)}{1 + \exp(x_i'\beta)}\right) + \sum_{i=1}^n (1 - y_i)\log\left(1 - \frac{\exp(x_i'\beta)}{1 + \exp(x_i'\beta)}\right)$$

The $p$ score functions cannot be solved analytically. It is common to use a numerical algorithm, such as the Newton-Raphson algorithm, to obtain the MLEs. The information matrix $I$ is the $p \times p$ matrix of negative expected second partial derivatives with respect to the parameters; the inverted information matrix is the covariance matrix of $\hat\beta$.

Testing a single logistic regression coefficient

To test a single logistic regression coefficient, we use the Wald test:

$$\frac{\hat\beta_j - \beta_{j0}}{\hat{se}(\hat\beta_j)} \sim N(0, 1)$$

$\hat{se}(\hat\beta_j)$ is calculated from the inverse of the estimated information matrix.

- This value is given in the R output for $\beta_{j0} = 0$.
- As in linear regression, this test is conditional on all other coefficients being in the model.

Fitting the GLM in R, we have the following results:

myfit0 <- glm(admit ~ gpa, data = ex.data, family = "binomial")
summary(myfit0)

            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -4.3576     1.0353  -4.209 2.57e-05 ***
gpa           1.0511     0.2989   3.517 0.000437 ***

The fitted model is $\text{logit}(\hat\mu_i) = -4.3576 + 1.0511\,\text{gpa}_i$. The column labelled "z value" is the Wald test statistic: $3.517 = 1.0511/0.2989$. Since the p-value is very small, we reject $H_0: \beta_1 = 0$ and conclude that GPA has a significant effect on the log odds of admission.
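The z statistic and its two-sided p-value can be reproduced from the estimate and standard error alone (a Python check; the normal tail is obtained via the complementary error function):

```python
import math

beta_hat, se = 1.0511, 0.2989            # gpa row of the summary() output
z = beta_hat / se                        # Wald z statistic
p_value = math.erfc(z / math.sqrt(2))    # two-sided standard-normal p-value
print(round(z, 3))                       # 3.517, the reported "z value"
```

The p-value comes out approximately 0.00044, matching the reported 0.000437 up to rounding.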

Confidence intervals for the coefficients and the odds ratios

$$\text{logit}(\mu_i) = \beta_0 + \beta_1 x_{i1} + \dots + \beta_{p-1}x_{i,p-1} = x_i'\beta$$

A $(1-\alpha) \times 100\%$ confidence interval for $\beta_j$, $j = 0, 1, \dots, p-1$, can be calculated as

$$\hat\beta_j \pm Z_{1-\alpha/2}\,\hat{se}(\hat\beta_j)$$

The $(1-\alpha) \times 100\%$ confidence interval for the odds ratio for a one-unit change in $x_j$ is

$$\left[ \exp\left(\hat\beta_j - Z_{1-\alpha/2}\,\hat{se}(\hat\beta_j)\right),\ \exp\left(\hat\beta_j + Z_{1-\alpha/2}\,\hat{se}(\hat\beta_j)\right) \right]$$

Example

Fit admission status with gre, gpa and rank:

Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.989979   1.139951  -3.500 0.000465 ***
## gre          0.002264   0.001094   2.070 0.038465 *
## gpa          0.804038   0.331819   2.423 0.015388 *
## rank2       -0.675443   0.316490  -2.134 0.032829 *
## rank3       -1.340204   0.345306  -3.881 0.000104 ***
## rank4       -1.551464   0.417832  -3.713 0.000205 ***

The odds ratio for a one-unit change in gpa is $\exp(0.804038) = 2.2345448$. The 95% CI of the odds ratio for a one-unit change in gpa is

$$[\exp(0.8040 - 1.96 \times 0.3318),\ \exp(0.8040 + 1.96 \times 0.3318)] = [e^{0.1537}, e^{1.4543}] = [1.1661, 4.2816]$$
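The gpa interval can be checked in a few lines (a Python sketch; with the full-precision estimate and SE the upper limit comes out 4.2819 rather than the 4.2816 obtained from the rounded inputs above):

```python
import math

beta_hat, se = 0.804038, 0.331819   # gpa row of the output above
z = 1.96                            # normal quantile for a 95% CI
odds_ratio = math.exp(beta_hat)
lo = math.exp(beta_hat - z * se)
hi = math.exp(beta_hat + z * se)
print(round(odds_ratio, 4), round(lo, 4), round(hi, 4))
```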

exp(cbind(OR = coef(myfit), confint(myfit)))
## Waiting for profiling to be done...
##                    OR       2.5 %    97.5 %
## (Intercept) 0.0185001 0.001889165 0.1665354
## gre         1.0022670 1.000137602 1.0044457
## gpa         2.2345448 1.173858216 4.3238349
## rank2       0.5089310 0.272289674 0.9448343
## rank3       0.2617923 0.131641717 0.5115181
## rank4       0.2119375 0.090715546 0.4706961

Testing a single logistic regression variable using the LRT

$$\text{logit}(\mu_i) = \beta_0 + \beta_1\,\text{gre}_i + \beta_2\,\text{gpa}_i + \beta_3 x_{2i} + \beta_4 x_{3i} + \beta_5 x_{4i}$$

$$x_2 = \begin{cases}1 & \text{if rank 2}\\0 & \text{otherwise}\end{cases} \qquad x_3 = \begin{cases}1 & \text{if rank 3}\\0 & \text{otherwise}\end{cases} \qquad x_4 = \begin{cases}1 & \text{if rank 4}\\0 & \text{otherwise}\end{cases}$$

We want to test the effect of the variable rank, i.e. $H_0: \beta_3 = \beta_4 = \beta_5 = 0$. The model under the null hypothesis is

$$\text{logit}(\mu_i) = \beta_0 + \beta_1\,\text{gre}_i + \beta_2\,\text{gpa}_i$$

$$-2\log\lambda(y) = -2\left(\ell(\hat\beta_{H_0}) - \ell(\hat\beta_{H_a})\right)$$

We need to know both $\ell(\hat\beta_{H_0})$ and $\ell(\hat\beta_{H_a})$, so we fit two models:

- the full model with gre, gpa and rank
- the reduced model under $H_0$, with only gre and gpa

Then $\ell(\hat\beta_{H_0})$ is the log-likelihood from the model under $H_0$, and $\ell(\hat\beta_{H_a})$ is the log-likelihood from the full model. Here $-2\log\lambda(y) \sim \chi^2_3$.

Reduced model with gre and gpa:

myfit2 <- glm(admit ~ gre + gpa, data = ex.data, family = "binomial")
summary(myfit2)
##
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept) -4.949378   1.075093  -4.604 4.15e-06 ***
## gre          0.002691   0.001057   2.544   0.0109 *
## gpa          0.754687   0.319586   2.361   0.0182 *
## ---
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 499.98  on 399  degrees of freedom
## Residual deviance: 480.34  on 397  degrees of freedom
## AIC: 486.34

Full model with gre, gpa and rank:

myfit <- glm(admit ~ gre + gpa + rank, data = ex.data, family = "binomial")
summary(myfit)
##
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.989979   1.139951  -3.500 0.000465 ***
## gre          0.002264   0.001094   2.070 0.038465 *
## gpa          0.804038   0.331819   2.423 0.015388 *
## rank2       -0.675443   0.316490  -2.134 0.032829 *
## rank3       -1.340204   0.345306  -3.881 0.000104 ***
## rank4       -1.551464   0.417832  -3.713 0.000205 ***
##     Null deviance: 499.98  on 399  degrees of freedom
## Residual deviance: 458.52  on 394  degrees of freedom
## AIC: 470.52

Compare the two models

The quantity $-2\log\lambda$ relative to the saturated model is listed as the residual deviance in the output of summary().

- For the full model, residual deviance = 458.52.
- For the reduced model, residual deviance = 480.34.

$$\text{Deviance difference} = 480.34 - 458.52 = 21.82 > 7.814728 = \chi^2_{0.95}(3)$$

We reject the null hypothesis and conclude that the reduced model is not adequate.

anova(myfit, myfit2)
## Analysis of Deviance Table
## Model 1: admit ~ gre + gpa + rank
## Model 2: admit ~ gre + gpa
##   Resid. Df Resid. Dev Df Deviance
## 1       394     458.52
## 2       397     480.34 -3  -21.826
qchisq(0.95, 3)
## [1] 7.814728
pchisq(21.826, 3, lower.tail = FALSE)
## [1] 7.090117e-05
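The arithmetic of this comparison is just a difference of residual deviances and a difference of residual degrees of freedom (a Python check on the numbers from the two model outputs):

```python
dev_reduced, df_reduced = 480.34, 397   # admit ~ gre + gpa
dev_full, df_full = 458.52, 394         # admit ~ gre + gpa + rank
lrt = dev_reduced - dev_full            # -2 log lambda for H0: no rank effect
df = df_reduced - df_full               # 3, the number of rank dummies
print(round(lrt, 2), df)                # compare to the chi-square(3) cutoff 7.8147
```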

Testing groups of variables using the LRT

Suppose instead of testing just one variable, we want to test a group of variables. This follows naturally from the likelihood ratio test. Let's look at an example:

$$\text{logit}(\mu_i) = \beta_0 + \beta_1\,\text{gre}_i + \beta_2\,\text{gpa}_i + \beta_3 x_{2i} + \beta_4 x_{3i} + \beta_5 x_{4i}$$

We want to test $H_0: \beta_1 = \beta_2 = \beta_3 = \beta_4 = \beta_5 = 0$ versus the full model.

Reduced model: intercept-only model

myfit0 <- glm(admit ~ 1, data = ex.data, family = "binomial")
summary(myfit0)
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -0.7653     0.1074  -7.125 1.04e-12 ***
## (Dispersion parameter for binomial family taken to be 1)
##     Null deviance: 499.98  on 399  degrees of freedom
## Residual deviance: 499.98  on 399  degrees of freedom
## AIC: 501.98

Notice that the null deviance and residual deviance are the same, since we didn't use any x information in the modeling.

Compare the intercept-only model with the full model

anova(myfit0, myfit, test = "Chisq")
## Analysis of Deviance Table
##
## Model 1: admit ~ 1
## Model 2: admit ~ gre + gpa + rank
##   Resid. Df Resid. Dev Df Deviance  Pr(>Chi)
## 1       399     499.98
## 2       394     458.52  5   41.459 7.578e-08 ***

Reject the reduced model in favor of the full model (df = 5).

Model selection

upper <- formula(~gre+gpa+rank, data = ex.data)
model.aic = step(myfit0, scope = list(lower = ~., upper = upper))
## Start:  AIC=501.98
## admit ~ 1
##
##        Df Deviance    AIC
## + rank  3   474.97 482.97
## + gre   1   486.06 490.06
## + gpa   1   486.97 490.97
## <none>      499.98 501.98

The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model relative to each of the other models, and so provides a means for model selection.

## Step:  AIC=472.88
## admit ~ rank + gpa
##
##        Df Deviance    AIC
## + gre   1   458.52 470.52
## <none>      462.88 472.88
## - gpa   1   474.97 482.97
## - rank  3   486.97 490.97
##
## Step:  AIC=470.52
## admit ~ rank + gpa + gre
##
##        Df Deviance    AIC
## <none>      458.52 470.52
## - gre   1   462.88 472.88
## - gpa   1   464.53 474.53
## - rank  3   480.34 486.34

- The smallest AIC = 470.52, with variables rank, gpa and gre.
- The second smallest AIC = 472.88, with variables rank and gpa.
- By model comparison for these two models, we choose the full model with rank, gpa and gre.

Wald test

# test that the coefficient for rank=2 is equal to the coefficient for rank=3
l <- cbind(0, 0, 0, 1, -1, 0)
wald.test(b = coef(myfit), Sigma = vcov(myfit), L = l)
## Wald test:
## ----------
##
## Chi-squared test:
## X2 = 5.5, df = 1, P(> X2) = 0.019

Since the p-value for the test is 0.019, we conclude that the coefficient for rank=2 is not equal to the coefficient for rank=3; that is, there is a significant difference between the effects on the log odds of admission for rank 2 and rank 3 university applicants.

Assessment of model fit

Model selection
- Residuals: can be useful for identifying potential outliers (observations not well fit by the model) or misspecified models. Residuals are less useful in logistic regression.
  - Raw residuals
  - Deviance residuals
  - Pearson residuals
- Influence
  - Cook's distance: measures the influence of case $i$ on all of the fitted values
  - Leverage
- Prediction

Residuals

1. Raw residuals: $y_j - \hat\mu_j$; these are called response residuals for GLMs. Since the variance of the response is not constant for most GLMs, we need some modification.

2. Deviance residuals $d_j$: the deviance residual for the $j$th observation is the signed square root of the contribution of the $j$th case to the sum for the model deviance,

$$d_j = \text{sign}(y_j - \hat\mu_j)\sqrt{d_j^2}, \qquad D(y, \hat\mu) = \sum_{j=1}^n d_j^2$$

- useful for determining whether individual points are not well fit by the model
- you can get the deviance residuals using the function residuals() in R

3. Pearson residuals $\Gamma_j$:

$$\Gamma_j = \frac{\sqrt{w_j}}{\sqrt{V(\hat\mu_j)}}(y_j - \hat\mu_j), \qquad X^2 = \sum_{j=1}^n \Gamma_j^2$$

Example: for the Poisson distribution, $y \sim \text{Poisson}(\mu)$,

$$f(y) = P(Y = y) = e^{-\mu}\mu^y/y! = \exp\{y\log\mu - \mu - \log y!\}$$

$\theta = \log\mu$, $\phi = 1$, $w_j = 1$, $b(\theta) = \mu = e^\theta$, $b''(\theta) = e^\theta = \mu$, so $V(\mu_j) = \mu_j$ and $V(\hat\mu_j) = e^{\hat\theta_j}$.

Pearson residual: $\Gamma_j = \dfrac{y_j - \hat\mu_j}{\sqrt{\hat\mu_j}}$

Recall that the deviance for the Poisson is $2\sum_j w_j\left\{ y_j\log(y_j/\hat\mu_j) - (y_j - \hat\mu_j) \right\}$, so

$$d_j = \text{sign}(y_j - \hat\mu_j)\sqrt{2\left\{ y_j\log\left(\frac{y_j}{\hat\mu_j}\right) - (y_j - \hat\mu_j) \right\}}$$

Example: logistic regression

$$\log\frac{\mu_i}{1 - \mu_i} = \hat\beta_0 + \hat\beta_1 x_{i1} + \hat\beta_2 x_{i2}$$

$\hat\mu_i$: fitted probabilities

- Raw residual: $y_i - \hat\mu_i$
- Pearson residual: $\Gamma_i = \dfrac{y_i - \hat\mu_i}{\sqrt{\hat\mu_i(1 - \hat\mu_i)}}$
  - based on the idea of subtracting off the mean and dividing by the standard deviation
  - if we replace $\hat\mu_i$ by $\mu_i$, then $\Gamma_i$ has mean 0 and variance 1
- Deviance residuals: based on the contribution of each point to the likelihood. For logistic regression, $\ell = \sum_{i=1}^n\left\{y_i\log\hat\mu_i + (1 - y_i)\log(1 - \hat\mu_i)\right\}$ and

$$d_i = \text{sign}(y_i - \hat\mu_i)\sqrt{-2\left[y_i\log\hat\mu_i + (1 - y_i)\log(1 - \hat\mu_i)\right]}$$

  - if $y_i = 1$, $\text{sign}(y_i - \hat\mu_i) = 1$; if $y_i = 0$, $\text{sign}(y_i - \hat\mu_i) = -1$
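These two residual types can be sketched as a small function (a Python illustration with made-up responses and fitted probabilities):

```python
import math

def logistic_residuals(y, mu):
    # Pearson and deviance residuals for Bernoulli (0/1) outcomes
    pearson, deviance = [], []
    for yi, mi in zip(y, mu):
        pearson.append((yi - mi) / math.sqrt(mi * (1 - mi)))
        loglik_i = yi * math.log(mi) + (1 - yi) * math.log(1 - mi)
        sign = 1.0 if yi == 1 else -1.0
        deviance.append(sign * math.sqrt(-2 * loglik_i))
    return pearson, deviance

# hypothetical cases: a 1 fitted at probability 0.8, and a 0 fitted at 0.3
pres, dres = logistic_residuals([1, 0], [0.8, 0.3])
print([round(r, 4) for r in pres], [round(r, 4) for r in dres])
```

Note the signs: the observed 1 gets positive residuals, the observed 0 gets negative ones, as the sign rule above requires.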

Each of these types of residuals can be squared and added together to create an RSS-like (residual-sum-of-squares-like) statistic:

- Deviance: $D = \sum_{i=1}^n d_i^2$
- Pearson statistic: $X^2 = \sum_{i=1}^n \Gamma_i^2$

4. Scaled Pearson and deviance residuals

$$\frac{\Gamma_j}{\sqrt\phi} = \frac{y_j - \hat\mu_j}{\sqrt{V(\hat\mu_j)\,\phi/w_j}}$$

Recall that $\text{Var}(y) = b''(\theta)\dfrac{\phi}{w} = V(\mu)\dfrac{\phi}{w}$, so the scaled Pearson residual centers and scales $y_j$ by its estimated mean and standard deviation. Hence the scaled Pearson residuals are standardized. Both the scaled Pearson residuals $\Gamma_j/\sqrt\phi$ and the scaled deviance residuals $d_j/\sqrt\phi$ have approximately mean 0 and variance 1.

$$D^*(y; \hat\mu) = \frac{1}{\phi}D(y, \hat\mu) = \frac{1}{\phi}\sum_j d_j^2, \qquad E(D^*(y; \hat\mu)) \approx n - p$$

$$\text{Var}\left(\frac{d_j}{\sqrt\phi}\right) \approx \frac{n - p}{n} = 1 - p/n$$

On average this is less than 1, but not by much if $p$ is small relative to $n$.

5. Standardized Pearson and deviance residuals

$$\Gamma_{pj} = \frac{\Gamma_j}{\sqrt{\phi(1 - h_{jj})}}, \qquad \Gamma_{Dj} = \frac{d_j}{\sqrt{\phi(1 - h_{jj})}}$$

This adjusts the scaled residuals to have mean 0 and variance 1. Here $h_{jj}$ is the $j$th case leverage, defined as the $j$th diagonal element of the hat matrix

$$H = W^{1/2}X(X'WX)^{-1}X'W^{1/2}$$

- $W^{1/2}$ is the diagonal matrix with diagonal elements $\sqrt{w_{ii}}$
- note that $\hat\mu \approx Hy$

Generally speaking, the standardized deviance residuals tend to be preferable because they are more symmetric than the standardized Pearson residuals, but both are commonly used.

6. Studentized deleted residuals

Recall that in linear regression there were a number of diagnostic measures based on the idea of leaving observation $i$ out, refitting the model, and seeing how various things change (residuals, coefficient estimates, fitted values). The same idea can be extended to generalized linear models:

$$\Gamma_{pj} = \frac{\Gamma_j}{\sqrt{\phi_{(-j)}(1 - h_{jj})}}, \qquad \Gamma_{Dj} = \frac{d_j}{\sqrt{\phi_{(-j)}(1 - h_{jj})}}$$

Studentized residuals less than -2 or greater than +2 deserve closer inspection.

7. Outliers

A primary use of residuals is in detecting outliers: observations whose values deviate from the expected range and produce extremely large residuals.

- What is an outlier for 0/1 data? It is difficult to claim that seeing either a 1 or a 0 constitutes an outlier.
- Too many 0s or 1s in situations where we would not expect them (for example, too many 1s among cases we think have small $p_i$) usually suggests a lack of fit.
- Perfectly reasonable observations can have unusually large residuals.

Influential data: an observation is influential if removing it substantially changes the estimates of the coefficients or the fitted probabilities.

- An observation with an extreme value on a predictor variable is called a point with high leverage. Leverage is a measure of how far an independent variable deviates from its mean; it indicates the geometric extremeness of an observation in the multi-dimensional covariate space.
- These high-leverage points can have an unusually large effect on the estimates of the logistic regression coefficients.
- Leverages greater than $2\bar h$ or $3\bar h$ cause concern, where $\bar h = p/n$.
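For the admissions model these cutoffs are easy to compute (a Python check with $p = 6$ parameters and $n = 400$ observations, matching the 0.045 threshold used in the R snippet that follows):

```python
p, n = 6, 400                        # parameters and observations in the fit
h_bar = p / n                        # average leverage
print(h_bar, 2 * h_bar, 3 * h_bar)   # cutoffs: 0.015, 0.03, 0.045
```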

plot(hatvalues(myfit))

[Index plot of hatvalues(myfit): hat values for the 400 observations, ranging from about 0.01 to 0.05.]

> highleverage <- which(hatvalues(myfit) > .045)   # 0.045 = 3*p/n = 3*6/400
> hatvalues(myfit)[highleverage]
       373
0.04921401
> ex.data[373,]
    admit gre  gpa rank
373     1 680 2.42    1
> myfit$fit[373]
      373
0.3765075
> mgre
       1        2        3        4
611.8033 596.0265 574.8760 570.1493
> mgpa
       1        2        3        4
3.453115 3.361656 3.432893 3.318358

8. Cook's distance

If $\hat\beta$ is the MLE of $\beta$ under the model $g(\mu_i) = x_i'\beta$ and $\hat\beta_{(-j)}$ is the MLE based on the data holding out the $j$th observation, then Cook's distance for case $j$ is

$$c_j = \frac{1}{p}(\hat\beta - \hat\beta_{(-j)})'\left[\widehat{\text{Var}}(\hat\beta)\right]^{-1}(\hat\beta - \hat\beta_{(-j)}) = \frac{1}{p}(\hat\beta - \hat\beta_{(-j)})'X'\hat WX(\hat\beta - \hat\beta_{(-j)})$$

Some packages do not scale $c_j$ by $p$.

plot(cooks.distance(myfit))

[Index plot of cooks.distance(myfit): Cook's distances for the 400 observations, all below about 0.02.]

> max(cooks.distance(myfit))
[1] 0.01941192
> highcook <- which((cooks.distance(myfit)) > .05)   # 0.05 is simply a very small critical value from the F distribution
> cooks.distance(myfit)[highcook]
named numeric(0)

Comments:

- In a binomial setup where all $n_i$ are big, the standardized deviance residuals should be close to Gaussian. The normal probability plot can be used to check this.
- In a Poisson setup where the counts are big, the standardized deviance residuals should be close to Gaussian. The normal probability plot can be used to check this.
- In a binomial setup where the $x_i$ (numbers of successes) are very small in some of the groups, numerical problems sometimes occur in the estimation. This is often seen in very large standard errors of the parameter estimates.

Residuals are less informative for logistic regression than they are for linear regression:

- yes/no (1 or 0) outcomes contain less information than continuous ones
- the fact that the adjusted response depends on the fit hampers our ability to use residuals as external checks on the model
- we are making fewer distributional assumptions in logistic regression, so there is no need to inspect residuals for, say, skewness or non-constant variance

Issues of outliers and influential observations are just as relevant for logistic regression and GLMs as they are for linear regression. If influential observations are present, it may or may not be appropriate to change the model, but you should at least understand why some observations are so influential.

Prediction

Fitted probabilities:

### prediction, fitted probabilities
myfit$fit[1:20]   # fitted probabilities
##          1          2          3          4          5
## 0.17262654 0.29217496 0.73840825 0.17838461 0.11835391
##          6          7          8          9         10
## 0.36996994 0.41924616 0.21700328 0.20073518 0.51786820
##         11         12         13         14         15
## 0.37431440 0.40020025 0.72053858 0.35345462 0.6923798
##         16         17         18         19         20
## 0.18582508 0.33993917 0.07895335 0.54022772 0.5735118

Predicted probabilities:

mgre <- tapply(ex.data$gre, ex.data$rank, mean)   # mean of gre by rank
mgpa <- tapply(ex.data$gpa, ex.data$rank, mean)   # mean of gpa by rank
newdata1 <- with(ex.data, data.frame(gre = mgre, gpa = mgpa, rank = factor(1:4)))
newdata1
##        gre      gpa rank
## 1 611.8033 3.453115    1
## 2 596.0265 3.361656    2
## 3 574.8760 3.432893    3
## 4 570.1493 3.318358    4

newdata1$rankp <- predict(myfit, newdata = newdata1, type = "response")
newdata1
##        gre      gpa rank     rankp
## 1 611.8033 3.453115    1 0.5428541
## 2 596.0265 3.361656    2 0.3514055
## 3 574.8760 3.432893    3 0.2195579
## 4 570.1493 3.318358    4 0.1704703

The predicted probability of being accepted into a graduate program is 0.5429 for students from the highest-prestige undergraduate institutions (rank = 1), with gre = 611.8 and gpa = 3.45.

Translating the estimated probabilities into a predicted outcome

1. Use 0.5 as a cutoff.
- If $\hat\mu_i$ for a new observation is greater than 0.5, its predicted outcome is $y = 1$.
- If $\hat\mu_i$ for a new observation is less than or equal to 0.5, its predicted outcome is $y = 0$.

This approach is reasonable when (a) it is equally likely in the population of interest that the outcomes 0 and 1 will occur, and (b) the costs of incorrectly predicting 0 and 1 are approximately the same.

2. Find the best cutoff for the data set on which the logistic regression model is based.
- Evaluate different cutoff values and, for each cutoff value, calculate the proportion of observations that are incorrectly predicted.
- Select the cutoff value that minimizes the proportion of incorrectly predicted outcomes.

This approach is reasonable when (a) the data set is a random sample from the population of interest, and (b) the costs of incorrectly predicting 0 and 1 are the same.

Example:

$$\text{logit}(\mu_i) = \beta_0 + \beta_1\,\text{gre}_i + \beta_2\,\text{gpa}_i + \beta_3 x_{2i} + \beta_4 x_{3i} + \beta_5 x_{4i}$$

If we use the cutoff of 0.5, we get the following results:

> table(fitted(myfit) > .5, ex.data$admit)
          0   1
  FALSE 254  97
  TRUE   19  30
> t1 <- table(fitted(myfit) > .5, ex.data$admit)
> (t1[2,1] + t1[1,2]) / sum(t1)
[1] 0.29

Recall that 1 means admission, 0 no admission. We misclassify people (97+19)/400 = 29% of the time.
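The 29% figure is just the off-diagonal count over the total (a Python check on the 2x2 table above):

```python
# 2x2 table at cutoff 0.5: rows = predicted (FALSE/TRUE), cols = actual (0/1)
pred0_actual0, pred0_actual1 = 254, 97
pred1_actual0, pred1_actual1 = 19, 30
n = pred0_actual0 + pred0_actual1 + pred1_actual0 + pred1_actual1
misclass = (pred1_actual0 + pred0_actual1) / n
print(misclass)   # 0.29
```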

Instead, let's try finding a classification rule that minimizes misclassification in our data set.

> for(p in seq(.35, .9, .05))
+ {t1 <- table(fitted(myfit) > p, ex.data$admit)
+  cat(p, (t1[2,1] + t1[1,2]) / sum(t1), "\n")}
0.35 0.325
0.4 0.3
0.45 0.3075
0.5 0.29
0.55 0.29
0.6 0.3025
0.65 0.3075
0.7 0.315
Error in t1[2, 1] : subscript out of bounds
> max(fitted(myfit))
[1] 0.7384082

(The loop fails once the cutoff exceeds the largest fitted probability, 0.7384, since no observation is then predicted 1 and the table loses its TRUE row.) It looks like we can't do much better than 29%.

Receiver operating characteristic (ROC) curve

The ROC curve is a plot of sensitivity against 1 - specificity. It is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.

- The true positive rate is also known as sensitivity.
- The false positive rate is also known as the fall-out or probability of false alarm, and can be calculated as 1 - specificity.

The ROC curve is thus the sensitivity as a function of the fall-out.
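One point of the ROC curve can be computed from the cutoff-0.5 classification table in the previous example (a Python sketch):

```python
# counts at cutoff 0.5, from the earlier 2x2 table
tp, fn = 30, 97    # actual admits: predicted 1 / predicted 0
tn, fp = 254, 19   # actual non-admits: predicted 0 / predicted 1
sensitivity = tp / (tp + fn)   # true positive rate
fpr = fp / (fp + tn)           # false positive rate = 1 - specificity
print(round(sensitivity, 4), round(fpr, 4))
```

Sweeping the cutoff and collecting these (fpr, sensitivity) pairs traces out the full curve, which is what the R loop below does.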

# ROC curve
p1 <- matrix(0, nrow = 12, ncol = 3)
i = 1
for(p in seq(0.15, .7, .05)){
  t1 <- table(fitted(myfit) > p, ex.data$admit)
  p1[i,] = c(p, (t1[2,2]) / sum(t1[,2]), (t1[1,1]) / sum(t1[,1]))
  i = i + 1
}
plot(1 - p1[,3], p1[,2], type = "o",
     xlab = "1 - specificity/false positive rate",
     ylab = "sensitivity/true positive rate")
# p1[,2] true positive rate
# p1[,3] true negative rate
# 1 - p1[,3] false positive rate

[ROC curve for the admissions model: sensitivity/true positive rate plotted against 1 - specificity/false positive rate.]

Comments:

- The area under the ROC curve gives insight into the predictive ability of the model. If it is equal to 0.5 (an ROC curve along the diagonal, slope = 1), the model can be thought of as predicting at random. Values close to 1 indicate that the model has good predictive ability.
- The ROC curve can also be thought of as a plot of the power as a function of the Type I error of the decision rule (when the performance is calculated from just a sample of the population, it can be thought of as an estimator of these quantities).

Somers' Dxy rank correlation

A related measure is Somers' $D_{xy}$ rank correlation between predicted probabilities and observed outcomes. It is given by

$$D_{xy} = 2(c - 0.5)$$

where $c$ is the area under the ROC curve. When $D_{xy} = 0$, the model is making random predictions; when $D_{xy} = 1$, the model discriminates perfectly.
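The formula is a one-line computation (a Python check using the AUC value $c = 0.6928413$ from the somers2() output for the admissions model):

```python
c = 0.6928413            # area under the ROC curve (from somers2 output)
dxy = 2 * (c - 0.5)      # Somers' Dxy rank correlation
print(round(dxy, 7))     # 0.3856826
```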

> library(Hmisc)
> somers2(fitted(myfit), ex.data$admit)
         C        Dxy          n    Missing
 0.6928413  0.3856826 400.0000000  0.0000000

The area under the ROC curve is 0.6928413, and $D_{xy} = 0.3856826$.