Generalized linear models


Generalized linear models. Outline for today:
- What is a generalized linear model
- Linear predictors and link functions
- Example: estimate a proportion
- Analysis of deviance
- Example: fit dose-response data using logistic regression
- Example: fit count data using a log-linear model
- Advantages and assumptions of glm
- Quasi-likelihood models when there is excessive variance

Review: what is a (general) linear model? A model of the following form:

Y = β0 + β1X1 + β2X2 + ... + error

- Y is the response variable
- The X's are the explanatory variables
- The β's are the parameters of the linear equation
- The errors are normally distributed with equal variance at all values of the X variables

It uses least squares to fit the model to data and estimate the parameters (lm in R). The predicted Y, symbolized here by µ, is modeled as

µ = β0 + β1X1 + β2X2 + ...

What is a generalized linear model? A model whose predicted values are of the form

g(µ) = β0 + β1X1 + β2X2 + ...

The model still includes a linear predictor (to the right of the "="). g(µ) is the link function, and a wide diversity of link functions is accommodated. Non-normal distributions of errors are OK (specified by the error family), and unequal error variances are OK (also specified by the family). It uses maximum likelihood to estimate parameters and log-likelihood ratio tests to test them (glm in R).

The two most common link functions:

Log, used for count data: η = log(µ). The inverse function is µ = e^η.

Logistic or logit, used for binary data: η = log(µ / (1 − µ)). The inverse function is µ = e^η / (1 + e^η).

In all cases log refers to the natural logarithm (base e).
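Both link/inverse pairs can be checked numerically in R. A minimal sketch (the function names logit and inv_logit are my own for illustration; R's built-ins are qlogis and plogis):

```r
# Logit link and its inverse, written out explicitly
logit <- function(mu) log(mu / (1 - mu))              # equivalent to qlogis(mu)
inv_logit <- function(eta) exp(eta) / (1 + exp(eta))  # equivalent to plogis(eta)

inv_logit(logit(0.72))  # round trip recovers 0.72
exp(log(3.6))           # log link round trip recovers 3.6
```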

Example 1: Estimate a proportion Example used previously in Likelihood lecture: The wasp, Trichogramma brassicae, rides on female cabbage white butterflies, Pieris brassicae. When a butterfly lays her eggs on a cabbage, the wasp climbs down and parasitizes the freshly laid eggs. Fatouros et al. (2005) carried out trials to determine whether the wasps can distinguish mated female butterflies from unmated females. In each trial a single wasp was presented with two female cabbage white butterflies, one a virgin female, the other recently mated. Y = 23 of 32 wasps tested chose the mated female. What is the proportion p of wasps in the population choosing the mated female?

Number of wasps choosing mated female fits a binomial distribution Under random sampling, the number of successes in n trials has a binomial distribution, with p being the probability of success in any one trial. To model these data, let success be wasp chose mated butterfly Y = 23 successes n = 32 trials What is p?

Previously we used maximum likelihood to estimate p. The maximum likelihood estimate is the value of the parameter having the highest (log-)likelihood, given the data:

lnL[p | 23 chose mated] = ln[(32 choose 23)] + 23 ln[p] + 9 ln[1 − p]
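This log-likelihood can be evaluated directly in R with dbinom, which bundles the binomial coefficient and both log terms; a quick sketch:

```r
# Binomial log-likelihood for p, given 23 successes in 32 trials
lnL <- function(p) dbinom(23, size = 32, prob = p, log = TRUE)

lnL(23/32)  # log-likelihood at p = 23/32
lnL(0.5)    # log-likelihood at p = 0.5
optimize(lnL, interval = c(0.01, 0.99), maximum = TRUE)$maximum  # ~0.719
```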

Consider first how we could use lm to estimate a population mean: estimate the mean of a variable using a random sample from the population. If the data are approximately normally distributed, we could fit a linear model, µ = β, where µ is the mean in the population. In R, use lm to estimate it:

z <- lm(y ~ 1)
summary(z)

A population proportion is also a mean. Each wasp has a measurement of 1 or 0 (for number of "successes"). The mean of these 0's and 1's is the proportion we wish to estimate. But the data are not normally distributed: they are binary. Instead of a linear model, we must use a generalized linear model with a link function appropriate for binary data: g(µ) = β

Use glm to estimate the proportion. We need a link function appropriate for binary data:

logit(µ) = β, or equivalently, log(µ / (1 − µ)) = β

where µ is the population proportion (i.e., p, but let's stick with µ here to be consistent with glm notation).

Use glm to estimate a proportion:

z <- glm(y ~ 1, family = binomial(link = "logit"))

The formula structure is the same as it would be using lm. Here, family specifies the error distribution and the link function.
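The data object y itself isn't shown on the slide; a hypothetical reconstruction consistent with 23 successes in 32 trials would be:

```r
# Hypothetical data vector: 23 wasps chose the mated female (1), 9 did not (0)
y <- c(rep(1, 23), rep(0, 9))
z <- glm(y ~ 1, family = binomial(link = "logit"))
coef(z)  # (Intercept) ~ 0.9383, i.e. logit(23/32)
```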

Use glm to estimate a proportion.

summary(z)
Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   0.9383     0.3932   2.386    0.017 *

0.9383 is the estimate of β (the mean on the logit scale). Convert back to the ordinary scale (plug into the inverse equation) to get the estimate of the population proportion:

µ̂ = e^β̂ / (1 + e^β̂) = e^0.9383 / (1 + e^0.9383) = 0.719

This is the ML estimate of the population proportion. [In the workshop we'll obtain likelihood-based confidence intervals too.]
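The back-transformation is one line of R (plogis is R's built-in inverse logit):

```r
beta_hat <- 0.9383
exp(beta_hat) / (1 + exp(beta_hat))  # ~0.719, the estimated proportion
plogis(beta_hat)                     # same value via the built-in inverse logit
```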

Use glm to test a null hypothesis about a proportion.

summary(z)
Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   0.9383     0.3932   2.386    0.017 *

The z-value (a.k.a. Wald statistic) and P-value test the null hypothesis that β = β0 = 0. This is the same as a test of the null hypothesis that the true (population) proportion is 0.5, because

e^β0 / (1 + e^β0) = e^0 / (1 + e^0) = 0.5

Agresti (2002, Categorical Data Analysis, 2nd ed., Wiley) says that for small to moderate sample sizes, the Wald test is usually less reliable than the log-likelihood ratio test. So I have crossed it out!

Use glm to test a null hypothesis about a proportion. I calculated the log-likelihood ratio test for these data in the Likelihood lecture. Here we'll use glm to accomplish the same task.

Full model (β is estimated from the data):
z <- glm(y ~ 1, family = binomial(link = "logit"))

Reduced model (β = 0):
z0 <- glm(y ~ -1, family = binomial(link = "logit"))

Use glm to test a null hypothesis about a proportion.

anova(z0, z, test = "Chi")   # Analysis of deviance
Model 1: y ~ -1   # Reduced model
Model 2: y ~ 1    # Full model
  Resid. Df Resid. Dev Df Deviance P(>|Chi|)
1        32     44.361
2        31     38.024  1    6.337   0.01182 *

The deviance is the log-likelihood ratio statistic (G-statistic); it has an approximate χ² distribution under the null hypothesis. Looking back to the Likelihood lecture, we see that the result is the same:

L[0.72 | 23 chose mated female] = 0.1553
L[0.50 | 23 chose mated female] = 0.00653
G = 2 ln(0.1553 / 0.00653) = 6.336

This is because glm is based on maximum likelihood.

[break]

Example 2: Logistic regression. One of the most common uses of generalized linear models. Data: 72 rhesus monkeys (Macacus rhesus) exposed for 1 minute to aerosolized preparations of anthrax (Bacillus anthracis). We want to estimate the relationship between dose and probability of death.

Logistic regression. Measurements of individuals are 1 (dead) or 0 (alive). A linear regression model is not appropriate because:
- For each X, the Y observations are binary, not normal
- For each X, the variance of Y is not constant
- A linear relationship is not bounded between 0 and 1
- 0-1 data can't simply be transformed

The generalized linear model: g(µ) = β0 + β1X. µ is the probability of death, which depends on concentration X. g(µ) is the link function. The linear predictor (right side of the equation) is like an ordinary linear regression, with intercept β0 and slope β1. Logistic regression uses the logit link function:

z <- glm(mortality ~ concentration, family = binomial(link = "logit"))

The generalized linear model: logit(µ) = β0 + β1X. glm uses maximum likelihood: the method finds those values of β0 and β1 for which the data have the maximum probability of occurring. These are the maximum likelihood estimates. There is no formula for the solution; glm uses an iterative procedure to find the maximum likelihood estimates.

The generalized linear model.

z <- glm(mortality ~ concentration, family = binomial(link = "logit"))
summary(z)
               Estimate Std. Error z value Pr(>|z|)
(Intercept)    -1.74452    0.69206  -2.521  0.01171 *
concentration   0.03643    0.01119   3.255  0.00113 **
Number of Fisher Scoring iterations: 5

The numbers in red are the estimates of β0 and β1 (intercept and slope) on the logit scale. "Number of Fisher Scoring iterations" in the output refers to the number of iterations used before the algorithm used by glm converged on the maximum likelihood solution.

The generalized linear model. Use predict(z) to obtain predicted values on the logit scale:

η̂ = −1.7445 + 0.03643X

Use fitted(z) to obtain predicted values on the original scale:

µ̂ = e^η̂ / (1 + e^η̂)

We can calculate (very approximate) confidence bands as in ordinary regression.
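As a sketch of this back-transformation using the printed coefficients (the concentration X = 40 is an arbitrary value chosen here for illustration, not a dose from the data):

```r
# Linear predictor at a hypothetical concentration of 40
eta_hat <- -1.74452 + 0.03643 * 40
mu_hat  <- exp(eta_hat) / (1 + exp(eta_hat))  # same as plogis(eta_hat)
mu_hat  # predicted probability of death, ~0.43
```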

The generalized linear model. The parameter estimates from the model fit can be used to estimate LD50, the concentration at which 50% of individuals are expected to die:

LD50 = −intercept/slope = −(−1.7445)/0.03643 = 47.88

library(MASS)
dose.p(z, p = 0.50)
             Dose       SE
p = 0.5: 47.8805 8.168823
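The LD50 arithmetic can be checked by hand from the printed coefficients (small differences from dose.p come only from rounding of the printed values):

```r
b0 <- -1.74452  # intercept from summary(z)
b1 <-  0.03643  # slope from summary(z)
-b0 / b1        # LD50 ~47.89; dose.p with unrounded coefficients gives 47.8805
```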

Residual plots in glm. Residuals are calculated on the transformed scale (here, the logit). 1. Deviance residuals: plot(z) or plot(fitted(z), residuals(z)). The discrete nature of the data introduces stripes. These aren't the real residuals of the fitted glm model. Deviance residuals identify the contribution of each point to the total deviance, which measures the overall fit of the model to the data.

Residual plots in glm. 2. Pearson residuals: plot(fitted(z), residuals(z, type = "pearson")). These aren't the real residuals either. They are a rescaled version of the real "working" residuals, corrected for their associated weights.

Leverage plot in glm Leverage estimates the effect that each point has on the model parameter estimates. Obtained as hatvalues(z) Leverage can range between 1/n (here, 0.013) and 1. The graph here shows that even though one of the data points has a large residual, it does not have large leverage. plot(z)

Advantages of generalized linear models:
- More flexible than simply transforming variables. (A given transformation of the raw data may not achieve both linearity and homogeneity of variance.)
- Yields more familiar measures of the response variable than data transformations do. (E.g., how does one interpret an arcsine square root?)
- Avoids the problems associated with transforming 0's and 1's. (For example, log(0), logit(0) and logit(1) can't be computed.)
- Retains the same analysis framework as linear models.

Assumptions of generalized linear models:
- Statistical independence of data points.
- Correct specification of the link function for the data.
- The variances of the residuals correspond to those expected from the link function.
Later, I will show a method for dealing with excessive variance.

Example 3: Analyzing count data with log-linear regression. Number of offspring fledged by female song sparrows on Mandarte Island, BC. http://commons.wikimedia.org/wiki/File:Song_Sparrow-27527-2.jpg Data are discrete counts. Variance increases with the mean. A Poisson distribution might be appropriate. Use the log link function.

The generalized linear model. Log-linear regression (a.k.a. Poisson regression) uses the log link function:

log(µ) = β0 + β1X1 + β2X2 + ...

µ is the mean number of offspring, which depends on year. Year is a categorical variable (a factor in R). The linear predictor (right side of the equation) is like an ordinary ANOVA (modeled in R using dummy variables, same as with lm).

The generalized linear model.

z <- glm(noffspring ~ year, family = poisson(link = "log"))
summary(z)
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  0.24116    0.26726   0.902 0.366872
year1976     1.03977    0.31497   3.301 0.000963 ***
year1977     0.96665    0.28796   3.357 0.000788 ***
year1978     0.97700    0.28013   3.488 0.000487 ***
year1979    -0.03572    0.29277  -0.122 0.902898
(Dispersion parameter for poisson family taken to be 1)

The numbers in red are the parameter estimates on the log scale. As when analyzing a single categorical variable, the intercept refers to the mean of the first group (1975), and the remaining coefficients are differences between each given group (year) and the first group. A dispersion parameter of 1 assumes that variance = mean.

The generalized linear model. Predicted values on the log scale: predict(z)

η̂ = 0.24116 + 1.03977(year == 1976) + ...

Predicted values on the original scale: fitted.values(z)

µ̂ = e^η̂
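Back-transforming by hand from the printed coefficients confirms the direction of the inverse link (exp, not log); the values match the group means reported below:

```r
exp(0.24116)            # ~1.2727, mean offspring in 1975 (the reference year)
exp(0.24116 + 1.03977)  # ~3.6000, mean offspring in 1976
```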

The generalized linear model. The analysis of deviance table gives the log-likelihood ratio test of the null hypothesis that there is no difference among years in mean number of offspring.

anova(z, test = "Chi")
Terms added sequentially (first to last)
     Df Deviance Resid. Df Resid. Dev  P(>|Chi|)
NULL                   145    288.656
year  4   75.575       141    213.081 1.506e-15 ***

The generalized linear model. As with lm, we can also fit the means model to get estimates of the group means (here on the log scale).

z1 <- glm(noffspring ~ year - 1, family = poisson(link = "log"))
summary(z1)
         Estimate Std. Error z value Pr(>|z|)
year1975  0.24116    0.26726   0.902   0.3669
year1976  1.28093    0.16667   7.686 1.52e-14 ***
year1977  1.20781    0.10721  11.266  < 2e-16 ***
year1978  1.21816    0.08392  14.516  < 2e-16 ***
year1979  0.20544    0.11952   1.719   0.0856

Back-transform to get estimates of the means on the original scale:
exp(coef(summary(z1))[,1])
year1975 year1976 year1977 year1978 year1979
1.272727 3.600000 3.346154 3.380952 1.228070

Likelihood-based confidence intervals. We can also get likelihood-based confidence intervals for the group means (the confint method for glm fits is part of the MASS library):

library(MASS)
confint(z1)   # log scale
              2.5 %    97.5 %
year1975 -0.33273518 0.7228995
year1976  0.93546683 1.5907293
year1977  0.99005130 1.4108270
year1978  1.04904048 1.3782407
year1979 -0.03833633 0.4308985

exp(confint(z1))   # back-transformed to original scale
             2.5 %   97.5 %
year1975 0.7169600 2.060399
year1976 2.5484028 4.907326
year1977 2.6913725 4.099344
year1978 2.8549105 3.967915
year1979 0.9623892 1.538639

Evaluating assumptions of the glm fit Residual plot (deviance residuals) Can look strange because of discrete data. Variances look roughly homogeneous.

Evaluating assumptions of the glm fit. Do the variances of the residuals correspond to those expected from the link function? The log link assumes that the Y are Poisson-distributed at each X. A central property of the Poisson distribution is that the variance and mean are equal (dispersion parameter = 1).

tapply(noffspring, year, mean)
tapply(noffspring, year, var)
    1975     1976     1977     1978     1979
1.272727 3.600000 3.346154 3.380952 1.228070   (means)
1.618182 6.044444 3.835385 4.680604 1.322055   (variances)

The variances are slightly, but not alarmingly, larger than the means. (Note: the binomial family also assumes a strict mean-variance relationship, specified by the binomial distribution (dispersion parameter = 1).)
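A quick way to see the mean-variance comparison is to take the ratios directly; the numbers here are copied from the tapply output above:

```r
# Per-year means and variances, as printed by tapply above
means <- c(1.272727, 3.600000, 3.346154, 3.380952, 1.228070)
vars  <- c(1.618182, 6.044444, 3.835385, 4.680604, 1.322055)
round(vars / means, 2)  # 1.27 1.68 1.15 1.38 1.08 -- all modestly above 1
```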

Modeling excessive variance. Finding excessive variance ("overdispersion") is typical when analyzing count data. Excessive variance occurs because other variables not included in the model affect the response variable. In the workshop we will analyze an example where the problem is more severe than in the case of the song sparrow data here.

Modeling excessive variance. Excessive variance can be accommodated in glm by using a different family, one that incorporates a dispersion parameter (which must also be estimated). A dispersion parameter >> 1 implies excessive variance. The procedure is based on the relationship between the mean and the variance rather than on an explicit probability distribution for the data. In the case of count data, variance = (dispersion parameter) × (mean). The method generates quasi-likelihood estimates that behave like maximum likelihood estimates.

Modeling excessive variance. Let's try it with the song sparrow data, using the means model:

z2 <- glm(noffspring ~ year - 1, family = quasipoisson, data = x)
summary(z2)
         Estimate Std. Error t value Pr(>|t|)
year1975   0.2412     0.2965   0.813    0.417
year1976   1.2809     0.1849   6.928 1.40e-10 ***
year1977   1.2078     0.1189  10.155  < 2e-16 ***
year1978   1.2182     0.0931  13.085  < 2e-16 ***
year1979   0.2054     0.1326   1.549    0.124
Dispersion parameter for quasipoisson family taken to be 1.230689

The point estimates are identical to those obtained earlier, but the standard errors (and resulting confidence intervals) are wider. The dispersion parameter is reasonably close to 1 for these data, so the original approach using the poisson family is probably OK.
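The wider standard errors are exactly the Poisson ones inflated by the square root of the estimated dispersion parameter; checking against the printed numbers from the earlier means-model fit:

```r
# Poisson SEs from the earlier means-model fit, times sqrt(dispersion)
poisson_se <- c(0.26726, 0.16667, 0.10721, 0.08392, 0.11952)
round(sqrt(1.230689) * poisson_se, 4)  # 0.2965 0.1849 0.1189 0.0931 0.1326
```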

Other uses of generalized linear models. We have used glm to model binary frequency data and count data. The method is also commonly used to model r × c (and higher-order) contingency tables, in which cell counts depend on two (or more) categorical variables, each of which may have more than two categories or groups. Finally, glm can handle data having probability distributions other than the ones used in my examples, including normal and gamma distributions.

Discussion paper for next week: Whittingham et al (2006) Why do we still use stepwise modelling? Download from assignments tab on course web site. Presenters: Allison &.. Moderators:.. &..