Introduction to Generalized Linear Models


Introduction to Generalized Linear Models
Edps/Psych/Soc 589
Carolyn J. Anderson
Department of Educational Psychology
© Board of Trustees, University of Illinois
Fall 2018

Outline
Introduction (motivation and history).
Review of ordinary linear regression.
Components of a GLM:
1. Random component.
2. Systematic component.
3. Link function.
Natural exponential family (technical).
Normal linear regression re-visited.
GLMs for binary data (introduction). Primary example: High School & Beyond.
1. Linear model for π.
2. Cumulative distribution functions (alternative links).
3. Logistic regression.
4. Probit models.
C.J. Anderson (Illinois) Introduction to GLM Fall 2018

Outline (continued)
GLMs for count data:
1. Poisson regression for counts. Example: number of deaths due to AIDS.
2. Poisson regression for rates. Example: number of violent incidents.
Inference and model checking:
1. Wald, likelihood-ratio, & score tests.
2. Checking Poisson regression.
3. Residuals.
4. Confidence intervals for fitted values (means).
5. Overdispersion.
Fitting GLMs (a little technical):
1. Newton-Raphson algorithm / Fisher scoring.
2. Statistical inference & the likelihood function.
3. Deviance.
Summary

Introduction to Generalized Linear Modeling
Benefits of a model that fits well:
The structural form of the model describes the patterns of interactions or associations in the data.
Inference for the model parameters provides a way to evaluate which explanatory variable(s) are related to the response variable(s) while (statistically) controlling for other variables.
Estimated model parameters provide measures of the strength and (statistical) importance of effects.
A model's predicted values smooth the data; they provide good estimates of the mean of the response variable.

Advantages of a Modeling Approach Over Significance Testing
Models can handle more complicated situations. For example, the Breslow-Day test is limited to 2 × 2 × K tables and does not provide estimates of common odds ratios for tables larger than 2 × 2. Loglinear models can be used to test for homogeneous association in I × J × K (or higher-way) tables and do provide estimates of common odds ratios.
With models, the focus is on estimating parameters that describe relationships between/among variables.

A Little History
From Lindsey (who summarizes material from McCullagh & Nelder, who in turn drew on Stigler):
Multiple linear regression: normal distribution & identity link (Legendre, Gauss: early 19th century).
ANOVA: normal distribution & identity link (Fisher: 1920s to 1935).
Likelihood function: a general approach to inference about any statistical model (Fisher, 1922).
Dilution assays: binomial distribution with complementary log-log link (Fisher, 1922).
Exponential family: class of distributions with sufficient statistics for parameters (Fisher, 1934).
Probit analysis: binomial distribution & probit link (Bliss, 1935).

A Little History (continued)
Logit for proportions: binomial distribution & logit link (Berkson, 1944; Dyke & Patterson, 1952).
Item analysis: Bernoulli distribution & logit link (Rasch, 1960).
Log-linear models for counts: Poisson distribution & log link (Birch, 1963).
Regressions for survival data: exponential distribution & reciprocal or log link (Feigl & Zelen, 1965; Zippin & Armitage, 1966; Glasser, 1967).
Inverse polynomials: gamma distribution & reciprocal link (Nelder, 1966).
Nelder & Wedderburn (1972) provided the unification. They showed that:
All the previously mentioned models are special cases of a general model, the generalized linear model.
The MLEs for all these models can be obtained using the same algorithm.
All of the models listed have distributions in the exponential dispersion family.

Software Developments
Computer software development in the 1970s: GLIM, short for Generalized Linear Interactive Modelling.
Any statistician or researcher could fit a larger class of models (not restricted to normal).
Growing recognition of the likelihood function as central to all statistical inference.
Allowed experimental development of many new methods & uses for which it was never originally imagined.
Later: PROC GENMOD (GENeralized linear MODels) in SAS; the glm() function in R.

Limitations
The systematic component must be a linear function of the parameters.
Responses must be independent.
There are ways around these limitations by moving to slightly more general models and using more general software (e.g., SAS PROC NLMIXED, GLIMMIX, NLP; GAMs). R has specialized packages for some of the models that are not linear and/or have dependent responses (e.g., packages lme4, logmult, gam).

Review of Ordinary Linear Regression
Linear (in the parameters) model for a continuous/numerical response variable (Y) and continuous and/or discrete explanatory variables (X's):
Y_i = α + β_1 x_1i + β_2 x_2i + e_i, where e_i ~ N(0, σ²) and independent.
This linear model includes:
Multiple regression
ANOVA
ANCOVA

Simple Linear Regression
Y_i = α + β x_i + e_i, where e_i ~ N(0, σ²) and independent.
We consider X as fixed, so Y_i ~ N(µ(x_i), σ²).
In regression, the focus is on the mean or expected value of Y, i.e.,
E(Y_i) = E(α + β x_i + e_i) = α + β x_i + E(e_i) = α + β x_i
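The least-squares estimates behind this model have a simple closed form. A minimal pure-Python sketch (the function name and toy data are mine, not from the lecture); with noise-free data generated from E(Y) = 2 + 3x, the fit recovers the line exactly:

```python
# Closed-form least-squares estimates for Y_i = alpha + beta * x_i + e_i.
def ols_fit(x, y):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    # beta-hat = S_xy / S_xx ; alpha-hat = ybar - beta-hat * xbar
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    beta = sxy / sxx
    alpha = ybar - beta * xbar
    return alpha, beta

# Data with no error term: the estimates equal the true alpha = 2, beta = 3.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0 + 3.0 * xi for xi in x]
alpha, beta = ols_fit(x, y)
print(alpha, beta)
```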

GLMs Go Beyond Simple Linear Regression
Generalized linear models go beyond this in two major respects:
The response variable(s) can have a distribution other than normal: any distribution within a class of distributions known as the exponential family of distributions.
The relationship between the response (Y) and the explanatory variables need not be the simple (identity) one. For example, instead of E(Y) = α + βx we can allow a transformation g of the mean:
g(E(Y)) = α + βx
These are ideas we'll come back to after we go through an example...

Counts of T4 Cells/mm³ in Blood Samples
From Lindsey (1997), from Altman (1991). The counts are T4 cells/mm³ in blood samples from 20 patients in remission from Hodgkin's disease & 20 other patients in remission from disseminated malignancies:

Hodgkin's:      396  568 1212  171  554 1104  257  435  295  397
                288 1004  431  795 1621 1378  902  958 1283 2415
Non-Hodgkin's:  375  375  752  208  151  116  736  192  315 1252
                657  700  440  771  688  426  410  979  377  503

Is there a difference in cell counts between the two diseases?

What is Meant by "Difference"?
Mean counts? Variability? Overall form of the distribution?
Naive approach: assume a normal distribution and do a t-test (i.e., compute the difference between means and divide by the s.e. of the difference).
More sophisticated approach: assume a Poisson distribution and compute the difference between the logs of the means (i.e., the ratio of means).

Summary of Some Possibilities and Results

                     AIC:           AIC:        Likelihood ratio:    Wald:
Model                No Difference  Difference  diff. in -2 log(L)   estimate/s.e.
Normal                 608.8          606.4        4.5                2.17
Normal, log link       608.8          606.3        4.5                2.04
Gamma                  591.2          587.9        5.2                2.27
Inverse Gaussian       589.9          588.1        3.8                1.87
Poisson              11652.0        10285.3     1368.96              36.52
Negative binomial      591.1          587.9        5.3                2.37

AIC weighs goodness-of-fit & model complexity (smaller is better).
Wald = (parameter / standard error)².
I get slightly different results than published, and between SAS & R. Assumptions?

Considering Assumptions
T4 cells/mm³ and Hodgkin's disease data, continued.
Independence of observations?
Recall that the Poisson distribution has µ = σ².

Sample statistics:
disease       N   Mean     Variance
hodgkin      20   823.20   320792.27
non-hodgkin  20   521.15    85568.77

The homogeneity assumption (mean = variance) is suspect.
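These sample statistics can be reproduced directly from the counts, splitting the flattened list into its two 20-patient columns (the split reproduces the slide's reported means; the variable names are mine):

```python
import statistics

hodgkin = [396, 568, 1212, 171, 554, 1104, 257, 435, 295, 397,
           288, 1004, 431, 795, 1621, 1378, 902, 958, 1283, 2415]
non_hodgkin = [375, 375, 752, 208, 151, 116, 736, 192, 315, 1252,
               657, 700, 440, 771, 688, 426, 410, 979, 377, 503]

for name, data in [("hodgkin", hodgkin), ("non-hodgkin", non_hodgkin)]:
    m = statistics.mean(data)      # sample mean
    v = statistics.variance(data)  # sample variance (n - 1 denominator)
    print(name, round(m, 2), round(v, 2))
# The variances are hundreds of times larger than the means,
# so the Poisson assumption mu = sigma^2 is clearly violated.
```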

Components of Generalized Linear Models
There are 3 components of a generalized linear model (or GLM):
1. Random component: identify the response variable (Y) and specify/assume a probability distribution for it.
2. Systematic component: specify what the explanatory or predictor variables are (e.g., X_1, X_2, etc.). These variables enter in a linear manner:
α + β_1 X_1 + β_2 X_2 + ... + β_k X_k
3. Link: specify the relationship between the mean or expected value of the random component (i.e., E(Y)) and the systematic component.

Components of Simple Linear Regression
Y_i = α + β x_i + e_i
Random component: Y is the response variable and is normally distributed; generally we assume e_i ~ N(0, σ²).
Systematic component: X is the explanatory variable, and the model is linear in the parameters: α + β x_i.
Link: identity,
g(E(Y_i)) = E(Y_i) = α + β x_i
Closer look at each of these components...

Random Component
Let N = sample size and suppose that we have observations Y_1, Y_2, ..., Y_N on our response variable, and that the observations are all independent.
The Y's are discrete variables, where Y is either:
Dichotomous (binary) with a fixed number of trials, e.g., success/failure, correct/incorrect, agree/disagree, academic/non-academic program: Binomial distribution.
Counts, including cells of a contingency table, e.g., number of people who die from AIDS during a given time period, number of times a child tries to take a toy away from another child, number of patents generated by firms: Poisson distribution.

Distributions for Discrete Variables
Thus the two distributions we will primarily be using are:
Binomial
Poisson
With GLMs, you can use any distribution that belongs to the exponential family of distributions. This is a wide class of distributions that have many of the nice properties of the normal distribution. (We'll look at this in a bit more detail later.)

Systematic Component
As in ordinary regression, we will be modelling means. The focus is on the expected value of our response variable,
E(Y) = µ
We want to investigate whether and how µ varies as a function of the levels of our predictor or explanatory variables, the X's.
The systematic component of the model consists of a set of explanatory variables and some linear function of them:
β_0 + β_1 x_1 + β_2 x_2 + β_3 x_3 + ... + β_k x_k
This linear combination of our explanatory variables is referred to as the linear predictor.

Linear Predictor
The restriction to a linear predictor is not all that restrictive. For example:
x_3 = x_1 x_2 (an interaction).
Replace x_1 by x_1² (a curvilinear relationship).
Replace x_2 by log(x_2) (a curvilinear relationship).
These give, e.g.,
β_0 + β_1 x_1² + β_2 log(x_2) + β_3 x_1² log(x_2)
which is still linear in the parameters. This part of the model is very much like what you know with respect to ordinary linear regression.
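"Linear in the parameters" means the predictor is a weighted sum of (possibly transformed) features. A tiny illustration of the expression above (the coefficients and input values are made up):

```python
import math

def linear_predictor(beta, features):
    """eta = beta_0 + beta_1*f_1 + ... + beta_k*f_k (linear in the betas)."""
    return beta[0] + sum(b * f for b, f in zip(beta[1:], features))

x1, x2 = 2.0, math.e
# The feature vector uses transformed inputs: x1^2, log(x2), and their product,
# yet eta is still a linear function of the coefficients.
features = [x1 ** 2, math.log(x2), x1 ** 2 * math.log(x2)]
beta = [1.0, 2.0, 3.0, 4.0]
eta = linear_predictor(beta, features)
print(eta)  # 1 + 2*4 + 3*1 + 4*4 = 28.0
```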

The Link Function
The left-hand side of the equation/model is the random component,
E(Y) = µ
The right-hand side is the systematic component,
α + β_1 x_1 + β_2 x_2 + ... + β_k x_k
We now need to link the two sides. How is µ = E(Y) related to α + β_1 x_1 + β_2 x_2 + ... + β_k x_k?
We do this using a link function g(µ):
g(µ) = α + β_1 x_1 + β_2 x_2 + ... + β_k x_k

More About the Link Function
Important things about g(·):
The function g(·) is monotone: as the systematic part gets larger, µ gets larger (or smaller).
The relationship between E(Y) and the systematic part can be non-linear.
Some common links are:
Identity (ordinary regression, ANOVA, ANCOVA):
E(Y) = α + βx
Log link, often used when Y is nonnegative (i.e., 0 ≤ Y):
log(E(Y)) = log(µ) = α + βx
This yields a loglinear model.
Logit link, often used when 0 ≤ µ ≤ 1 (when the response is dichotomous/binary and we're interested in a probability):
log(µ/(1 − µ)) = α + βx
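These three links and their inverses can be sketched in a few lines (the helper names are mine, not from any particular package). The inverse link is what maps the linear predictor back to a valid mean:

```python
import math

# Each pair is (link g, inverse link g^-1). The link maps the mean mu to the
# linear-predictor scale; the inverse maps eta = alpha + beta*x back to a mean.
identity = (lambda mu: mu,           lambda eta: eta)
log_link = (lambda mu: math.log(mu), lambda eta: math.exp(eta))   # needs mu > 0
logit = (lambda mu: math.log(mu / (1 - mu)),
         lambda eta: 1 / (1 + math.exp(-eta)))                    # needs 0 < mu < 1

# The inverse log link keeps the fitted mean positive even for negative eta ...
print(log_link[1](-3.0))
# ... and the inverse logit keeps the fitted probability inside (0, 1).
print(logit[1](5.0))
```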

General Model Formula for a GLM
g(µ) = α + β_1 x_1 + β_2 x_2 + ... + β_k x_k
The links given on the previous slide and below are special ones, the canonical links (depending on the assumed distribution):

Distribution   Natural Parameter   Canonical Link
Normal         µ                   identity
Poisson        log(µ)              log
Binomial       log(µ/(1 − µ))      logit

Natural Exponential Family of Distributions
One-Parameter Exponential Family
Probability density or mass functions belonging to the natural exponential family have the general form
f(y_i; θ_i) = exp{ a(y_i) b(θ_i) − c(θ_i) + d(y_i) }
where
y_i is an observation (i = 1, ..., N).
θ_i is the parameter of the distribution for observation i, and b(θ_i) is the location parameter (i.e., the mean; other parameters, such as the variance, are often considered nuisance parameters).
a(·), b(·), c(·), and d(·) are all functions.
When a(y_i) = y_i, the density/mass function is in canonical form, and we have
f(y_i; θ_i) = exp{ y_i b(θ_i) − c(θ_i) + d(y_i) }
When in canonical form, the natural parameter is b(θ_i).

According to Webster's Dictionary
Canonical means:
conforming to a general rule.
reduced to the simplest or clearest scheme possible.
the simplest form of a matrix (specifically, the form of a square matrix that has zero off-diagonal elements).
Now for some examples...

The Poisson Distribution
f(y; µ) = µ^y e^(−µ) / y!,  y = 0, 1, 2, ...
where θ = µ (the parameter of the distribution). Now to put this in canonical form:
f(y; µ) = exp( log( µ^y e^(−µ) / y! ) )
        = exp( y log(µ) − µ − log(y!) )
        = exp( y b(µ) − c(µ) + d(y) )
so
a(y) = y.
b(µ) = log(µ), the natural parameter.
c(µ) = µ.
d(y) = −log(y!).
The canonical link for the Poisson distribution: log(·).
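The algebra above is easy to spot-check numerically: the canonical-form expression exp(y log µ − µ − log y!) must reproduce the Poisson mass function exactly. A quick check (the check itself is mine, not from the lecture):

```python
import math

def poisson_pmf(y, mu):
    """Direct form: mu^y * exp(-mu) / y!"""
    return mu ** y * math.exp(-mu) / math.factorial(y)

def poisson_canonical(y, mu):
    """Canonical form exp(y*b(mu) - c(mu) + d(y)) with
    b(mu) = log(mu), c(mu) = mu, d(y) = -log(y!)."""
    return math.exp(y * math.log(mu) - mu - math.log(math.factorial(y)))

mu = 2.5
for y in range(10):
    assert math.isclose(poisson_pmf(y, mu), poisson_canonical(y, mu))
print("canonical form matches the Poisson pmf")
```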

The Binomial Distribution
f(y; π) = (n choose y) π^y (1 − π)^(n − y),  y = 0, 1, ..., n
where
n = number of trials.
π = probability of a success.
π is the parameter of interest, and n is assumed to be known.
We now re-express the distribution as
f(y; π) = exp( log[ (n choose y) π^y (1 − π)^(n − y) ] )
        = exp( y log(π) + (n − y) log(1 − π) + log(n choose y) )
        = exp( y log(π/(1 − π)) + n log(1 − π) + log(n choose y) )
        = exp( y b(π) − c(π) + d(y) )

Canonical Form of the Binomial Distribution
f(y; π) = exp( y log(π/(1 − π)) + n log(1 − π) + log(n choose y) )
        = exp( y b(π) − c(π) + d(y) )
where
a(y) = y.
b(π) = log(π/(1 − π)), the natural parameter.
c(π) = −n log(1 − π).
d(y) = log(n choose y).
The canonical link is the logit: the log of the odds.
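The same numerical spot-check works here: the canonical form exp(y logit(π) + n log(1 − π) + log C(n, y)) must equal the usual binomial mass function (a sketch of mine, using only the standard library):

```python
import math

def binom_pmf(y, n, p):
    """Direct form: C(n, y) * p^y * (1 - p)^(n - y)."""
    return math.comb(n, y) * p ** y * (1 - p) ** (n - y)

def binom_canonical(y, n, p):
    """Canonical form exp(y*b(p) - c(p) + d(y)) with
    b(p) = log(p/(1-p)), c(p) = -n*log(1-p), d(y) = log C(n, y)."""
    return math.exp(y * math.log(p / (1 - p))
                    + n * math.log(1 - p)
                    + math.log(math.comb(n, y)))

n, p = 10, 0.3
for y in range(n + 1):
    assert math.isclose(binom_pmf(y, n, p), binom_canonical(y, n, p))
print("canonical form matches the binomial pmf")
```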

Exponential Dispersion Family
A generalization of the one-parameter exponential family that includes a constant scale parameter φ. The canonical form of the exponential dispersion family:
f(y_i; θ_i, φ) = exp{ [y_i b(θ_i) − c(θ_i)] / r_i(φ) + d(y_i, φ) }
where r_i(φ) is a function of the dispersion parameter.
Notes:
For the Poisson and binomial, r_i(φ) = 1.
If φ is known and r_i(φ) = r(φ), then we are back to the one-parameter exponential family.
With this generalization...

Normal Distribution
f(y; µ, σ²) = (2πσ²)^(−1/2) exp( −(y − µ)² / (2σ²) )
θ (the parameter of the distribution) is µ, the mean. The variance σ² is considered a nuisance parameter.
Putting f(y; µ, σ²) into its canonical form:
f(y; µ, σ²) = exp( log( (2πσ²)^(−1/2) ) ) exp( −(y − µ)² / (2σ²) )
            = exp( −(y − µ)² / (2σ²) − (1/2) log(2πσ²) )
            = exp( [yµ − µ²/2] / σ² − y²/(2σ²) − (1/2) log(2πσ²) )
            = exp( [y b(µ) − c(µ)] / r(σ²) + d(y, σ²) )

The Canonical Form of the Normal Distribution
f(y; µ, σ²) = exp( [yµ − µ²/2] / σ² − y²/(2σ²) − (1/2) log(2πσ²) )
            = exp( [y b(µ) − c(µ)] / r(σ²) + d(y, σ²) )
where
a(y) = y.
b(µ) = µ.
c(µ) = µ²/2.
d(y, σ²) = −y²/(2σ²) − (1/2) log(2πσ²).
r(σ²) = σ².
So b(µ) = µ is the natural parameter, and the canonical link is the identity.
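As with the Poisson and binomial, the rearranged exponential-dispersion form can be verified to match the usual normal density at any point (the check and its names are mine):

```python
import math

def normal_pdf(y, mu, sigma2):
    """Direct form: (2*pi*sigma^2)^(-1/2) * exp(-(y - mu)^2 / (2*sigma^2))."""
    return math.exp(-(y - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def normal_dispersion_form(y, mu, sigma2):
    """Dispersion-family form:
    exp( (y*mu - mu^2/2)/sigma^2 - y^2/(2*sigma^2) - 0.5*log(2*pi*sigma^2) )."""
    return math.exp((y * mu - mu ** 2 / 2) / sigma2
                    - y ** 2 / (2 * sigma2)
                    - 0.5 * math.log(2 * math.pi * sigma2))

# Expanding -(y - mu)^2 / (2*sigma^2) term by term gives exactly the
# (y*mu - mu^2/2)/sigma^2 - y^2/(2*sigma^2) decomposition above.
for y in [-1.0, 0.0, 0.5, 2.3]:
    assert math.isclose(normal_pdf(y, 1.2, 0.7), normal_dispersion_form(y, 1.2, 0.7))
print("dispersion form matches the normal density")
```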

The Normal GLM: Ordinary Linear Regression
Generalized linear models go beyond ordinary linear regression in two ways:
1. The random component can be something other than normal.
2. We can model a function of the mean.
GLMs have a definite advantage over the traditional way of analyzing non-normal responses (Y). The traditional way to handle non-normal responses:
1. Transform your data so that responses are approximately normal with constant variance.
2. Use least squares.
Transforming to normality with constant variance very rarely works...

Problems with the Traditional Approach
A transformation that produces constant variance may not yield a normally distributed response:
Counts that have a Poisson distribution, where E(Y) = µ and Var(Y) = µ.
Binomially distributed responses, where E(Y) = nπ and Var(Y) = nπ(1 − π).
Linear models often fit discrete data very badly: they can yield predicted values of µ that are outside the range of possible values for Y.
Consider counts that have a Poisson distribution, where Y ≥ 0.
Consider binomially distributed responses, where 0 ≤ π ≤ 1.
Linear models can yield negative predictions.

Advantages of GLMs over Traditional Regression
You don't have to transform Y to normality. The choice of link is separate from the choice of random component.
If the link produces additive effects, then we don't need constant variance. (I'll show an example of this next week.)
The models are fit using maximum likelihood; thus the estimators have optimal properties.
Next we'll talk about GLMs for:
1. Dichotomous (binary) data: linear, logit, probit, and logistic regression models (introduced now, with much more detail later).
2. Poisson regression for count data: very similar to the regression you are familiar with, but with a twist.