
Chapter 15. Censored Data Models

Suppose you are given a small data set from a call center on how long the customers stayed on the phone, until either hanging up or being answered. The data set looks like this:

Customer   Time Waiting (Minutes)   Hung Up Before Answer?
 1          4.5                      Yes
 2          3.9                      No
 3          1.0                      No
 4          2.3                      No
 5          4.8                      Yes
 6          0.1                      No
 7          0.3                      Yes
 8          2.0                      Yes
 9          0.8                      No
10          3.0                      Yes
11          0.3                      No
12          2.1                      Yes

Here is the data set in R code.

time   = c(4.5, 3.9, 1.0, 2.3, 4.8, 0.1, 0.3, 2.0, 0.8, 3.0, 0.3, 2.1)
answer = c(  0,   1,   1,   1,   0,   1,   0,   0,   1,   0,   1,   0)

Notice that the answer variable has "Hung up the phone" coded as 0 and "Was answered" coded as 1. This variable is an indicator (0/1) variable that indicates whether the observation was directly observed (answer = 1) or censored (answer = 0). A censored observation is one that is not directly observed: You only know that the observation lies in some known range of possible values. In the call center data above, the actual time to answer is not observed for those customers who hung up the phone: All you know is that the time to answer had to be greater than the time at which they hung up the phone. Such data are called right censored.

Now, suppose you wish to estimate the mean waiting time until being answered by customer support. How should you treat the censored cases, where the customer hung up before getting answered? If you ignore the censoring and use all the data in the Time Waiting column to estimate the average, you will get 2.091667 minutes (mean(time)) as an estimate. But this is clearly biased low, because the time it would have taken customer 1 to be served is not 4.5 minutes; it is some unknown number greater than 4.5 minutes. Similarly, the time it would have taken customer 7 to be served is some unknown number greater than 0.3 minutes, and so on.

You might consider simply deleting all the censored cases, getting an average of 1.4 (mean(time[answer==1])) for the 6 remaining cases where the customer was answered. But this may give an even more biased result, because typically, people hang up when they have been on the phone too long. Not always, of course: someone may have to run to the bathroom or something, perhaps like customer 7. But as long as there is a general tendency for people who have been on the phone too long with no answer to hang up, which seems reasonable here, then deleting cases where people hung up will bias the resulting estimate of the mean.

What should you do? For the people who hung up, you need a reasonable way to use their data. Once again, you can use maximum likelihood to solve this dilemma. To introduce the ideas, see Figure 15.0.1, which shows the (hypothetical) exponential distribution with (hypothetical) mean µ = 4.0. If this were the true distribution of call times until actual answer, then you can assume that the actual time for customer 1 to be answered will be some number that is randomly sampled from the curve above 4.5.

R code for Figure 15.0.1

time = seq(0, 15, .01)
pdf.time = dexp(time, 1/4)    # The parameter is lambda = 1/mu
plot(time, pdf.time, type="l", yaxs="i", ylim = c(0, 1.2*max(pdf.time)),
     xaxs="i", xlim = c(0,15.5))
points(c(4.5,4.5), c(0, dexp(4.5,1/4)), type="l")
points(c(3.9,3.9), c(0, dexp(3.9,1/4)), type="l", lty=2)
x = seq(4.5, 15, .1)
y = dexp(x, 1/4)
col1 <- rgb(.2,.2,.2,.5)
polygon(c(x,rev(x)), c(rep(0,length(y)), rev(y)), col=col1, border=NA)

Figure 15.0.1. The (hypothetical) exponential distribution with (hypothetical) mean 4.0. Customer 1 hung up after waiting 4.5 minutes, so his or her true time is some number under the shaded area, and that area is his or her contribution to the likelihood function. Customer 2 was answered at 3.9 minutes, so that customer's contribution to the likelihood function is the height of the curve atop the dashed line.

In Figure 15.0.1, customer 1's actual response, if he or she was eventually answered, is some number to the right of 4.5; hence, the probability associated with this customer's true response is the area under the curve to the right of 4.5.

For customers who answered the phone, the actual density, rather than the area under the density, is the contribution to the likelihood function.[1] Here is how you construct the likelihood function in this case:

Obs   Time (Y)   Answer (Censoring Indicator)   Contribution to Likelihood Function
 1     4.5        0                              Pr(Y > 4.5 | θ)
 2     3.9        1                              p(3.9 | θ)
 3     1.0        1                              p(1.0 | θ)
 4     2.3        1                              p(2.3 | θ)
 5     4.8        0                              Pr(Y > 4.8 | θ)
 ...
12     2.1        0                              Pr(Y > 2.1 | θ)

The likelihood function is then

L(θ | data) = Pr(Y > 4.5 | θ) p(3.9 | θ) p(1.0 | θ) p(2.3 | θ) Pr(Y > 4.8 | θ) ... Pr(Y > 2.1 | θ).

There are infinitely many choices for p(y | θ); popular ones include Weibull, exponential, Gaussian (aka normal), logistic, lognormal, and loglogistic. The (hypothetical) exponential distribution shown in Figure 15.0.1 has density

p(y | λ) = λ e^(-λy), for y > 0,

where the parameter λ is the reciprocal of the mean (λ = 1/µ). The contribution to the likelihood of an observation censored at y is

Pr(Y > y | λ) = ∫_y^∞ λ e^(-λt) dt = e^(-λy).

In general, the function S(y) = Pr(Y > y) is called the survivor function, because if the Y variable is time lived, S(y) is the probability that a person survives y (years, perhaps). A right censored value at 4.5 would indicate that the person survived at least 4.5 years, and then the survivor function, S(4.5), gives his or her contribution to the likelihood.

[1] While it may seem like an apples-to-oranges comparison to have some contributions to the likelihood be probability values (areas under the curve), and others be density values, which are not probabilities, the resolution is that the density value times Δ is the approximate probability within a ±Δ/2 range of the observed value. You could let the observed (non-censored) data contribute Δ·p(y | θ) to the likelihood function, and then all of the contributions would be probabilities. But Δ is just a constant, like 0.001, and therefore will not affect the maximum likelihood estimate of θ, so it makes sense just to use the actual density value in these cases.

Thus, the likelihood function for the call center data when assuming an exponential distribution is given by

L(λ | data) = e^(-λ(4.5)) · λe^(-λ(3.9)) · λe^(-λ(1.0)) · λe^(-λ(2.3)) · e^(-λ(4.8)) · ... · e^(-λ(2.1)).

To estimate the parameter λ you can use the maxLik function:

loglik <- function(lambda) {
   answer*log(dexp(time, lambda)) + (1-answer)*log(1-pexp(time, lambda))
}
library(maxLik)
## The starting value is just a good "guess" of lambda.
results <- maxLik(loglik, start=c(.5))
summary(results)

The results are:

--------------------------------------------
Maximum Likelihood estimation
Newton-Raphson maximisation, 4 iterations
Return code 1: gradient close to zero
Log-Likelihood: -14.58665
1  free parameters
Estimates:
     Estimate Std. error t value Pr(> t)
[1,]  0.23904    0.09759   2.449  0.0143 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
--------------------------------------------

The maximum likelihood estimate of the parameter λ of the exponential distribution is λ̂ = 0.23904. Thus the maximum likelihood estimate of the mean time to be answered by a customer support representative is µ̂ = 1/0.23904 = 4.18 minutes.[2] Compared to the other estimates of the mean time, 2.09 and 1.4 minutes, which ignored or incorrectly handled the censoring, the ML estimate clearly incorporates the fact that the true times of the censored observations are greater than the reported time.

[2] It is a happy mathematical fact that the MLE of a function f(θ) is the function f(θ̂) of the MLE θ̂.
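As a quick sanity check (a sketch added here, not part of the original text), the exponential model with right censoring has a well-known closed-form MLE: the number of observed events divided by the total of all recorded times.

# Closed-form MLE for the right-censored exponential model:
# lambda.hat = (number of uncensored observations) / (sum of all times).
d = sum(answer)              # 6 customers were actually answered
lambda.hat = d / sum(time)   # 6 / 25.1 = 0.23904, matching the maxLik estimate
1 / lambda.hat               # estimated mean waiting time, about 4.18 minutes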

As you might guess, there's an app for that. Actually, there are many; censored data is a popular topic! As you can see from the example above, you cannot analyze censored data using any data-centered method. Instead, you absolutely must consider the data-generating process Y ~ p(y). You just can't avoid it. You do not have to program the likelihood-based analysis yourself. Instead, you can use the survreg function, which is located in the survival library, as follows.

library(survival)
cens.fit = survreg(Surv(time, answer) ~ 1, dist = "exponential")
summary(cens.fit)

The results are:

Call:
survreg(formula = Surv(time, answer) ~ 1, dist = "exponential")
            Value Std. Error    z        p
(Intercept)  1.43      0.408 3.51 0.000456

Scale fixed at 1

Exponential distribution
Loglik(model)= -14.6   Loglik(intercept only)= -14.6
Number of Newton-Raphson Iterations: 5
n= 12

This model uses the logarithmic link function, and there are no predictor variables, just the intercept term:

ln(µ) = β0

Thus, the estimated mean of the distribution is exp(cens.fit$coefficients), giving 4.18, exactly as we got above doing the likelihood calculations by hand. Also, note that the log likelihood, -14.6, is identical to what we got by hand.

The maximum likelihood method of handling censored data is highly dependent on the choice of distribution. Fortunately, you can compare the fits of different distributions using penalized log likelihoods, as shown in Table 15.0.1. You can see that the exponential distribution fits best, but there is little practical difference in the degree of fit for all of the top four in the list. And, keep in mind that the AIC statistic cannot prove which distribution is right, only which ones are more consistent with your idiosyncratic data.

Table 15.0.1. Fits of different distributions in the call center data with censored observations.

Distribution    AIC
Exponential     31.173
Lognormal       31.540
Loglogistic     32.775
Weibull         32.791
Gaussian        38.942
Logistic        39.610

15.1 Regression Analysis with Censored Data

The analysis developed in the introductory section above extends easily to the regression case. The regression model is, as always, p(y | x). The distributions p(y | x) that you might use, as mentioned above, include Weibull, exponential, Gaussian (aka normal), logistic, lognormal, and loglogistic. You must string these distributions together as a function of X = x to get the regression model p(y | x). You can do this by specifying that the link function of the location parameter (which may or may not be precisely the mean of p(y | x), depending on the distribution) is a linear function of X = x.

As always, we start simply. Suppose we have no censored data. Then we are right back to where we were before this chapter, except now we have a software tool that can fit a variety of distributions p(y | x). Consider the lognormal regression model introduced in Chapter 5, under the heading "Comparing log likelihoods with the income/dependents data set":

charity = read.csv("http://westfall.ba.ttu.edu/isqs5347/charitytax.csv")
attach(charity)
plot(DEPS, Income.AGI)
ln.char = log(Income.AGI)
fit.orig = lm(Income.AGI ~ DEPS)
fit.trans = lm(ln.char ~ DEPS)
LL.orig = logLik(fit.orig)
LL.trans = logLik(fit.trans) - sum(log(Income.AGI))
LL.orig; LL.trans; LL.trans - LL.orig

The results were:

Normal (Gaussian) model log likelihood: -5381.359
Lognormal model log likelihood: -5294.951

The lognormal model fit the data much better, with a log likelihood that is 86.4 points higher than for the Gaussian model.

You may have been a little uncomfortable with the "by hand" nature of the log-likelihood calculation for the lognormal model shown in the above code. No worries, because with R, there's usually an "app for that": If there is something you want, probably someone else wanted it too, and has coded it up for general use. Here, you can get those log-likelihoods easily using the survreg function. Since all data are observed and none are censored, I created a censoring variable called observed with all 1's to indicate such.

n = nrow(charity)
observed = rep(1,n)
fit.normal    = survreg(Surv(Income.AGI, observed) ~ DEPS, dist = "gaussian")
fit.lognormal = survreg(Surv(Income.AGI, observed) ~ DEPS, dist = "lognormal")

These fits give you precisely the same log likelihood statistics as shown above and in Chapter 5, so no manual calculation of log-likelihoods is needed. And, if you are curious about other distributions, you can always give them a try. For example, the exponential distribution fits the data much worse than either the normal or the lognormal, with a log likelihood of -5486.103. The Weibull model fits better than the normal distribution model, with log likelihood -5329.956, but still not as good as the lognormal model.
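If you want to try several distributions at once, a short loop works; here is a minimal sketch (an assumed workflow, not code from the text), using the fact that a survreg fit stores the intercept-only and full-model log likelihoods in its $loglik component:

# Compare candidate distributions for the charity regression by maximized log likelihood.
dists = c("gaussian", "lognormal", "weibull", "exponential")
sapply(dists, function(d) {
  fit = survreg(Surv(Income.AGI, observed) ~ DEPS, dist = d)
  fit$loglik[2]   # second element = log likelihood of the fitted model
})

The resulting values should reproduce the log likelihoods quoted above.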

Example: Survival of Marriage as a Function of Education

At the time of writing this book, the divorce data set was freely available on Germán Rodríguez's teaching web page at Princeton University. The data set contains information on:

id: a couple number.
heduc: education of the husband, coded 0 = less than 12 years, 1 = 12 to 15 years, and 2 = 16 or more years.
heblack: coded 1 if the husband is black and 0 otherwise.
mixed: coded 1 if the husband and wife have different ethnicity (defined as black or other), 0 otherwise.
years: duration of marriage, from the date of wedding to divorce or censoring (due to widowhood or interview).
div: the failure indicator, coded 1 for divorce and 0 for censoring.

You can read this data set as follows:

divorce = read.table("http://westfall.ba.ttu.edu/isqs5349/rdata/divorce.txt")

The censoring variable is 0 if one of the couple died during the study period; it is also 0 if the couple was still married at the time the study ended. The dependent variable, Y = years until divorce, is equal to the value of the years variable when div = 1, and is known to be greater than the value of the years variable when div = 0.

This brings up an interesting point: Why should Y = (years until divorce) be considered as necessarily having a finite result? The usual probability models p(y) for Y presume that people will get divorced eventually, because the usual models all assume Pr(Y < ∞) = 1.0. It might make more sense to model Y using a distribution p(y) that puts positive probability on the outcome Y = ∞, which would allow that, in the extremely hypothetical case of couples living forever, some might never divorce. Such distributions exist! But we will use the standard types of distributions, which assume Pr(Y < ∞) = 1.0, while acknowledging that these distributions may be somewhat restrictive in that Y = ∞ is disallowed.

Another case where Y = ∞ seems plausible is in the (classical, for time-to-event studies) example where Y = time until declared bankruptcy of a business. Why should a business necessarily declare bankruptcy at some point in time? It might never happen, in which case Y = ∞. This technicality about whether Y = ∞ is possible does not occur in the classical survival analysis, where Y = time until death. In that case, there is no debate about whether Y = ∞ is possible: Like taxes, death is certain.

To start, let's model the effect of education on the distribution of time until divorce. The education variable is ordinal, but there are only three levels, so we can use an indicator variable model rather than a linear model without much loss of parsimony (only one more parameter). The indicator variable model also allows non-linear effects of education on the divorce distributions.

As a first step in any data analysis, you should become acquainted with your data.

divorce = read.table("http://westfall.ba.ttu.edu/isqs5349/rdata/divorce.txt")
attach(divorce)
head(divorce)
table(div)

The results are

  id       heduc heblack mixed  years div
1  9 12-15 years      No    No 10.546  No
2 11  < 12 years      No    No 34.943  No
3 13  < 12 years      No    No  2.834 Yes
4 15  < 12 years      No    No 17.532 Yes
5 33 12-15 years      No    No  1.418  No
6 36  < 12 years      No    No 48.033  No

And also:

div
  No  Yes
2339 1032

The table(div) output shows that there are 1,032 divorces (uncensored observations) and 2,339 censored observations, where one member of the couple died, or the couple was still married at the end of the study. Let's explore more to understand what kinds of distributions we might use for time until divorce and the censoring times. The results are shown in Figure 15.1.1.

par(mfrow=c(1,2))
hist(years[div=="Yes"], freq=F, breaks=0:52)
hist(years[div=="No"], freq=F, breaks=0:80)

Figure 15.1.1. Histograms of time until divorce (left panel) and time until censoring (right panel).

The distribution shown in the left panel of Figure 15.1.1 is an estimate of the marginal distribution of Y, but it is biased (shifted to the left) relative to the distribution we want, which is the distribution of time until divorce that we would get if we were able to observe these actual times for everybody. But notice that it is a distribution on positive numbers, and that it is right skewed, so we want to use a distribution that shares these characteristics, such as the lognormal distribution.

The distribution in the right panel of Figure 15.1.1 is the censoring time distribution. It indicates that some couples were married for quite a long time (>60 years) and never divorced. There is no assumption about this distribution, but it is informative about the distribution of Y that we are modeling, because our model assumes that the Y values that we could eventually observe must be greater than the censoring times shown in the right panel of Figure 15.1.1.

Among the models available automatically in survreg, the lognormal model fits best by the AIC statistic. Here is the R code to fit the lognormal model for p(y | x), where X = education coded with indicator variables.

library(survival)
div.ind = ifelse(div=="Yes",1,0)
educ = as.factor(heduc)
fit.lognormal = survreg(Surv(years, div.ind) ~ educ, dist = "lognormal")
summary(fit.lognormal)

The results are given as follows:

Call:
survreg(formula = Surv(years, div.ind) ~ educ, dist = "lognormal")
                   Value Std. Error      z         p
(Intercept)        3.996     0.0688 58.096  0.00e+00
educ12-15 years   -0.274     0.0811 -3.371  7.49e-04
educ16+ years      0.109     0.1266  0.865  3.87e-01
Log(scale)         0.553     0.0241 22.916 3.22e-116

Scale= 1.74

Log Normal distribution
Loglik(model)= -5170.7   Loglik(intercept only)= -5178.9
     Chisq= 16.31 on 2 degrees of freedom, p= 0.00029
Number of Newton-Raphson Iterations: 4
n= 3371

To help understand the output, let's first understand the educ variable a little better: table(educ) gives

 < 12 years 12-15 years   16+ years
       1288        1655         428

The left-out category is < 12 years education, which means we can interpret the remaining parameters as differences from that category. The results are interpreted as follows:

The mean of the logarithm of time until divorce (including unobserved values that are greater than the censoring limits) is estimated to be 0.274 less for the 12-15 years category than for the < 12 years category. This difference is not easily explained by chance alone, assuming a model where there is no difference between Y distributions for those two groups (p = 7.49 × 10^-4).

The mean of the logarithm of time until divorce (including unobserved values that are greater than the censoring limits) is estimated to be 0.109 more for the 16+ years category than for the < 12 years category. This difference is explainable by chance alone, assuming a model where there is no difference between Y distributions for those two groups (p = 0.387).

The estimated lognormal distributions for the three groups are as follows:

Education group            µ̂                        σ̂
< 12 years (n = 1,288)     3.996                    1.74
12-15 years (n = 1,655)    3.996 - 0.274 = 3.722    1.74
16+ years (n = 428)        3.996 + 0.109 = 4.105    1.74

When using survival analysis, you typically will present these distributions graphically using their survivor functions, rather than their density functions. The survivor function is defined as S(y) = Pr(Y > y) = 1 - Pr(Y ≤ y) = 1 - P(y), where P(y) is the cumulative distribution function (cdf) of Y. In the present example, the survivor function has an intuitively appealing interpretation: It is the probability that a marriage will survive for y years. Figure 15.1.2 shows the estimated survivor distributions for data in these three groups.
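As a small side calculation (a hedged sketch, not in the original text), the fitted values of µ̂ imply model-based median times to divorce for each group, because the median of a lognormal distribution is exp(µ). These medians refer to the latent time to divorce, including times that would fall beyond the censoring limits.

# Model-implied median years until divorce, by education group (exp(mu) = lognormal median).
exp(3.996)   # < 12 years:  about 54 years
exp(3.722)   # 12-15 years: about 41 years
exp(4.105)   # 16+ years:   about 61 years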

R code for Figure 15.1.2

b = fit.lognormal$coefficients
b0 = b[1]; b1 = b[2]; b2 = b[3]
m1 = b0
m2 = b0+b1
m3 = b0+b2
s = fit.lognormal$scale
years.graph = seq(0,70,.1)
sdf.y1 = 1 - plnorm(years.graph, m1, s)
sdf.y2 = 1 - plnorm(years.graph, m2, s)
sdf.y3 = 1 - plnorm(years.graph, m3, s)
plot(years.graph, sdf.y1, type="l", ylim = c(0,1),
     xlab="Years Married", ylab="Marriage Survival Probability")
points(years.graph, sdf.y2, type="l", lty=2)
points(years.graph, sdf.y3, type="l", lty=3)
abline(v=20, col="red")
legend(50,.3, c("< 12 Yrs Educ", "12-15 Yrs Educ", ">15 Yrs Educ"), lty=c(1,2,3))

Figure 15.1.2. Estimated marriage survival functions for three different education groups.

From Figure 15.1.2 we see that the highest marriage survival rates are in the highest education group. The middle education group has the lowest marriage survival rate; thus the effect of education on marriage survival is non-linear. Note also, as in all regression models, this does not prove that more education causes greater marriage survival: As always, there may be unobserved confounders.

(Self-study exercise. Fill in the blanks: In this study, unobserved confounders might include ________, ________, and ________. Don't include any of the other variables in the data set in your list. Just use your common sense.)

The estimated marriage survival probability at 20 years for the low education group is calculated from the lognormal distribution as 1 - plnorm(20, m1, s), giving 0.7175. In the other two groups, these probabilities are 0.6621 and 0.7384. Thus, we estimate that 71.75% of marriages in the low education group survive at least 20 years, that 66.21% of the marriages in the middle education group survive at least 20 years, and that 73.84% of the marriages in the high education group survive at least 20 years. You can locate these numbers on the vertical axis of Figure 15.1.2.

These predicted percentages of 20-year marriage survival, 71.75%, 66.21%, and 73.84%, all use the combined data from both the non-censored (divorced) people and the censored (died or study ended) people. As indicated in the introduction to this chapter, any data-based estimates that do not handle the censoring correctly will give biased results. For example, consider the lower education group. If you consider only the couples who got divorced in this group, the 20-year survival rate is very low: There were 393 people in this group, and 75 were divorced after 20 years, leading to a 20-year survival rate of 19.1%. But this result is obviously biased low, because it ignores couples who were never divorced at all. (Self-study problem. Find the 19.1% percentage using R.)

The parametric survival analysis model assumes that the Y variable (observed or not) follows a particular distribution among the list given above, or of a home-grown variety. We assumed the lognormal distribution in the divorce example, but that assumption is questionable. It is a happy fact that you need not assume anything about the distribution of censoring time, but you still must assume that the censoring is non-informative. That is, the distribution of Y should be the same, regardless of whether the observation is censored. This is a difficult assumption to check, because you do not have the actual Y data from the censored cases.

Consider the following scenario in the marriage survival case: Suppose that some of the censored cases due to death of a spouse represent cases where the surviving spouse would have initiated a divorce earlier, but decided to stay in the marriage simply to care for an ailing partner. In such a scenario, the censored cases include cases where the time until divorce would have happened earlier. This means that the censoring is informative about the possible value of Y, and hence the model-based estimates will be biased towards higher marriage survival rates.

Another example of informative censoring occurs in clinical trials that study survival after treatment for a life-threatening medical condition. Many patients are censored because they stopped coming to the clinic for follow-up evaluation, so their survival times are unknown except for being greater than the time of their last clinic visit. But this group of drop-outs tends to include people who are not tolerating the treatment very well, and who typically die sooner, so the censoring is informative. The estimated survival rates will again be biased on the high side.

15.2 The Proportional Hazards Regression Model

A popular alternative to the parametric survival analysis model presented in the previous section is the Cox proportional hazards regression (CPHR) model, named after Sir David Cox, who was made a knight of the British Empire (hence, the "Sir") for developing this methodology. The CPHR model is a standard tool in clinical trials where time until death (or other event, such as next heart attack) is analyzed. The CPHR method has contributed to more lives saved than perhaps any other statistical method.

This model is called semi-parametric because it does not require you to specify any distribution of Y, whether lognormal, Weibull, exponential, etc. But it is partially parametric because it assumes a certain linear model relationship between parameters of the distribution of Y and the X variables.[3]

We will present the model in a slightly different way from other sources. The reason for doing this is in the title of the book, A Conditional Distribution Approach. Throughout this book until now, and even onward, the persistent view of the subject of regression is how to model p(y | x), the distribution of Y as it relates to X = x, and that is how we will present the CPHR model as well.

The CPHR model can be expressed in terms of the survival functions S(y | x), which are directly related to the probability distribution functions p(y | x), since

-(∂/∂y)S(y | x) = -(∂/∂y){1 - P(y | x)} = (∂/∂y)P(y | x) = p(y | x).

Also,

S(y | x) = 1 - P(y | x) = 1 - ∫_0^y p(t | x) dt.

Hence, by specifying the model in terms of S(y | x), you are also specifying the model in terms of p(y | x): If you know one, then you know the other. The CPHR model may be given as follows:[4]

S(y | X1 = x1, ..., Xk = xk) = {S0(y)}^exp(β1x1 + ... + βkxk)

Here, S0(y) = S(y | X1 = 0, ..., Xk = 0) is the baseline survival function, which is the survival distribution of Y when all X variables are 0.

[3] The Gauss-Markov model is similarly semiparametric because it assumes no particular distribution of Y, but does assume a parametric form of the mean function: E(Y | X = x) = β0 + β1x.

[4] The CPHR model is usually presented as a linear model for the log hazards, which can be difficult to interpret. We present it in terms of survival functions to make the connection with conditional distributions as simply as possible, and to allow more easily understandable graphical presentations of the results of the analysis.

The model is nonparametric as regards the baseline distribution S0(y). This distribution can be anything, and is not restricted to lognormal, exponential, or other parametric forms. The estimation procedure is also non-parametric, similar to the empirical distribution function, and even more similar to the version of the empirical distribution function used with censored data that is called the Kaplan-Meier estimate. The parameters (the βs) are estimated using a pseudo-maximum likelihood technique that provides excellent estimates, but unfortunately cannot be compared with the classical likelihood functions that we have discussed previously, so there is no way to tell whether the semi-parametric CPHR model fits the data better than the parametric survival models presented above. Instead, you can compare the results graphically.

You can estimate the CPHR model in the marriage survival study as follows.

divorce = read.table("http://westfall.ba.ttu.edu/isqs5349/rdata/divorce.txt")
library(survival)
divorce$SurvObj <- with(divorce, Surv(years, div == "Yes"))
cox.phrm = coxph(SurvObj ~ heduc, data = divorce)
cox.phrm

The results are as follows:

Call:
coxph(formula = SurvObj ~ heduc, data = divorce)

                    coef exp(coef) se(coef)     z       p
heduc12-15 years  0.2388    1.2697   0.0669  3.57 0.00036
heduc16+ years   -0.0778    0.9251   0.1080 -0.72 0.47106

Likelihood ratio test=17.4  on 2 df, p=0.00017
n= 3371, number of events= 1032

Hence, the estimated model is

Ŝ(y | Educ = x) = {Ŝ0(y)}^exp{0.2388·I(x = "12-15 years") - 0.0778·I(x = "> 15 years")}

where Ŝ0(y) is the estimated survival function where both indicator variables are zero; i.e., Ŝ0(y) is the estimated survival function in the low education group. As is always the case with indicator variables, it is best to separate the model by categories to understand it. Here is the logic.

Education      Estimated Survival Function
< 12 years     Ŝ0(y)
12-15 years    {Ŝ0(y)}^exp(0.2388) = {Ŝ0(y)}^1.270
> 15 years     {Ŝ0(y)}^exp(-0.0778) = {Ŝ0(y)}^0.925
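One way to see this power relationship concretely is to check it numerically; here is a hedged verification sketch (not from the text), using the fitted cox.phrm object:

# Under the CPHR model, the estimated curve for the 12-15 years group should equal
# the < 12 years curve raised to the power exp(0.2388), up to numerical rounding.
s.low = survfit(cox.phrm, newdata = data.frame(heduc = "< 12 years"))
s.mid = survfit(cox.phrm, newdata = data.frame(heduc = "12-15 years"))
head(cbind(s.mid$surv, s.low$surv^exp(coef(cox.phrm)[1])))   # the two columns should agree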

Now, the survivor function S(y) is a probability function, and therefore always a number between 0 and 1. If you raise a number between 0 and 1 to an exponent greater than 1, you get a smaller number, but still one between 0 and 1. (What happens if you raise 0.7 to the 1.2 power?) And if you raise a number between 0 and 1 to an exponent that is between 0 and 1, you get a larger number, but still one between 0 and 1. (What happens if you raise 0.7 to the 0.9 power?) Hence, you know that the CPHR estimate of the survival function for the 12-15 education category is below that of the < 12 category, and that the CPHR estimate of the survival function for the > 15 education category is above that of the < 12 category.

The discussion above explains a seeming contradiction between the computer output for the CPHR and that of the lognormal parametric survival model: The coefficients have opposite signs. A positive coefficient in the CPHR means less survival and a negative coefficient indicates more survival. Thus the Cox model estimates survival functions that look like those in Figure 15.1.2, in the sense that the 12-15 category has the lowest survival and the > 15 category has the highest survival. The results, as always, are best understood graphically.

R code for Figure 15.2.1

divorce = read.table("http://westfall.ba.ttu.edu/isqs5349/rdata/divorce.txt")
library(survival)
divorce$SurvObj <- with(divorce, Surv(years, div == "Yes"))
cox.phrm = coxph(SurvObj ~ heduc, data = divorce)
plot(survfit(cox.phrm, newdata=data.frame(heduc="< 12 years"), conf.int=0), main="Cox Model",
     xlab = "Years Married", ylab="Survival Probability", ylim=c(0,1.0))
lines(survfit(cox.phrm, newdata=data.frame(heduc="12-15 years"), conf.int=0), col="red")
lines(survfit(cox.phrm, newdata=data.frame(heduc="16+ years"), conf.int=0), col="blue")
legend(50,.3, c("< 12 Yrs Educ", "12-15 Yrs Educ", ">15 Yrs Educ"),
       col=c("black","red","blue"), lty=c(1,1,1))

Figure 15.2.1. Estimated survival curves using the CPHR with the divorce data.

You can see that Figure 15.2.1 (semi-parametric survival model) and Figure 15.1.2 (parametric survival model) give similar results. Unfortunately, you can't tell which one is better based on log likelihood.

This is just a brief introduction, and not a definitive treatise on survival analysis using the CPHR. To learn more about survival analysis, you will need to learn about the hazard function, which is in the very name CPHR, but which we have not included here because it is somewhat of a digression from the main theme of this book. Mainly, we just want you to realize that (i) censored data needs special handling, and (ii) the framework within which to view censored data is the same framework within which to view all of regression; namely, as a model for p(y | x).

15.3 The Tobit Model

In Chapter 14, you learned about models that apply when there are lots of 0's in your observable Y data; namely, Poisson and negative binomial regression. For such models, the Y data must be integers, and the values of Y should not be too big (say, not more than 100 if you need an ugly rule of thumb), because such values do not occur with the Poisson and negative binomial distributions when there are also 0's.

The Tobit model is used for cases where (i) there are zeros in the Y data, (ii) the Y data can be (but do not have to be) non-integer, and (iii) the Y data can be numbers that are large, even in the millions or billions. Such cases abound in financial and accounting reporting data. A reported financial or accounting number might be zero for many or even most firms, but can be some large positive amount for others. For example, consider the variable Y = court-mandated payout due to a successful lawsuit in a given year. Most firms report nothing (Y = 0) in this category, but others report varying amounts, in the millions or even billions of dollars, depending on the severity of their infraction. As another example, consider Y = jail time assigned for a convicted defendant. Many convicts are assigned no jail time whatsoever (Y = 0), instead getting probation. Others are assigned jail time with varying lengths. (Self-study question: What X variables might you use to predict Y in the two examples above?)

Your main goal of regression (as described persistently in this book) should be to model p(y | x), the conditional distribution of Y given X = x. The Tobit model is a model p(y | x) that puts a certain probability on the discrete outcome Y = 0, but allows a continuous distribution of Y for Y > 0. For most of you, this is a different animal than you have ever seen or heard of before: a probability distribution that has both discrete and continuous components! You might have assumed that probability distributions could be entirely classified as either discrete or continuous, but now you know that there is another possibility: The Tobit specification of p(y | x) is neither entirely discrete nor entirely continuous.

Figure 15.3.1 shows examples of Tobit distributions p(y | x). Since the total probability is 1.0 (that fact is always true), the probability of the 0 (Pr(Y = 0)) plus the area under the curve to the right of zero (Pr(Y > 0)) must always add to 1.0. Thus, as indicated in Figure 15.3.1, as the Tobit distributions p(y | x) morph to the right, the probability on the discrete (Y = 0) event must decrease. (And if such behavior is not appropriate for your data-generating process, then you should use an alternative to the classic Tobit model.)

R code for Figure 15.3.1

y.list = seq(0,10,.01)
d1 = dnorm(y.list, -2, 3)
d2 = dnorm(y.list, 1, 3)
par(mfrow=c(1,2))
plot(y.list, d1, type="l", yaxs="i", ylim = c(0,.8), xlim = c(-.5,10),
     xlab="Lawsuit Payout (in Millions of $), y", ylab = "p(y | X = Low)")
points(c(0,0), c(0, pnorm(0, -2,3)), type="l", lwd=2, col="blue")
plot(y.list, d2, type="l", yaxs="i", ylim = c(0,.8), xlim = c(-.5,10),
     xlab="Lawsuit Payout (in Millions of $), y", ylab = "p(y | X = High)")
points(c(0,0), c(0, pnorm(0, 1,3)), type="l", lwd=2, col="blue")

Figure 15.3.1. Tobit distributions p(y | x) for different X = x.

You might have a hard time distinguishing the Tobit model from the Poisson/negative binomial models. After all, both models are suggested by Y data that have both zeros and positive values. To make this distinction, you should first understand the distributions p(y | x) before deciding which model is more appropriate. Compare Figure 15.3.1 with Figure 14.0.1 and Figure 14.2.1: Which one is most appropriate for your Y data?

To define the Tobit distributions shown in Figure 15.3.1, the censoring idea is used. Observations at Y = 0 are considered left censored, as if the actual observation could be negative. This is just a convenient fiction that is used to define the model p(y | x), though. Usually, there are no potentially observable negative Y values when you use Tobit models. To define the models p(y | x) as shown in Figure 15.3.1, first define a fictitious random variable Z as follows:

Z = β0 + β1X + σε, where ε ~ N(0, 1).

The Tobit distributions p(y | x), as shown in Figure 15.3.1, are the distributions of Y | X = x, where

Y = Z, if Z > 0, and
Y = 0, if Z ≤ 0.

For example, suppose Z = -3 + 1.0X + 3ε. Suppose also that X = 1 is a low value of X, and X = 4 is a high value of X. Then the Tobit distributions p(y | x) are exactly as shown in Figure 15.3.1. To understand this better, see Figure 15.3.2, which shows simulated Z data, X data, and the associated Y data, according to this model.

R code for Figure 15.3.2

X = rnorm(2000, 2.5, 1)
Z = -3 + 1*X + rnorm(2000, 0, 3)
plot(X, Z, col="pink", xlab = "Net Profits", ylab="Litigation Award (Millions of $)")
Y = ifelse(Z<0, 0, Z)
points(X, Y)
abline(v=c(1,4), col="blue", lwd=2)
abline(-3, 1, col="blue")

Figure 15.3.2. Scatterplot illustrating the Tobit model. Black points are observable data; note that there are many 0's. Pink points are fictitious latent values; these don't really exist but are used to define the model p(y | x): If the pink point is less than zero, then the observed Y is zero. The distributions p(y | X = 1) and p(y | X = 4) are shown by the locations of the Y data (black points) along the vertical lines; these distributions are exactly the ones shown in Figure 15.3.1. The blue upward-sloping line determines the overall level and the rate of morphing of the distributions p(y | x) shown in Figure 15.3.1.
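Under the hypothetical model above, the point mass at Y = 0 is just the normal probability below zero; here is a small sketch (added for illustration, not from the text):

# Pr(Y = 0 | x) = Pr(Z <= 0 | x) = pnorm(0, mean = -3 + 1.0*x, sd = 3).
pnorm(0, -3 + 1.0*1, 3)   # X = 1 (low):  about 0.75
pnorm(0, -3 + 1.0*4, 3)   # X = 4 (high): about 0.37

As the distribution shifts to the right with larger x, this discrete probability shrinks, which is exactly the "morphing" behavior described above.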

You can estimate the Tobit model using the survreg function presented above, but you have to specify left censoring rather than the default of right censoring.

Example: Predicting number of days lost to back injury

Workers' compensation is a huge expense. In manufacturing and warehousing, much of this expense is due to lower back injuries. The Industrial Engineering department at a large southwestern university developed an index of job-induced stress (X = Stress Index) on the lower back, and used it to predict Y = number of days lost to back injury, for a collection of warehouse workers. The regression model is p(y | X = x, θ), which is determined from

Z = β0 + β1x + σε
Y = Z, if Z > 0,
Y = 0, if Z ≤ 0.

While the classical Tobit model assumes a normal distribution for ε, this assumption can easily be relaxed to allow non-normal distributions using the survreg function. The parameters θ = (β0, β1, σ) are estimated using maximum likelihood; as indicated above, censored observations (yi = 0) contribute the probability Pr(Zi ≤ 0 | xi, θ) to the likelihood function, while noncensored observations (yi > 0) contribute the density p(zi | xi, θ) to the likelihood function.

First (always first!), you should get to know your data.

back = read.csv("http://westfall.ba.ttu.edu/isqs5347/liles.csv")
Y = back$dayslost
X = back$alr
plot(jitter(X), Y, xlab="Index of Lower Back Stress", ylab="Days Lost")

Figure 15.3.3. Work days lost to back injury as a function of the index of stress on the lower back.

Notice in Figure 15.3.3 that there seem to be more workers losing time to back injury for larger back stress, but there is also an extreme outlier in the upper left that goes against this trend. To fit the Tobit model, use the following code.

fit.normal <- survreg(Surv(Y, Y>0, type='left') ~ X, dist='gaussian')
summary(fit.normal)

The output is:

Call:
survreg(formula = Surv(Y, Y > 0, type = "left") ~ X, dist = "gaussian")
              Value Std. Error     z        p
(Intercept) -112.25     30.655 -3.66 2.50e-04
X             14.71      8.759  1.68 9.31e-02
Log(scale)     4.08      0.207 19.68 3.45e-86

Scale= 59

Gaussian distribution
Loglik(model)= -121.1   Loglik(intercept only)= -122.8
     Chisq= 3.47 on 1 degrees of freedom, p= 0.063
Number of Newton-Raphson Iterations: 5
n= 206

The Tobit model estimates that p(y | x) is the N(-112.25 + 14.71x, 59^2) density for y > 0, with Pr(Y = 0 | x) estimated to be the probability that a N(-112.25 + 14.71x, 59^2) random variable is less than 0. Figure 15.3.4 shows the estimates of these distributions for X = 0.5 and X = 3.0; the shaded area corresponds to the probability that Y = 0.

R code for Figure 15.3.4

ylist = seq(-300, 200, .1)
d1 = dnorm(ylist, -112.25 + 14.71*.5, 59)
d2 = dnorm(ylist, -112.25 + 14.71*3, 59)
par(mfrow=c(1,2))
plot(ylist, d1, type="l", yaxs="i", ylim = c(0,.008), xaxt="n",
     xlab = "Days Lost, y", ylab = "p(y | Stress Index = 0.5)")
axis(1, at = c(0,50,100,150,200), lab=c(0,50,100,150,200))
x1 = seq(-300, 0, .1); y1 = dnorm(x1, -112.25 + 14.71*.5, 59)
col1 <- rgb(.2,.2,.2,.5)
polygon(c(x1,rev(x1)), c(rep(0,length(y1)), rev(y1)), col=col1, border=NA)
plot(ylist, d2, type="l", yaxs="i", ylim = c(0,.008), xaxt="n",
     xlab = "Days Lost, y", ylab = "p(y | Stress Index = 3.0)")
axis(1, at = c(0,50,100,150,200), lab=c(0,50,100,150,200))
x2 = seq(-300, 0, .1); y2 = dnorm(x2, -112.25 + 14.71*3, 59)
col1 <- rgb(.2,.2,.2,.5)
polygon(c(x2,rev(x2)), c(rep(0,length(y2)), rev(y2)), col=col1, border=NA)

Figure 15.3.4. Estimated distributions p(y | X = x) for Y = days lost to back injury as related to X = lower back stress index. Only the part of the curve for y ≥ 0 is part of this distribution. For y > 0, p(y | x) is the actual density. For y = 0, the probability is given by the area of the shaded region. Left panel shows estimate of p(y | X = 0.5); right panel shows estimate of p(y | X = 3.0).
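The fitted parameters also give closed-form summaries of p(y | x); here is a hedged sketch using standard censored-normal algebra (these particular calculations are not in the text):

# Point mass at zero and implied mean of Y under the fitted Tobit model.
mu  = -112.25 + 14.71 * c(0.5, 3.0)           # estimated latent means at X = 0.5 and X = 3.0
sig = 59
pnorm(0, mu, sig)                             # Pr(Y = 0 | x): roughly 0.96 and 0.88
mu * pnorm(mu/sig) + sig * dnorm(mu/sig)      # E(Y | x) = E[max(Z, 0)]: roughly 0.9 and 3.7 days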

Notice in the output that the trend towards higher days lost for larger stress index (represented by the morphing to the right shown in the graphs of Figure 15.3.4) is explainable by chance alone (p = 9.31e-02 = 0.0931 does not meet the standard 0.05 threshold). But note also the outlier in Figure 15.3.3 of y = 152 days. As seen in Figure 15.3.4, this point has very low likelihood under the normal model. Since the technique is maximum likelihood, this point has had inordinate influence in pulling the mean of the distribution upwards, toward the outlier, for smaller X.

A better model would be one that accommodates such outliers. In particular, a more heavy-tailed distribution would allow more extreme outliers to exist without pulling the mean function inappropriately toward the outlier. The survreg function allows two such distributions, the T distribution (the software defaults to df = 4, but you can pick other df) and the logistic distribution. These distributions are symmetric and range from -∞ to ∞, like the normal distribution, but have heavier tails than the normal distribution, and therefore their likelihood functions are not so influenced by outliers. Changing dist = "gaussian" to dist = "t" and dist = "logistic" gives the following results:

Distribution    AIC
Gaussian        248.1868
T(4)            233.4808
Logistic        240.6989
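A comparison like this can be produced with a short loop; here is a hedged sketch (an assumed workflow, not code from the text), counting three estimated parameters in each fit (two regression coefficients plus one scale parameter):

# AIC = -2*logLik + 2*k for each candidate error distribution.
for (d in c("gaussian", "t", "logistic")) {
  fit = survreg(Surv(Y, Y>0, type='left') ~ X, dist = d)
  cat(d, -2*fit$loglik[2] + 2*3, "\n")
}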

Thus, the T distribution provides the best fit among the three. And, this distribution gives results not so heavily influenced by the outlier, where the apparent trend in the distributions is not so easily explained by chance alone (p = 4.22 × 10^-2 = 0.0422).

Call:
survreg(formula = Surv(Y, Y > 0, type = "left") ~ X, dist = "t")
             Value Std. Error     z        p
(Intercept) -37.17     12.984 -2.86 4.20e-03
X             6.78      3.335  2.03 4.22e-02
Log(scale)    2.62      0.314  8.34 7.22e-17

Scale= 13.8

Student-t distribution: parmameters= 4
Loglik(model)= -113.7   Loglik(intercept only)= -117
     Chisq= 6.48 on 1 degrees of freedom, p= 0.011
Number of Newton-Raphson Iterations: 8
n= 206

While it is sensible that non-normal distributions should be considered in this example, the particular strategy used here (pick the model with the lowest AIC and use that one) tends to give p-values that are biased low.[5] Hence, we could be accused of "p-hacking" in the above to get a p-value (0.0422) that met the standard p < 0.05 threshold.

[5] You can easily demonstrate this fact by using simulation.

15.4 Interval Censored Data

Have you ever taken a survey where they asked you your income, and you were supposed to select one of the following categories?

Income less than 10,000
Income between 10,001 and 20,000
Income between 20,001 and 50,000
Income between 50,001 and 120,000
Income greater than 120,001

These are called interval censored data, because the actual Y value is unknown, except that it lies in a known interval. Did you know that you could estimate the distribution of the actual (unobserved) Y, p(y), using such data? The lognormal model would fit well. You can also model how this distribution morphs with X = x, i.e., you can model the distribution p(y | x), using such data. And you can compare different distributions fit to the data using penalized log-likelihoods.

For example, consider the following such data based on n = 145 survey respondents.

Category                                 Count
Income less than or equal to 10,000        11
Income between 10,000 and 20,000           32
Income between 20,000 and 50,000           78
Income between 50,000 and 120,000          20
Income greater than 120,000                 4
Total                                     145

Let P(y; θ) denote the cumulative distribution function corresponding to whatever model p(y; θ) you consider; e.g., lognormal. Assuming that the n = 145 potentially observable responses are independent, the likelihood function for the sample is

{P(10,000; θ)}^11 × {P(20,000; θ) - P(10,000; θ)}^32 × {P(50,000; θ) - P(20,000; θ)}^78 × {P(120,000; θ) - P(50,000; θ)}^20 × {1 - P(120,000; θ)}^4

Given a particular distribution p(y; θ), you can estimate θ by maximizing this likelihood, and you can compare different distributions via their AIC statistics. This also extends to regression, where each individual observation i contributes a different amount to the likelihood, depending on the xi value.
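Here is a minimal sketch (a hypothetical illustration, not code from the text) of maximizing this interval-censored likelihood for a lognormal model, written directly from the expression above:

# Fit a lognormal distribution to the grouped income counts by maximum likelihood.
counts = c(11, 32, 78, 20, 4)
cuts   = c(10000, 20000, 50000, 120000)
negll = function(theta) {
  mu = theta[1]; sig = exp(theta[2])          # log(sigma) keeps the scale positive
  P = plnorm(cuts, mu, sig)                   # cdf at the category boundaries
  probs = c(P[1], diff(P), 1 - P[4])          # probability of each of the five categories
  -sum(counts * log(probs))                   # negative log likelihood
}
est = optim(c(log(30000), 0), negll)
exp(est$par[1])                               # estimated median income under the fitted lognormal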

Example: Modelling the rating distribution of teaching for a university's faculty over time.

A large southwestern university performs exit surveys of its graduate students upon their graduation. One survey item is "Rate the Faculty Teaching," to which the students answer 1, 2, 3, 4, or 5, with 5 being highest. Another variable is Year, ranging from 1993 to 2002. The question: Have perceptions of the teaching of the university's faculty changed over the time horizon?

While these data can be modelled using ordinal logistic regression as shown in the preceding chapter, a related, and in some ways simpler and more intuitive, model is a censored data model. The assumption of the model is that students may have a continuous version of the 1, ..., 5 rating scale in their minds, and they simply round this number off to the nearest integer to arrive at their rating. If a student's continuous rating is, for example, 3.871, then the student rounds that off to 4 and answers "4." Another student may really have liked the teaching, and have a continuous mental rating of 6.231. Since the rounded-off number is 6, which is not on the scale, the student picks the best available option, 5.

The model for the latent continuous responses is

Z = β0 + β1·Year + ε,

where the ε are assumed to be N(0, σ^2). The observed responses, Y, are rounded-off versions of the latent Z:

If Z < 1.5, then Y = 1.
If 1.5 ≤ Z < 2.5, then Y = 2.
If 2.5 ≤ Z < 3.5, then Y = 3.
If 3.5 ≤ Z < 4.5, then Y = 4.
If 4.5 ≤ Z, then Y = 5.

The following code estimates the model, compares the model to OLS, and draws comparative graphs.

g = read.csv("http://westfall.ba.ttu.edu/isqs6348/rdata/pgs.csv")
new.g = data.frame(g$facteaching, g$survyear)
new.g = na.omit(new.g)   # To remove missing values
colnames(new.g) = c("Y", "X")
attach(new.g)
X = X - 1993   ## So that 1993 is "year 0"
library(maxLik)
loglik <- function(param) {
  b0 = param[1]
  b1 = param[2]
  l.sig = param[3]
  sig = exp(l.sig)
  mu = b0 + b1*X
  L = ifelse(Y==1, pnorm(1.5, mu, sig),
      ifelse(Y==2, pnorm(2.5, mu, sig) - pnorm(1.5, mu, sig),
      ifelse(Y==3, pnorm(3.5, mu, sig) - pnorm(2.5, mu, sig),
      ifelse(Y==4, pnorm(4.5, mu, sig) - pnorm(3.5, mu, sig),
                   1 - pnorm(4.5, mu, sig) ))))
  ll = sum(log(L))
  ll
}
res <- maxLik(loglik, start=c(0,0,0))
summary(res)
OLS = lm(Y ~ X)
summary(OLS)

X.plot = 1993:2002
c = res$estimate
c0 = c[1]
c1 = c[2]
Fit.ML = c0 + c1*(X.plot-1993)
par(mfrow=c(1,2))
plot(X.plot, Fit.ML, type="l", ylim = c(3.5,4.5), col="red", lty=2,
     ylab="Rating of Teaching", xlab="Year")
b0 = OLS$coefficients[1]
b1 = OLS$coefficients[2]
Fit.OLS = b0 + b1*(X.plot-1993)
points(X.plot, Fit.OLS, type="l", col="blue")
## Estimated distributions for Year 2000
rating.plot = seq(0,7,.01)
plot(rating.plot, dnorm(rating.plot, b0+b1*(2000-1993), summary(OLS)$sigma),
     type="l", col = "blue", yaxs="i", ylim = c(0,.5),
     ylab="p(y | Year=2000)", xlab ="Rating, y")
points(rating.plot, dnorm(rating.plot, c0+c1*(2000-1993), exp(c[3])), type="l", lty=2, col="red")
points(c(1.5,1.5), c(0,dnorm(1.5, c0+c1*(2000-1993), exp(c[3]))), type="l", lty=2, col="red")
points(c(2.5,2.5), c(0,dnorm(2.5, c0+c1*(2000-1993), exp(c[3]))), type="l", lty=2, col="red")
points(c(3.5,3.5), c(0,dnorm(3.5, c0+c1*(2000-1993), exp(c[3]))), type="l", lty=2, col="red")
points(c(4.5,4.5), c(0,dnorm(4.5, c0+c1*(2000-1993), exp(c[3]))), type="l", lty=2, col="red")

Figure 15.4.1. Left panel: Estimated regression functions using OLS (blue solid line) and interval-censored ML (red dashed line) for the teacher rating data. Right panel: Estimated rating distributions in Year 2000 assuming OLS (blue solid curve) and assuming a latent, interval-censored rating (red dashed curve). Probabilities as estimated by the interval-censored model are areas under the red dashed curve, with boundaries indicated by the red dashed lines.

Note the following about Figure 15.4.1.

The latent trend line is higher than the OLS line. Recall that the latent trend line is not an estimate of the mean of Y, as is OLS; rather, it is an estimate of the mean of the underlying latent rating. When there are many numbers in the "5" category, the latent rating can be much larger than 5, so that there is a large probability in the "5" category, giving larger average values of Z than of Y.

The latent trend line is also steeper than the OLS line. To understand why, consider a more extreme case where the ratings become mostly "5" in later years. OLS fits a line to the actual data, and it would have to flatten to accommodate such data. The latent line, on the other hand, would be much steeper, because most of the latent ratings would be well above 4.5 for large X. Hence, the OLS estimated slope can be considered attenuated towards smaller values due to the boundary effect of the Y measurement.

The trends in both the OLS and censored data analyses are significant, with p-values 0.032 and 0.021, respectively. Because the boundary constraint attenuated the OLS slope coefficient toward 0.0, it also made the slope less significant.

As shown in the right panel of Figure 15.4.1, the implicit normal distribution of the OLS estimated model is clearly wrong, not only because it assumes the Y data are continuous, but also because it predicts that some data will be more than 5. The latent model, on the other hand, is meaningful even without having to assume the existence of a true latent measurement. As shown in the right panel of the graph, the model gives probabilities for the discrete outcomes Y = 1, ..., 5 of the observed data.

Ordinal logistic regression is a clear competitor to the interval censored data formulation. An advantage of the interval censored formulation is that the latent variable Z is more clearly related to the dependent variable Y, having the same 1, ..., 5 reference. A disadvantage of the interval censored formulation is that it is more rigid, with pre-defined boundaries between categories. The ordinal logistic regression model, on the other hand, estimates those boundaries using the data to better match the actual proportions in the data set. The icenReg package, available for use with R, accommodates interval-censored data.
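As a closing aside (a sketch under the assumption that the survival package's "interval2" coding is used, where an NA endpoint marks an unbounded interval), the same latent-rating model can also be fit with survreg by expressing each observed rating as an interval for the latent Z:

# Interval-censored fit of the latent rating model with survreg (hypothetical illustration).
library(survival)
lower = ifelse(Y == 1, NA, Y - 0.5)   # Y = 1 means Z < 1.5 (left censored)
upper = ifelse(Y == 5, NA, Y + 0.5)   # Y = 5 means Z >= 4.5 (right censored)
fit.int = survreg(Surv(lower, upper, type = "interval2") ~ X, dist = "gaussian")
summary(fit.int)   # estimates should closely match the maxLik fit above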