Binary Response: Logistic Regression. STAT 526 Professor Olga Vitek
1 Binary Response: Logistic Regression. STAT 526, Professor Olga Vitek. March 29
2 Model Specification and Interpretation
3 Probability Distribution of a Binary Outcome Y

In many situations, the response variable has only two possible outcomes:
- Disease (Y = 1) vs. not diseased (Y = 0)
- Employed (Y = 1) vs. unemployed (Y = 0)
The response is binary (dichotomous), and can be modeled using the Bernoulli distribution:
Pr{Y_i = 1} = π_i and Pr{Y_i = 0} = 1 − π_i
E{Y_i} = π_i, Var{Y_i} = π_i(1 − π_i)
4 Goal: Express E{Y} as a Function of a Covariate X

The simple regression E{Y_i} = β_0 + β_1 X_i is not appropriate. It violates several assumptions:
(1) It does not enforce the constraint 0 ≤ E{Y_i} ≤ 1
(2) The distribution of ε is non-normal (binary):
    when Y_i = 0: ε_i = −β_0 − β_1 X_i
    when Y_i = 1: ε_i = 1 − β_0 − β_1 X_i
(3) The variance is non-constant:
    Var{Y_i} = π_i(1 − π_i) = (β_0 + β_1 X_i)(1 − β_0 − β_1 X_i)
5 Solution: A Generalized Linear Model

A generalized linear model is
E{Y_i} = g(β_0 + β_1 X_i), or g^{-1}(E{Y_i}) = β_0 + β_1 X_i
where g is a sigmoid function with values in (0, 1).
- g is called the mean response function
- g^{-1} is called the link function
A choice of g produces different models:
- g(t) = t (identity): linear regression
- g(t) = Φ(t), the standard Normal CDF: probit regression
- g(t) = exp(t)/(1 + exp(t)), the CDF of the logistic distribution: logistic regression
6 Motivation for Probit Regression: Latent Variable

Assume that the binary response is guided by a non-observed continuous variable.
Example: a linear model for blood pressure, bp = β_0 + β_1 age + ε, where we only observe
Y = 1 (disease) if blood pressure > c, and Y = 0 (healthy) if blood pressure ≤ c.

Pr{Y = 1} = Pr{bp > c} = Pr{β_0 + β_1 age + ε > c}
          = Pr{−ε < β_0 − c + β_1 age}
          = Pr{−ε/σ < (β_0 − c)/σ + (β_1/σ) age}
          = Pr{z < β_0* + β_1* age} = Φ(β_0* + β_1* age)
with rescaled coefficients β_0* = (β_0 − c)/σ and β_1* = β_1/σ.
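The latent-variable derivation can be checked numerically. A minimal sketch, assuming hypothetical values for β_0, β_1, σ and the threshold c (none of these appear in the notes): simulate the latent blood pressure and compare the empirical Pr{Y = 1} with the probit form.

```python
import numpy as np
from math import erf, sqrt

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(t / sqrt(2)))

# Hypothetical latent-variable model, for illustration only:
# bp = beta0 + beta1*age + eps, eps ~ N(0, sigma^2); disease if bp > c
beta0, beta1, sigma, c = 100.0, 0.8, 15.0, 140.0
age = 55.0

rng = np.random.default_rng(1)
bp = beta0 + beta1 * age + sigma * rng.normal(size=200_000)
p_empirical = np.mean(bp > c)

# Probit form: Pr{Y = 1} = Phi((beta0 - c)/sigma + (beta1/sigma)*age)
p_probit = Phi((beta0 - c) / sigma + (beta1 / sigma) * age)
```

The simulated frequency and the probit expression agree up to Monte Carlo error.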
7 Logistic Response Function and Logistic Regression

A sigmoidal response function:
E{Y_i} = exp(β_0 + β_1 X_i) / (1 + exp(β_0 + β_1 X_i)) = 1 / (1 + exp(−β_0 − β_1 X_i))
- A monotone increasing/decreasing function
- Explicit functional form
- Restricts 0 ≤ E{Y_i} ≤ 1
- An example of a nonlinear model

Logit link function:
log( E{Y_i} / (1 − E{Y_i}) ) = log( π_i / (1 − π_i) ) = β_0 + β_1 X_i
8 Probability Distribution of Y in Logistic Regression

Y_i are independent but not identically distributed Bernoulli random variables:
Y_i ~ind Bernoulli(π_i), where π_i = exp(β_0 + β_1 X_i) / (1 + exp(β_0 + β_1 X_i))
Note: no more error term!

Probability density of Y_i: f(Y_i) = π_i^{Y_i} (1 − π_i)^{1 − Y_i}
Least squares estimates are inappropriate; use maximum likelihood for parameter estimation.
9 Simple Logistic Regression: Interpretation of b_1

Fitted value for individual i: π̂_i = exp(b_0 + b_1 X_i) / (1 + exp(b_0 + b_1 X_i))

Fitted logistic response (i.e., fitted log odds):
log_e odds(X_i) = log_e( π̂(X_i) / (1 − π̂(X_i)) ) = b_0 + b_1 X_i

b_1 is the slope of the fitted logistic response, and is interpreted as a log(odds ratio):
b_1 = log_e( π̂(X_i + 1) / (1 − π̂(X_i + 1)) ) − log_e( π̂(X_i) / (1 − π̂(X_i)) )
    = log_e( odds(X_i + 1) / odds(X_i) )
so that odds ratio(X_i) = odds(X_i + 1) / odds(X_i) = exp(b_1)
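The odds-ratio interpretation of b_1 can be verified directly: for a fitted logistic response, the odds at X + 1 divided by the odds at X equal exp(b_1) at every X. A minimal sketch with hypothetical coefficients b_0, b_1 (not from any fit in the notes):

```python
import math

# Hypothetical fitted coefficients, for illustration only
b0, b1 = -2.0, 0.7

def pi_hat(x):
    """Fitted probability from the logistic response function."""
    e = math.exp(b0 + b1 * x)
    return e / (1 + e)

def odds(x):
    return pi_hat(x) / (1 - pi_hat(x))

# The odds ratio for a one-unit increase in X is exp(b1), regardless of x
for x in (0.0, 1.5, 10.0):
    assert abs(odds(x + 1) / odds(x) - math.exp(b1)) < 1e-9
```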
10 Estimation by Maximum Likelihood

Y_i ~ind Bernoulli(π_i), where π_i = exp(β_0 + β_1 X_i) / (1 + exp(β_0 + β_1 X_i))

Log-likelihood:
log_e L = log ∏_{i=1}^n π_i^{Y_i} (1 − π_i)^{1 − Y_i}
        = Σ_{i=1}^n Y_i log(π_i) + Σ_{i=1}^n (1 − Y_i) log(1 − π_i)
        = Σ_{i=1}^n Y_i log( π_i / (1 − π_i) ) + Σ_{i=1}^n log(1 − π_i)
        = Σ_{i=1}^n Y_i (β_0 + β_1 X_i) − Σ_{i=1}^n log(1 + exp(β_0 + β_1 X_i))

The MLEs do not have closed forms.
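Since the MLEs have no closed form, they are found iteratively. A minimal sketch of Fisher scoring (Newton-Raphson) for the log-likelihood above, on simulated data; the dataset and true coefficients are made up for illustration, since the notes give no data:

```python
import numpy as np

# Simulate a simple logistic regression (hypothetical truth, for illustration)
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # columns: 1, X_i
beta_true = np.array([-0.5, 1.0])
Y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

# Fisher scoring: beta <- beta + I(beta)^{-1} * score(beta)
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    score = X.T @ (Y - p)                         # gradient of log L
    info = X.T @ (X * (p * (1 - p))[:, None])     # Fisher information
    step = np.linalg.solve(info, score)
    beta += step
    if np.max(np.abs(step)) < 1e-10:              # converged
        break
```

At convergence the score is (numerically) zero, and beta approximates the true coefficients.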
11 Equivalent Specification: Binomial Distribution

Change of notation. Data: (Y_ij, n_i, X_i), i = 1, 2, ..., c
- X_i: predictor for observation (covariate configuration) i
- n_i: # of Bernoulli trials in observation i
- Y_i := Σ_{j=1}^{n_i} Y_ij

Model: Y_i ~ind Binomial(n_i, π_i), where π_i = exp(β_0 + β_1 X_i) / (1 + exp(β_0 + β_1 X_i))

Log-likelihood:
log_e L = Σ_{i=1}^c log[ C(n_i, Y_i) π_i^{Y_i} (1 − π_i)^{n_i − Y_i} ]
        = Σ_{i=1}^c { Y_i log(π_i) + (n_i − Y_i) log(1 − π_i) + log C(n_i, Y_i) }
        = Σ_{i=1}^c { Y_i log( π_i / (1 − π_i) ) + n_i log(1 − π_i) + log C(n_i, Y_i) }
where C(n_i, Y_i) denotes the binomial coefficient.
12 Equivalence of Bernoulli and Binomial Models

The Binomial log-likelihood equals the Bernoulli log-likelihood, up to a constant:
log_e L_Binomial = Σ_{i=1}^c { Y_i log(π_i) + (n_i − Y_i) log(1 − π_i) } + constant
 = Σ_{i=1}^c { Σ_{j=1}^{n_i} Y_ij log(π_i) + (n_i − Σ_{j=1}^{n_i} Y_ij) log(1 − π_i) } + constant
 = Σ_{i=1}^c Σ_{j=1}^{n_i} { Y_ij log( π_i / (1 − π_i) ) + log(1 − π_i) } + constant
 = log_e L_Bernoulli + constant

Both models lead to the same parameter estimates and inferences, but have different deviances.
13 Prospective and Retrospective Studies

Prospective study: fix the predictors, observe the outcome.
- E.g., recruit patients with 2 genotypes, compare occurrence of disease.
- Expensive; large variance for rare diseases.

Retrospective (= case-control) study: fix the outcome, observe the predictors.
- E.g., recruit patients with and without the disease, compare the genotypes.
- Cheaper.

The log odds ratio based on logistic regression is the same for both studies. This is not true for other link functions.
14 Prospective Model With Retrospective Data: Formulation

Assume a prospective model:
π(X_i) = P(Y_i = 1 | X_i) = exp(β_0 + β_1 X_i) / (1 + exp(β_0 + β_1 X_i))

However, the subjects are selected retrospectively. The random variable is Z_i = 1{subject i is included}, conditional on Y. Denote
θ_0 = P(Z_i = 1 | Y_i = 0) and θ_1 = P(Z_i = 1 | Y_i = 1)

Note that θ_0 and θ_1 are independent of X_i; otherwise the sampling introduces selection bias.
Of interest in the retrospective study is P(Y_i = 1 | X_i, Z_i = 1).
15 Prospective Model With Retrospective Data: Estimation of log(OR)

With retrospective sampling, we model:
P(Y_i = 1 | X_i, Z_i = 1)
 = P(Y_i = 1, Z_i = 1 | X_i) / P(Z_i = 1 | X_i)
 = P(Y_i = 1, Z_i = 1 | X_i) / [ P(Y_i = 0, Z_i = 1 | X_i) + P(Y_i = 1, Z_i = 1 | X_i) ]
 = θ_1 π(X_i) / [ θ_0 {1 − π(X_i)} + θ_1 π(X_i) ]
 = θ_1 e^{β_0 + β_1 X_i} / ( θ_0 + θ_1 e^{β_0 + β_1 X_i} )
 = e^{log(θ_1/θ_0) + β_0 + β_1 X_i} / ( 1 + e^{log(θ_1/θ_0) + β_0 + β_1 X_i} )

We uncover the same β_1 as in the prospective study:
log{ P(Y_i = 1 | X_i, Z_i) / P(Y_i = 0 | X_i, Z_i) } = [log(θ_1/θ_0) + β_0] + β_1 X_i
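The algebra above can be checked numerically: the retrospective probability θ_1 π / (θ_0(1 − π) + θ_1 π) coincides, at every X, with a logistic model whose slope is the prospective β_1 and whose intercept is shifted by log(θ_1/θ_0). A sketch with hypothetical values of β_0, β_1, θ_0, θ_1:

```python
import math

def expit(t):
    return 1 / (1 + math.exp(-t))

# Hypothetical prospective model and inclusion probabilities (illustration)
beta0, beta1 = -1.0, 0.8
theta0, theta1 = 0.05, 0.90   # P(Z=1|Y=0), P(Z=1|Y=1)

for x in (-2.0, 0.0, 1.3, 4.0):
    pi = expit(beta0 + beta1 * x)                        # prospective P(Y=1|X)
    p_retro = theta1 * pi / (theta0 * (1 - pi) + theta1 * pi)
    # Logistic with the same slope beta1, intercept log(theta1/theta0) + beta0
    assert abs(p_retro - expit(math.log(theta1 / theta0) + beta0 + beta1 * x)) < 1e-9
```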
16 Multiple Logistic Regression

Extension of the response function:
E{Y_i} = exp(X_i' β) / (1 + exp(X_i' β))

Extension of the logit link function:
log_e( E{Y_i} / (1 − E{Y_i}) ) = log_e( π_i / (1 − π_i) ) = X_i' β

The multivariate log-likelihood:
log_e L(β) = Σ_{i=1}^n Y_i (X_i' β) − Σ_{i=1}^n log_e[ 1 + exp(X_i' β) ]

Interpretation of b_j: log_e( odds(X_j + 1) / odds(X_j) ), while the other predictors are held fixed.
17 Testing
18 Asymptotic Properties of β̂

- Asymptotic existence and uniqueness: P{β̂ exists and is unique} → 1 as n → ∞
- Consistency: β̂ → β as n → ∞
- Asymptotic Normality: β̂ ~approx N(β, I(β̂)^{-1}) as n → ∞
- Asymptotic efficiency: the MLE has asymptotically smaller variance than many other estimators
19 Inference About an Individual β_j: Wald Test

Test H_0: β_j = 0 versus H_a: β_j ≠ 0.
Test statistic: z* = (b_j − 0) / s{b_j}
Approximate variance s^2{b}, from the observed information:
s^2{b} = [ −∂^2 log_e L(β) / ∂β ∂β' evaluated at β = b ]^{-1}

Approximate distribution of z*: z* ~ N(0, 1); alternatively, (z*)^2 ~ χ^2_1.
Reject H_0 if |z*| > z_{1−α/2}.
CI for β_j: b_j ± z_{1−α/2} s{b_j}
20 Simultaneous Inference About Several β_j: Likelihood Ratio Test

Multivariate logistic regression:
log( E{Y_i} / (1 − E{Y_i}) ) = β_0 + β_1 X_i1 + ... + β_{p−1} X_{i,p−1}

Test H_0: β_1 = β_2 = ... = β_q = 0 versus H_a: not all of β_1, β_2, ..., β_q = 0.

Test statistic:
G^2 = −2 log_e[ L(reduced model) / L(full model) ]
    = −2 [ log_e L(reduced model) − log_e L(full model) ]

Approximate distribution of G^2 for large n: χ^2_q. Reject H_0 if G^2 > χ^2(1 − α; q).
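A sketch of the likelihood ratio test, fitting nested models by Fisher scoring on simulated data (all values hypothetical). Here X_2 is unrelated to the response, so H_0: β_2 = 0 holds and G^2 should behave like a draw from χ^2_1:

```python
import numpy as np

def fit_logistic(X, Y, iters=30):
    """Fisher scoring; returns (beta_hat, maximized log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        beta += np.linalg.solve(X.T @ (X * (p * (1 - p))[:, None]), X.T @ (Y - p))
    p = 1 / (1 + np.exp(-X @ beta))
    return beta, np.sum(Y * np.log(p) + (1 - Y) * np.log(1 - p))

# Simulated data (illustration): X2 plays no role in generating Y
rng = np.random.default_rng(2)
n = 800
x1, x2 = rng.normal(size=n), rng.normal(size=n)
Y = rng.binomial(1, 1 / (1 + np.exp(-(-0.3 + 0.9 * x1))))

X_full = np.column_stack([np.ones(n), x1, x2])
X_reduced = np.column_stack([np.ones(n), x1])
_, ll_full = fit_logistic(X_full, Y)
_, ll_reduced = fit_logistic(X_reduced, Y)

G2 = -2 * (ll_reduced - ll_full)   # compare to chi-square with q = 1 df
```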
21 Comments: Wald Test Versus Likelihood Ratio Test

Both tests are approximate, for large n.
- Likelihood Ratio test: tests β_j = 0, for one or for several β_j simultaneously; no other values of β_j, and no one-sided tests.
- Wald test: can also test β_j = β_j*, one-sided alternatives, or linear combinations of the β_j of interest.
When testing a single H_0: β_j = 0, the tests may lead to different conclusions due to the approximate nature of the tests, unlike in linear regression.
22 Quality of Fit: Replicated Data
23 Lack of Fit for Replicated Data: Pearson χ^2

H_0: log( E{Y_ij} / (1 − E{Y_ij}) ) = β_0 + β_1 X_i1 + ... + β_{p−1} X_{i,p−1}  vs
H_a: log( E{Y_ij} / (1 − E{Y_ij}) ) ≠ β_0 + β_1 X_i1 + ... + β_{p−1} X_{i,p−1}

c covariate configurations, each with n_i cases.

Observed counts:
- O_i0: observed # of 0's in configuration i
- O_i1: observed # of 1's in configuration i
Expected counts:
- E_i0 = n_i (1 − π̂_i): expected # of 0's in configuration i
- E_i1 = n_i π̂_i: expected # of 1's in configuration i

Test statistic:
X^2 = Σ_{i=1}^c Σ_{k=0}^1 (O_ik − E_ik)^2 / E_ik; reject H_0 if X^2 > χ^2(1 − α; c − p)
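A minimal sketch of the Pearson lack-of-fit computation for grouped data; the counts and fitted probabilities below are hypothetical, for illustration only:

```python
# Pearson chi-square lack-of-fit for replicated (grouped) data.
# Hypothetical counts and fitted probabilities, for illustration.
n_i    = [50, 50, 50, 50]           # cases per covariate configuration
O_1    = [12, 20, 31, 40]           # observed # of 1's per configuration
pi_hat = [0.25, 0.42, 0.60, 0.78]   # fitted probabilities

X2 = 0.0
for n, o1, p in zip(n_i, O_1, pi_hat):
    E1, E0 = n * p, n * (1 - p)     # expected # of 1's and 0's
    O0 = n - o1                     # observed # of 0's
    X2 += (o1 - E1) ** 2 / E1 + (O0 - E0) ** 2 / E0
# Compare X2 to chi-square(c - p), with c = 4 configurations and p parameters
```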
24 Lack of Fit for Replicated Data: Deviance

H_0: log( E{Y_ij} / (1 − E{Y_ij}) ) = β_0 + β_1 X_i1 + ... + β_{p−1} X_{i,p−1}  vs
H_a: log( E{Y_ij} / (1 − E{Y_ij}) ) ≠ β_0 + β_1 X_i1 + ... + β_{p−1} X_{i,p−1}

c covariate configurations, each with n_i cases.
H_0: model of interest; H_a: saturated model.
- Model of interest: E{Y_ij} = π_i, with fitted values π̂_i
- Saturated model: E{Y_ij} = p_i, with fitted values p̂_i = Σ_j Y_ij / n_i

Test statistic: the LR statistic, also called the deviance.
The deviance of a saturated model always equals 0.
25 Lack of Fit for Replicated Data: Deviance

G^2 = DEV(X_0, X_1, ..., X_{p−1})
 = −2 [ log_e L(current model) − log_e L(saturated model) ]
 = −2 Σ_{i=1}^c { Σ_{j=1}^{n_i} Y_ij log_e π̂_i + (n_i − Σ_{j=1}^{n_i} Y_ij) log_e(1 − π̂_i) }
  + 2 Σ_{i=1}^c { Σ_{j=1}^{n_i} Y_ij log_e p̂_i + (n_i − Σ_{j=1}^{n_i} Y_ij) log_e(1 − p̂_i) }
 = −2 Σ_{i=1}^c { Σ_{j=1}^{n_i} Y_ij log_e( π̂_i / p̂_i ) + (n_i − Σ_{j=1}^{n_i} Y_ij) log_e( (1 − π̂_i) / (1 − p̂_i) ) }

Reject H_0 if G^2 > χ^2(1 − α; c − p).

The χ^2(1 − α; c − p) approximation can be poor. The closer the distribution is to Gaussian, and the closer the link is to identity, the better the approximation. Unlike with the LR test, the quality of the approximation does not improve with the sample size.
26 Note: Can Use Deviance for LR Test of Nested Models

log( E{Y_ij} / (1 − E{Y_ij}) ) = β_0 + β_1 X_i1 + ... + β_{p−1} X_{i,p−1}

Test H_0: β_1 = β_2 = ... = β_q = 0 versus H_a: not all β_1, β_2, ..., β_q = 0.

Likelihood Ratio test statistic:
G^2 = −2 log_e[ L(reduced model) / L(full model) ]
 = −2 [ log_e L(reduced model) − log_e L(full model) ]
 = −2 [ log_e L(reduced model) − log_e L(saturated model) ] + 2 [ log_e L(full model) − log_e L(saturated model) ]
 = Deviance(reduced model) − Deviance(full model)

Approximate distribution of G^2 for large n: χ^2_q. Reject H_0 if G^2 > χ^2(1 − α; q).
27 Quality of Fit: Individual Observations
28 Non-Replicated Data: Hosmer-Lemeshow Goodness of Fit

H_0: log( E{Y_i} / (1 − E{Y_i}) ) = β_0 + β_1 X_i1 + ... + β_{p−1} X_{i,p−1}  vs
H_a: log( E{Y_i} / (1 − E{Y_i}) ) ≠ β_0 + β_1 X_i1 + ... + β_{p−1} X_{i,p−1}

Group cases into c groups based on the values of the estimated probabilities π̂_i
(e.g., find c = 9 groups based on percentiles).
Apply the Pearson χ^2 test to the groups.
Reject H_0 if X^2 > χ^2(1 − α; c − 2); Hosmer and Lemeshow showed by simulation that this distribution is appropriate.
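A sketch of the Hosmer-Lemeshow grouping. To stay self-contained, the "fitted" probabilities below are taken from the data-generating model rather than from an actual fit, so the statistic should be consistent with a well-fitting model; all values are illustrative:

```python
import numpy as np

# Hosmer-Lemeshow: group cases by fitted probability, then apply a
# Pearson-type statistic to the groups.
rng = np.random.default_rng(3)
n, c = 1000, 10
pi_hat = rng.uniform(0.05, 0.95, size=n)   # stand-in for fitted probabilities
Y = rng.binomial(1, pi_hat)                # the model holds by construction

order = np.argsort(pi_hat)
groups = np.array_split(order, c)          # c groups by percentile of pi_hat

HL = 0.0
for g in groups:
    n_g = len(g)
    O1, E1 = Y[g].sum(), pi_hat[g].sum()   # observed and expected # of 1's
    O0, E0 = n_g - O1, n_g - E1
    HL += (O1 - E1) ** 2 / E1 + (O0 - E0) ** 2 / E0
# Compare HL to chi-square(c - 2); here df = 8
```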
29 Can Write Pearson χ^2 (But Not Use for Tests)

Test statistic:
X^2 = Σ_{i=1}^c Σ_{k=0}^1 (O_ik − E_ik)^2 / E_ik = Σ_{i=1}^c (O_i0 − E_i0)^2 / E_i0 + Σ_{i=1}^c (O_i1 − E_i1)^2 / E_i1

The corresponding quantities for non-replicated data (KNNL p. 591): c = n, n_i = 1,
O_i0 = 1 − Y_i, O_i1 = Y_i, E_i0 = 1 − π̂_i, E_i1 = π̂_i

Test statistic:
X^2 = Σ_{i=1}^n [ (1 − Y_i) − (1 − π̂_i) ]^2 / (1 − π̂_i) + Σ_{i=1}^n (Y_i − π̂_i)^2 / π̂_i
    = Σ_{i=1}^n (Y_i − π̂_i)^2 / (1 − π̂_i) + Σ_{i=1}^n (Y_i − π̂_i)^2 / π̂_i
    = Σ_{i=1}^n (Y_i − π̂_i)^2 / [ π̂_i (1 − π̂_i) ]
30 Can Write Deviance (But Not Use for Tests)

Test statistic G^2 = DEV(X_0, X_1, ..., X_{p−1}):
G^2 = −2 Σ_{i=1}^c { Σ_{j=1}^{n_i} Y_ij log( π̂_i / p̂_i ) + (n_i − Σ_{j=1}^{n_i} Y_ij) log( (1 − π̂_i) / (1 − p̂_i) ) }

The corresponding quantities for non-replicated data (KNNL p. 592): c = n, n_i = 1,
Σ_{j=1}^{n_i} Y_ij = Y_i, p̂_i = Σ_{j=1}^{n_i} Y_ij / n_i = Y_i

Test statistic:
G^2 = −2 Σ_{i=1}^n [ Y_i log( π̂_i / Y_i ) + (1 − Y_i) log( (1 − π̂_i) / (1 − Y_i) ) ]
    = −2 Σ_{i=1}^n [ Y_i log π̂_i + (1 − Y_i) log(1 − π̂_i) − Y_i log Y_i − (1 − Y_i) log(1 − Y_i) ]
    = −2 Σ_{i=1}^n [ Y_i log π̂_i + (1 − Y_i) log(1 − π̂_i) ]
(the last step uses the convention 0 log 0 = 0, so Y_i log Y_i = (1 − Y_i) log(1 − Y_i) = 0 for binary Y_i)
31 Diagnostics: Residuals

Logistic regression residuals:
e_i = 1 − π̂_i if Y_i = 1; e_i = −π̂_i if Y_i = 0

Pearson residual:
r_Pi = (Y_i − π̂_i) / sqrt( π̂_i (1 − π̂_i) )
- e_i divided by the standard error of Y_i
- Σ_{i=1}^n r_Pi^2 equals the non-replicated Pearson X^2

Studentized Pearson residual:
r*_Pi = (Y_i − π̂_i) / sqrt( π̂_i (1 − π̂_i) (1 − h_ii) )
- e_i divided by the SE of e_i; unit variance
- h_ii is the diagonal element of the hat matrix
  H = Ŵ^{1/2} X (X' Ŵ X)^{-1} X' Ŵ^{1/2}, where Ŵ = diag( π̂_i (1 − π̂_i) )
32 Diagnostics: Residuals

Deviance residuals:
d_i = sign(Y_i − π̂_i) sqrt( −2 [ Y_i log( π̂_i / Y_i ) + (1 − Y_i) log( (1 − π̂_i) / (1 − Y_i) ) ] )
    = sign(Y_i − π̂_i) sqrt( −2 [ Y_i log π̂_i + (1 − Y_i) log(1 − π̂_i) ] )
the signed square root of the contribution of Y_i to the model deviance.

Analysis of residuals:
- unknown distribution of the residuals under the true model
- plot residuals by predicted value; a flat lowess smooth of this plot suggests a good model
- other summaries as in linear regression: DFFITS, DFBETAS, Δχ^2, Δdev, Cook's distance (see KNNL p. 598)
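The two identities above (the sum of squared Pearson residuals equals the non-replicated X^2, and the sum of squared deviance residuals equals the model deviance) can be verified numerically; the responses and fitted probabilities below are hypothetical:

```python
import numpy as np

# Pearson and deviance residuals for binary data (hypothetical values)
Y  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
pi = np.array([0.8, 0.3, 0.6, 0.9, 0.2, 0.5, 0.4, 0.1])

r_pearson = (Y - pi) / np.sqrt(pi * (1 - pi))
d = np.sign(Y - pi) * np.sqrt(-2 * (Y * np.log(pi) + (1 - Y) * np.log(1 - pi)))

# Whole-sample quantities that the residuals decompose
X2 = np.sum((Y - pi) ** 2 / (pi * (1 - pi)))
deviance = -2 * np.sum(Y * np.log(pi) + (1 - Y) * np.log(1 - pi))
```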
33 Graphical Check of the Fit

Partition the observations into groups of covariate patterns x_i. Haldane (1956) recommended plotting the empirical logits
η̂_i = log( y_i / (n_i − y_i) )
against the covariate patterns x_i. The plot should be roughly linear if the model is appropriate for the data.
When all n_i = 1, or all n_i are small, one can group the data with nearby x values to make the plot.
34 Overdispersion
35 Overdispersion

Y ~ Binomial(n, π), with π = exp(X'β) / (1 + exp(X'β)),
implies E{Y} = nπ and Var{Y} = nπ(1 − π).
- Overdispersion: Var{Y} > nπ(1 − π)
- Underdispersion: Var{Y} < nπ(1 − π)

Mechanisms of overdispersion: suppose Y_1, Y_2, ..., Y_n are Bernoulli r.v. with E{Y_i} = π, and define Y = Σ_{i=1}^n Y_i. One can think of at least two situations where Y does not have a Binomial distribution (and therefore has a different variance).
36 Overdispersion from Correlation

Suppose Y_1, Y_2, ..., Y_n are not independent, and all pairs (Y_i, Y_j) have the same correlation ρ > 0:
Var(Y) = Cov( Σ_{i=1}^n Y_i, Σ_{i=1}^n Y_i )
       = Σ_{i=1}^n Var(Y_i) + Σ_{i≠j} Corr(Y_i, Y_j) sqrt(Var(Y_i)) sqrt(Var(Y_j))
       = nπ(1 − π) + n(n − 1) ρ π(1 − π) > nπ(1 − π)

The variance exceeds the variance of the Binomial distribution.
37 Overdispersion from Clustered Data

Suppose Y = Σ_{i=1}^n Y_i with Y | π ~ Binomial(n, π), where π is itself a random variable with E{π} = p and Var{π} > 0. Special case: π ~ Beta(α, β).

E{Y} = E{E{Y | π}} = np
Var{Y} = Var{E{Y | π}} + E{Var{Y | π}}
       = Var{nπ} + E{nπ(1 − π)}
       = n^2 Var{π} + n[ p − Var{π} − p^2 ]
       = np(1 − p) + n(n − 1) Var{π} > np(1 − p)

The variance exceeds the variance of the Binomial distribution.
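The clustered-data variance formula can be checked by simulation for the Beta special case; the parameter values below are illustrative:

```python
import numpy as np

# Beta-Binomial check: pi ~ Beta(2, 2), Y | pi ~ Binomial(n, pi)
rng = np.random.default_rng(4)
n, reps = 10, 200_000
alpha, beta = 2.0, 2.0
p = alpha / (alpha + beta)                   # E{pi} = 0.5
var_pi = p * (1 - p) / (alpha + beta + 1)    # Var{pi} = 0.05 for Beta(2, 2)

pi = rng.beta(alpha, beta, size=reps)
Y = rng.binomial(n, pi)

var_binomial = n * p * (1 - p)                          # 2.5 if Y were Binomial
var_formula = n * p * (1 - p) + n * (n - 1) * var_pi    # 2.5 + 90*0.05 = 7.0
var_sim = Y.var()
```

The simulated variance tracks np(1 − p) + n(n − 1)Var{π}, and clearly exceeds the Binomial value.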
38 Modeling Overdispersion

Introduce an additional parameter φ:
E{Y_i} = n_i π_i, Var{Y_i} = φ n_i π_i (1 − π_i)

When φ ≠ 1, Y_i follows a quasi-binomial distribution. The distribution is characterized by its expectation and variance; the probability distribution function is unspecified.

φ̂ = χ^2 / (n − p), where
- χ^2 is the Pearson lack-of-fit statistic (= the sum of squared Pearson residuals with non-replicated data)
- p is the number of parameters in the model

φ̂ >> 1 indicates evidence of overdispersion. Since φ does not affect E{Y_i}, modeling overdispersion does not change β̂; SE{β̂} is multiplied by sqrt(φ̂).
39 Comparing Nested Models in Presence of Overdispersion

Regular likelihood-based approaches (e.g., LRT, AIC) are not applicable. An F test approximates the deviance-based LR test:
F = [ (D_reduced − D_full) / (df_reduced − df_full) ] / φ̂  ~approx, under H_0,  F_{df_reduced − df_full, df_full}

Assumes roughly equal covariate classes.

Modeling strategy:
- fit the full model (with all predictors)
- estimate φ̂
- compare nested models with the F test to reduce the number of predictors
40 Prediction and Classification
41 Prediction of the Mean

Point estimate for the link π̂'_h: π̂'_h = X'_h b
Point estimate for the response π̂_h: π̂_h = exp(π̂'_h) / (1 + exp(π̂'_h)) = exp(X'_h b) / (1 + exp(X'_h b))

Interval estimate for the link π̂'_h:
s^2{π̂'_h} = X'_h s^2{b} X_h
(1 − α) CI for the link: π̂'_h ± z_{1−α/2} s{π̂'_h} = (L, U)

Approximate interval estimate for the response π_h:
(1 − α) CI: ( exp(L)/(1 + exp(L)), exp(U)/(1 + exp(U)) )

Use Bonferroni if multiple X_h are of interest.
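A sketch of the interval computation: build the CI on the link (logit) scale, then transform the endpoints. The estimates b and the covariance matrix s^2{b} below are hypothetical stand-ins for MLE output:

```python
import math

# Hypothetical MLE output, for illustration only
b = [-1.2, 0.5]                         # b0, b1
s2_b = [[0.04, -0.01], [-0.01, 0.02]]   # estimated covariance s^2{b}
x_h = [1.0, 2.0]                        # X_h = (1, X_h1)
z = 1.96                                # z_{1-alpha/2} for a 95% interval

eta_hat = sum(xj * bj for xj, bj in zip(x_h, b))     # link estimate X_h' b
var_eta = sum(x_h[i] * s2_b[i][j] * x_h[j]           # X_h' s^2{b} X_h
              for i in range(2) for j in range(2))
L = eta_hat - z * math.sqrt(var_eta)
U = eta_hat + z * math.sqrt(var_eta)

expit = lambda t: math.exp(t) / (1 + math.exp(t))
ci = (expit(L), expit(U))               # CI for the response pi_h
```

Because the logistic transform is monotone, the transformed endpoints bracket the point estimate expit(eta_hat) and always stay inside (0, 1).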
42 Measures of Agreement

Have N observations. Consider all pairs with distinct responses (one with Y = 1, one with Y = 0); in this example, t = 224 such pairs.

Compare the predicted probabilities within each pair:
- Concordant if π̂_{Y=1} > π̂_{Y=0}
- Discordant if π̂_{Y=1} < π̂_{Y=0}
- Tie if π̂_{Y=1} = π̂_{Y=0}

Measures of agreement:
- Somers' D: (#C − #D)/t
- Goodman-Kruskal Gamma: (#C − #D)/(#C + #D)
- Kendall's Tau-a: (#C − #D)/(0.5 N(N − 1))
- c: (#C + 0.5(t − #C − #D))/t
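The four measures of agreement can be computed directly from the concordant and discordant pair counts. A sketch on toy data (the t = 224 example from the slide is not reproduced here):

```python
# Toy responses and fitted probabilities, for illustration only
Y      = [1, 1, 1, 0, 0, 0, 0]
pi_hat = [0.9, 0.6, 0.3, 0.8, 0.4, 0.3, 0.1]

ones  = [p for y, p in zip(Y, pi_hat) if y == 1]
zeros = [p for y, p in zip(Y, pi_hat) if y == 0]

t = len(ones) * len(zeros)   # number of pairs with distinct responses
C = sum(p1 > p0 for p1 in ones for p0 in zeros)   # concordant pairs
D = sum(p1 < p0 for p1 in ones for p0 in zeros)   # discordant pairs

N = len(Y)
somers_d = (C - D) / t
gamma    = (C - D) / (C + D)
tau_a    = (C - D) / (0.5 * N * (N - 1))
c_stat   = (C + 0.5 * (t - C - D)) / t            # ties count half
```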
43 Prediction of a New Observation (i.e., Classification)

Choose a cutoff c ∈ (0, 1):
Ŷ_h = 1 if π̂_h > c; Ŷ_h = 0 if π̂_h ≤ c

Sensitivity: (# predicted 1 & true 1) / (# true 1) = Σ_{i=1}^n Ŷ_i Y_i / Σ_{i=1}^n Y_i
Specificity: (# predicted 0 & true 0) / (# true 0) = Σ_{i=1}^n (1 − Ŷ_i)(1 − Y_i) / Σ_{i=1}^n (1 − Y_i)

Vary the cutoff c ∈ (0, 1), and choose c to optimize sensitivity and specificity.
44 ROC Curve for Classification

Vary c, and plot sensitivity against 1 − specificity. Evaluate models by the area under the curve.
(Sensitivity: fraction of true '1' predicted as '1'; 1 − specificity: fraction of true '0' predicted as '1'.)
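A sketch of the ROC construction: vary the cutoff, record the points (1 − specificity, sensitivity), and integrate by the trapezoidal rule. The data are toy values for illustration; with ties handled linearly, the area equals the concordance statistic c:

```python
# Toy responses and fitted probabilities, for illustration only
Y      = [1, 1, 1, 0, 0, 0, 0]
pi_hat = [0.9, 0.6, 0.3, 0.8, 0.4, 0.3, 0.1]

n1 = sum(Y)
n0 = len(Y) - n1
cutoffs = sorted(set(pi_hat), reverse=True)

points = [(0.0, 0.0)]
for c in cutoffs:
    pred = [p >= c for p in pi_hat]
    tpr = sum(1 for yh, y in zip(pred, Y) if yh and y == 1) / n1  # sensitivity
    fpr = sum(1 for yh, y in zip(pred, Y) if yh and y == 0) / n0  # 1 - specificity
    points.append((fpr, tpr))

# Area under the curve by the trapezoidal rule
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(points, points[1:]))
```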
45 Evaluation of the Predictive Ability of the Model

The area under the ROC curve can be used to compare models:
- Area = 1: perfect classification
- Area = 0.5: random classification

Classification on the training set is overly optimistic. Use cross-validation to construct a more accurate ROC curve.
46 Variable Selection
47 Automatic Variable Selection

Exhaustive search. Minimize:
- −2 log_e L(b)
- AIC_p = −2 log_e L(b) + 2p
- BIC_p = −2 log_e L(b) + p log_e(n)

Heuristic search: forward selection, backward elimination, or stepwise selection based on the Wald statistic and the Normal distribution.
48 Variable Selection Should Be Done as Part of Cross-Validation

Example from Simon et al., JNCI. Simulated data with no structure:
- 20 observations with random labels
- 6,000 possible but unrelated predictors
- repeated 200 times

Estimated predictive accuracy using:
- no cross-validation
- selecting features on the full dataset, then using cross-validation
- selecting features at each step of cross-validation
49 Variable Selection Should Be Done as Part of Cross-Validation

Example from Simon et al., Journal of the National Cancer Institute.
[Figure: proportion of simulated data sets vs. number of misclassifications, under three strategies: no cross-validation (resubstitution method), cross-validation after gene selection, and cross-validation before gene selection. Simulated 20 samples with random labels + 6,000 genes, repeated 200 times.]

Conclusion: incorporating the selection of predictors within the cross-validation procedure is key.
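The Simon et al. point can be sketched at reduced scale (fewer predictors, a single replicate, a deliberately simple one-feature classifier; all sizes and choices are illustrative, not the paper's setup): selecting the best feature on the full dataset before cross-validating tends to produce optimistic accuracy on random labels, while re-selecting within each fold does not.

```python
import numpy as np

# Random labels, many unrelated features (scaled down from 20 x 6000)
rng = np.random.default_rng(5)
n, m = 20, 2000
X = rng.normal(size=(n, m))
y = np.array([0] * 10 + [1] * 10)

def best_feature(Xtr, ytr):
    """Feature with the largest |mean difference| between the two classes."""
    diff = Xtr[ytr == 1].mean(axis=0) - Xtr[ytr == 0].mean(axis=0)
    return int(np.argmax(np.abs(diff)))

def predict(Xtr, ytr, xte, j):
    """Classify a test case by the nearer class mean on feature j."""
    m0, m1 = Xtr[ytr == 0, j].mean(), Xtr[ytr == 1, j].mean()
    return int(abs(xte[j] - m1) < abs(xte[j] - m0))

# Biased: select the feature ONCE on all the data, then leave-one-out CV
j_all = best_feature(X, y)
acc_biased = np.mean([predict(np.delete(X, i, 0), np.delete(y, i), X[i], j_all) == y[i]
                      for i in range(n)])

# Honest: re-select the feature inside every leave-one-out fold
acc_honest = np.mean([predict(np.delete(X, i, 0), np.delete(y, i), X[i],
                              best_feature(np.delete(X, i, 0), np.delete(y, i))) == y[i]
                      for i in range(n)])
```

With random labels, the honest accuracy hovers around chance, while the biased procedure reuses information from the held-out cases and typically looks much better.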
More informationFinal Review. Yang Feng. Yang Feng (Columbia University) Final Review 1 / 58
Final Review Yang Feng http://www.stat.columbia.edu/~yangfeng Yang Feng (Columbia University) Final Review 1 / 58 Outline 1 Multiple Linear Regression (Estimation, Inference) 2 Special Topics for Multiple
More informationLikelihoods for Generalized Linear Models
1 Likelihoods for Generalized Linear Models 1.1 Some General Theory We assume that Y i has the p.d.f. that is a member of the exponential family. That is, f(y i ; θ i, φ) = exp{(y i θ i b(θ i ))/a i (φ)
More informationGeneralized linear models for binary data. A better graphical exploratory data analysis. The simple linear logistic regression model
Stat 3302 (Spring 2017) Peter F. Craigmile Simple linear logistic regression (part 1) [Dobson and Barnett, 2008, Sections 7.1 7.3] Generalized linear models for binary data Beetles dose-response example
More informationIntroduction to the Logistic Regression Model
CHAPTER 1 Introduction to the Logistic Regression Model 1.1 INTRODUCTION Regression methods have become an integral component of any data analysis concerned with describing the relationship between a response
More informationRegression and Multivariate Data Analysis Summary
Regression and Multivariate Data Analysis Summary Rebecca Sela February 27, 2006 In regression, one models a response variable (y) using predictor variables (x 1,...x n ). Such a model can be used for
More informationAnalysis of Categorical Data. Nick Jackson University of Southern California Department of Psychology 10/11/2013
Analysis of Categorical Data Nick Jackson University of Southern California Department of Psychology 10/11/2013 1 Overview Data Types Contingency Tables Logit Models Binomial Ordinal Nominal 2 Things not
More informationLecture 14: Introduction to Poisson Regression
Lecture 14: Introduction to Poisson Regression Ani Manichaikul amanicha@jhsph.edu 8 May 2007 1 / 52 Overview Modelling counts Contingency tables Poisson regression models 2 / 52 Modelling counts I Why
More informationModelling counts. Lecture 14: Introduction to Poisson Regression. Overview
Modelling counts I Lecture 14: Introduction to Poisson Regression Ani Manichaikul amanicha@jhsph.edu Why count data? Number of traffic accidents per day Mortality counts in a given neighborhood, per week
More information8 Nominal and Ordinal Logistic Regression
8 Nominal and Ordinal Logistic Regression 8.1 Introduction If the response variable is categorical, with more then two categories, then there are two options for generalized linear models. One relies on
More informationBinary Regression. GH Chapter 5, ISL Chapter 4. January 31, 2017
Binary Regression GH Chapter 5, ISL Chapter 4 January 31, 2017 Seedling Survival Tropical rain forests have up to 300 species of trees per hectare, which leads to difficulties when studying processes which
More informationGauge Plots. Gauge Plots JAPANESE BEETLE DATA MAXIMUM LIKELIHOOD FOR SPATIALLY CORRELATED DISCRETE DATA JAPANESE BEETLE DATA
JAPANESE BEETLE DATA 6 MAXIMUM LIKELIHOOD FOR SPATIALLY CORRELATED DISCRETE DATA Gauge Plots TuscaroraLisa Central Madsen Fairways, 996 January 9, 7 Grubs Adult Activity Grub Counts 6 8 Organic Matter
More informationStat/F&W Ecol/Hort 572 Review Points Ané, Spring 2010
1 Linear models Y = Xβ + ɛ with ɛ N (0, σ 2 e) or Y N (Xβ, σ 2 e) where the model matrix X contains the information on predictors and β includes all coefficients (intercept, slope(s) etc.). 1. Number of
More informationSTAC51: Categorical data Analysis
STAC51: Categorical data Analysis Mahinda Samarakoon April 6, 2016 Mahinda Samarakoon STAC51: Categorical data Analysis 1 / 25 Table of contents 1 Building and applying logistic regression models (Chap
More informationSCHOOL OF MATHEMATICS AND STATISTICS. Linear and Generalised Linear Models
SCHOOL OF MATHEMATICS AND STATISTICS Linear and Generalised Linear Models Autumn Semester 2017 18 2 hours Attempt all the questions. The allocation of marks is shown in brackets. RESTRICTED OPEN BOOK EXAMINATION
More informationGeneralized Linear Models
Generalized Linear Models Lecture 3. Hypothesis testing. Goodness of Fit. Model diagnostics GLM (Spring, 2018) Lecture 3 1 / 34 Models Let M(X r ) be a model with design matrix X r (with r columns) r n
More informationSection Poisson Regression
Section 14.13 Poisson Regression Timothy Hanson Department of Statistics, University of South Carolina Stat 705: Data Analysis II 1 / 26 Poisson regression Regular regression data {(x i, Y i )} n i=1,
More informationPoisson regression 1/15
Poisson regression 1/15 2/15 Counts data Examples of counts data: Number of hospitalizations over a period of time Number of passengers in a bus station Blood cells number in a blood sample Number of typos
More informationGeneralized linear models
Generalized linear models Douglas Bates November 01, 2010 Contents 1 Definition 1 2 Links 2 3 Estimating parameters 5 4 Example 6 5 Model building 8 6 Conclusions 8 7 Summary 9 1 Generalized Linear Models
More information11. Generalized Linear Models: An Introduction
Sociology 740 John Fox Lecture Notes 11. Generalized Linear Models: An Introduction Copyright 2014 by John Fox Generalized Linear Models: An Introduction 1 1. Introduction I A synthesis due to Nelder and
More informationGeneralized Linear Models: An Introduction
Applied Statistics With R Generalized Linear Models: An Introduction John Fox WU Wien May/June 2006 2006 by John Fox Generalized Linear Models: An Introduction 1 A synthesis due to Nelder and Wedderburn,
More information7. Assumes that there is little or no multicollinearity (however, SPSS will not assess this in the [binary] Logistic Regression procedure).
1 Neuendorf Logistic Regression The Model: Y Assumptions: 1. Metric (interval/ratio) data for 2+ IVs, and dichotomous (binomial; 2-value), categorical/nominal data for a single DV... bear in mind that
More informationThe material for categorical data follows Agresti closely.
Exam 2 is Wednesday March 8 4 sheets of notes The material for categorical data follows Agresti closely A categorical variable is one for which the measurement scale consists of a set of categories Categorical
More informationDiscrete Multivariate Statistics
Discrete Multivariate Statistics Univariate Discrete Random variables Let X be a discrete random variable which, in this module, will be assumed to take a finite number of t different values which are
More informationLecture 3.1 Basic Logistic LDA
y Lecture.1 Basic Logistic LDA 0.2.4.6.8 1 Outline Quick Refresher on Ordinary Logistic Regression and Stata Women s employment example Cross-Over Trial LDA Example -100-50 0 50 100 -- Longitudinal Data
More informationParametric Modelling of Over-dispersed Count Data. Part III / MMath (Applied Statistics) 1
Parametric Modelling of Over-dispersed Count Data Part III / MMath (Applied Statistics) 1 Introduction Poisson regression is the de facto approach for handling count data What happens then when Poisson
More informationRegression Model Building
Regression Model Building Setting: Possibly a large set of predictor variables (including interactions). Goal: Fit a parsimonious model that explains variation in Y with a small set of predictors Automated
More informationLecture 13: More on Binary Data
Lecture 1: More on Binary Data Link functions for Binomial models Link η = g(π) π = g 1 (η) identity π η logarithmic log π e η logistic log ( π 1 π probit Φ 1 (π) Φ(η) log-log log( log π) exp( e η ) complementary
More informationHomework 5: Answer Key. Plausible Model: E(y) = µt. The expected number of arrests arrests equals a constant times the number who attend the game.
EdPsych/Psych/Soc 589 C.J. Anderson Homework 5: Answer Key 1. Probelm 3.18 (page 96 of Agresti). (a) Y assume Poisson random variable. Plausible Model: E(y) = µt. The expected number of arrests arrests
More informationMultinomial Logistic Regression Models
Stat 544, Lecture 19 1 Multinomial Logistic Regression Models Polytomous responses. Logistic regression can be extended to handle responses that are polytomous, i.e. taking r>2 categories. (Note: The word
More informationLecture 5: LDA and Logistic Regression
Lecture 5: and Logistic Regression Hao Helen Zhang Hao Helen Zhang Lecture 5: and Logistic Regression 1 / 39 Outline Linear Classification Methods Two Popular Linear Models for Classification Linear Discriminant
More informationSTAT 526 Spring Final Exam. Thursday May 5, 2011
STAT 526 Spring 2011 Final Exam Thursday May 5, 2011 Time: 2 hours Name (please print): Show all your work and calculations. Partial credit will be given for work that is partially correct. Points will
More informationAssociation studies and regression
Association studies and regression CM226: Machine Learning for Bioinformatics. Fall 2016 Sriram Sankararaman Acknowledgments: Fei Sha, Ameet Talwalkar Association studies and regression 1 / 104 Administration
More informationGeneralized Linear Models
York SPIDA John Fox Notes Generalized Linear Models Copyright 2010 by John Fox Generalized Linear Models 1 1. Topics I The structure of generalized linear models I Poisson and other generalized linear
More informationLogistic Regression. Advanced Methods for Data Analysis (36-402/36-608) Spring 2014
Logistic Regression Advanced Methods for Data Analysis (36-402/36-608 Spring 204 Classification. Introduction to classification Classification, like regression, is a predictive task, but one in which the
More informationGeneralized Linear Modeling - Logistic Regression
1 Generalized Linear Modeling - Logistic Regression Binary outcomes The logit and inverse logit interpreting coefficients and odds ratios Maximum likelihood estimation Problem of separation Evaluating
More informationIntroduction to Logistic Regression
Misclassification 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.0 0.2 0.4 0.6 0.8 1.0 Cutoff Introduction to Logistic Regression Problem & Data Overview Primary Research Questions: 1. What skills are important
More informationSTAT5044: Regression and Anova
STAT5044: Regression and Anova Inyoung Kim 1 / 18 Outline 1 Logistic regression for Binary data 2 Poisson regression for Count data 2 / 18 GLM Let Y denote a binary response variable. Each observation
More informationSTA 4504/5503 Sample Exam 1 Spring 2011 Categorical Data Analysis. 1. Indicate whether each of the following is true (T) or false (F).
STA 4504/5503 Sample Exam 1 Spring 2011 Categorical Data Analysis 1. Indicate whether each of the following is true (T) or false (F). (a) T In 2 2 tables, statistical independence is equivalent to a population
More informationLecture 2: Poisson and logistic regression
Dankmar Böhning Southampton Statistical Sciences Research Institute University of Southampton, UK S 3 RI, 11-12 December 2014 introduction to Poisson regression application to the BELCAP study introduction
More informationLogistic Regression. James H. Steiger. Department of Psychology and Human Development Vanderbilt University
Logistic Regression James H. Steiger Department of Psychology and Human Development Vanderbilt University James H. Steiger (Vanderbilt University) Logistic Regression 1 / 38 Logistic Regression 1 Introduction
More informationSTAT 7030: Categorical Data Analysis
STAT 7030: Categorical Data Analysis 5. Logistic Regression Peng Zeng Department of Mathematics and Statistics Auburn University Fall 2012 Peng Zeng (Auburn University) STAT 7030 Lecture Notes Fall 2012
More informationHigh-dimensional regression modeling
High-dimensional regression modeling David Causeur Department of Statistics and Computer Science Agrocampus Ouest IRMAR CNRS UMR 6625 http://www.agrocampus-ouest.fr/math/causeur/ Course objectives Making
More informationGeneralized Linear Models I
Statistics 203: Introduction to Regression and Analysis of Variance Generalized Linear Models I Jonathan Taylor - p. 1/16 Today s class Poisson regression. Residuals for diagnostics. Exponential families.
More informationBayesian Multivariate Logistic Regression
Bayesian Multivariate Logistic Regression Sean M. O Brien and David B. Dunson Biostatistics Branch National Institute of Environmental Health Sciences Research Triangle Park, NC 1 Goals Brief review of
More information