Regression With a Categorical Independent Variable


Regression With a Categorical Independent Variable. Lecture 10, November 5, 2008. ERSH 8320. Lecture #10 - 11/5/2008, Slide 1 of 54

Today's Lecture

Chapter 11: Regression with a single categorical independent variable. Coding procedures for analysis: dummy coding and effect coding. Relationship between regression with a categorical independent variable and other statistical methods.

Regression with Continuous Variables

Linear regression regresses a continuous-valued dependent variable, Y, onto a set of continuous-valued independent variables, X. The regression line gives the estimate of the mean of Y conditional on the values of X, or E(Y|X). But what happens when some or all independent variables are categorical in nature? Is the point of the regression still to determine E(Y|X) across the levels of X? Can't we just put the categorical variables into SPSS and push the "Continue" button?

Example Data Set

Neter (1996, p. 676): "The Kenton Food Company wished to test four different package designs for a new breakfast cereal. Twenty stores, with approximately equal sales volumes, were selected as the experimental units. Each store was randomly assigned one of the package designs, with each package design assigned to five stores. The stores were chosen to be comparable in location and sales volume. Other relevant conditions that could affect sales, such as price, amount and location of shelf space, and special promotional efforts, were kept the same for all of the stores in the experiment."

A Regular Regression?

[Scatter plot: Number of Cases Sold (10.00 to 30.00) against Package Type (1.00 to 4.00), with the fitted line Number of Cases Sold = 7.70 + 4.38 * package, R Square = 0.64.]

What is wrong with this picture?

Categorical Variables

Categorical variables commonly occur in research settings. Another term sometimes used for categorical variables is qualitative variables. A strict definition of a qualitative or categorical variable is a variable with a finite number of levels. Continuous (or quantitative) variables, alternatively, have infinitely many levels. Often this is assumed more than practiced: quantitative variables often have only countably many levels, and the level of precision of an instrument can limit the number of levels of a quantitative variable.

Research Design

Categorical variables can occur in many different research designs: Experimental research. Quasi-experimental research. Nonexperimental/observational research. Such variables can be used with regression for: Prediction. Explanation.

Analysis Specifics

Because of the nature of categorical variables, the emphasis of the regression is not on linear trends but on differences between means (of Y) at each level of the category. Not all categorical variables are ordered (e.g., cereal box type, gender, etc.). When a regression considers differences in the mean of the dependent variable, the type of analysis being conducted is commonly called an ANalysis Of VAriance (ANOVA). Combining categorical and continuous variables in the same regression is called ANalysis Of CoVAriance (ANCOVA; Chapters 14 and 15).

Example Variable: Two Categories

From Pedhazur (1997, p. 343): Assume that the data reported below were obtained in an experiment in which E represents an experimental group and C represents a control group.

                   E     C
                  20    10
                  18    12
                  17    11
                  17    15
                  13    17
ΣY                85    65
Ȳ                 17    13
Σ(Y - Ȳ)² = Σy²   26    34

Old School Statistics: The t-test

As you may recall from an earlier course on statistics, an easy way to determine whether the means of the two conditions differ significantly is to use a t-test with (n_1 + n_2 - 2) degrees of freedom.

H_0: \mu_1 = \mu_2
H_A: \mu_1 \neq \mu_2

t = \frac{\bar{Y}_1 - \bar{Y}_2}{\sqrt{\dfrac{\sum y_1^2 + \sum y_2^2}{n_1 + n_2 - 2}\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}

Old School Statistics: The t-test

t = \frac{17 - 13}{\sqrt{\dfrac{26 + 34}{5 + 5 - 2}\left(\dfrac{1}{5} + \dfrac{1}{5}\right)}} = \frac{4}{\sqrt{3}} = 2.31

From Excel ( =tdist(2.31,8,2) ), p = 0.0496. If we used a Type-I error rate of 0.05, we would reject the null hypothesis and conclude that the means of the two groups were significantly different. But what if we had more than two groups? This type of problem can be solved equivalently within the context of the General Linear Model.
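The t-test arithmetic on this slide is easy to verify in a few lines of code. This is an illustrative check (plain stdlib Python, not part of the original slides); the function name `pooled_t` is my own.

```python
import math

# Pedhazur (1997) example data: experimental (E) and control (C) groups
E = [20, 18, 17, 17, 13]
C = [10, 12, 11, 15, 17]

def pooled_t(y1, y2):
    """Two-sample t statistic with a pooled variance estimate."""
    n1, n2 = len(y1), len(y2)
    m1, m2 = sum(y1) / n1, sum(y2) / n2
    ss1 = sum((y - m1)**2 for y in y1)    # within-group sum of squares, 26
    ss2 = sum((y - m2)**2 for y in y2)    # within-group sum of squares, 34
    pooled = (ss1 + ss2) / (n1 + n2 - 2)  # pooled variance estimate
    return (m1 - m2) / math.sqrt(pooled * (1/n1 + 1/n2))

t = pooled_t(E, C)
print(round(t, 2))  # 2.31
```

The pooled variance is (26 + 34)/8 = 7.5, giving t = 4/sqrt(3), the same value shown on the slide.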

Coding Categorical Variables

When using categorical variables in regression, the levels of the categories must be recoded from their original values so that the regression model truly estimates the mean differences across levels of the categories. Several types of coding strategies are common: Dummy coding. Effect coding. Each type produces the same fit of the model (the same R²). The estimated regression parameters differ across coding types, reflecting the true difference in the approach taken by each type of coding. The choice of coding method does not depend on the type of research or on the purpose (explanation or prediction) of the analysis.

Definition: a code is "a set of symbols to which meanings can be assigned" (Pedhazur, 1997, p. 342). The assignment of symbols follows a rule (or set of rules) determined by the categories of the variable used. Typically, symbols represent the respective levels of a categorical variable. All entities with the same symbol are considered alike (or homogeneous) within that category level. Category levels must be determined prior to analysis. Some variables are obviously categorical - gender. Some variables are not so obviously categorical - political affiliation.

Dummy Coding

The most straightforward method of coding categorical variables is dummy coding. In dummy coding, one creates a set of variables that represent the membership of an observation in a given category level. If an observation is a member of a specific category level, it is given a value of 1 in that category level's variable. If an observation is not a member of a specific category, it is given a value of 0 in that category level's variable.

For each observation, no more than a single 1 will appear in the set of columns for that variable. The columns represent the predictor variables in a regression analysis, where the dependent variable is modeled as a function of these columns. Because of linear dependence with an intercept, one category-level column is often excluded from the analysis. Because all observations at a given category level have the same values across the set of predictors, the predicted value of the dependent variable, Y, will be identical for all observations within a category. The set of category columns (and a vector for an intercept) are then used as input to a regression model.

Dummy Coded Regression Example

   Y    X1    X2    X3   Group
  20     1     1     0   E
  18     1     1     0   E
  17     1     1     0   E
  17     1     1     0   E
  13     1     1     0   E
  10     1     0     1   C
  12     1     0     1   C
  11     1     0     1   C
  15     1     0     1   C
  17     1     0     1   C
Mean    15     1   0.5   0.5
SS     100     0   2.5   2.5

Σyx_2 = 10, Σyx_3 = -10

Dummy Coded Regression

The General Linear Model states that the estimated regression parameters are given by:

b = (X'X)^{-1} X'y

From the previous slide, you can see what our entries for X could be, but... notice that X_1 = X_2 + X_3. This linear dependency means that (X'X) is a singular matrix - no inverse exists. Using any combination of two of the three columns rids us of the linear dependency.

Dummy Coded Regression - X_2 and X_3

For our first example analysis, consider the regression of Y on X_2 and X_3 (no intercept): Y = b_2 X_2 + b_3 X_3 + e.

b_2 = 17, b_3 = 13
Σy² = 100
SS_res = Σ(Y - Ŷ)² = 60
SS_reg = 100 - 60 = 40
R² = 40/100 = 0.4

Dummy Coded Regression - X_2 and X_3

b_2 = 17 is the mean for the E category. b_3 = 13 is the mean for the C category. Without an intercept, the model is fairly easy to interpret. For more advanced models, an intercept will prove helpful in interpretation.

Dummy Coded Regression - X_1 and X_2

For our second example analysis, consider the regression of Y on X_1 and X_2: Y = a + b_2 X_2 + e.

a = 13, b_2 = 4
Σy² = 100
SS_res = Σ(Y - Ŷ)² = 60
SS_reg = 100 - 60 = 40
R² = 40/100 = 0.4

Dummy Coded Regression - X_1 and X_2

a = 13 is the mean for the C category. b_2 = 4 is the mean difference between the E category and the C category. The C category is called the reference category. For members of the C category: Ŷ = a + b_2 X_2 = 13 + 4(0) = 13. For members of the E category: Ŷ = a + b_2 X_2 = 13 + 4(1) = 17. With the intercept, the model parameters are now different from the first example. The fit of the model, however, is the same.
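A sketch of how the intercept-plus-dummy estimates arise: with a single 0/1 predictor, the one-predictor least-squares formulas recover the reference-group mean as the intercept and the mean difference as the slope. Illustrative stdlib Python, not from the original slides; variable names are my own.

```python
# Dummy-coded regression for two groups, fit by the closed-form
# least-squares formulas for a single predictor.
Y  = [20, 18, 17, 17, 13, 10, 12, 11, 15, 17]
X2 = [1]*5 + [0]*5          # dummy: 1 = E group, 0 = C (reference)

n = len(Y)
my, mx = sum(Y) / n, sum(X2) / n
b2 = (sum((x - mx) * (y - my) for x, y in zip(X2, Y))
      / sum((x - mx)**2 for x in X2))    # slope = E mean - C mean
a = my - b2 * mx                         # intercept = C (reference) mean
print(a, b2)  # 13.0 4.0

ss_res = sum((y - (a + b2 * x))**2 for x, y in zip(X2, Y))
ss_tot = sum((y - my)**2 for y in Y)
print(ss_res, round(1 - ss_res / ss_tot, 2))  # 60.0 0.4
```

Because the dummy takes only the values 0 and 1, the slope is forced to equal the difference in group means, which is exactly the slide's interpretation of b_2.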

Dummy Coded Regression - X_1 and X_3

For our third example analysis, consider the regression of Y on X_1 and X_3: Y = a + b_3 X_3 + e.

a = 17, b_3 = -4
Σy² = 100
SS_res = Σ(Y - Ŷ)² = 60
SS_reg = 100 - 60 = 40
R² = 40/100 = 0.4

Dummy Coded Regression - X_1 and X_3

a = 17 is the mean for the E category. b_3 = -4 is the mean difference between the C category and the E category. The E category is called the reference category. For members of the E category: Ŷ = a + b_3 X_3 = 17 - 4(0) = 17. For members of the C category: Ŷ = a + b_3 X_3 = 17 - 4(1) = 13. With the intercept, the model parameters are again different from the first example. The fit of the model, however, is the same.

Hypothesis Test of the Regression Coefficient

Because each model had the same value of R² and the same number of degrees of freedom for the regression (1), all hypothesis tests of the model parameters result in the same value of the test statistic:

F = \frac{R^2/k}{(1 - R^2)/(N - k - 1)} = \frac{0.4/1}{(1 - 0.4)/(10 - 1 - 1)} = 5.33

From Excel ( =fdist(5.33,1,8) ), p = 0.0496. If we used a Type-I error rate of 0.05, we would reject the null hypothesis and conclude that the regression coefficient in each analysis is significantly different from zero.
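The F statistic and its square-root relationship to the earlier t can be checked numerically. Illustrative Python; the helper name `f_from_r2` is my own, not from the slides.

```python
# Omnibus F statistic computed from R^2, with k predictors and N observations
def f_from_r2(r2, k, n):
    return (r2 / k) / ((1 - r2) / (n - k - 1))

F = f_from_r2(0.4, 1, 10)   # two-group example: R^2 = 0.4, k = 1, N = 10
print(round(F, 2))          # 5.33
print(round(F**0.5, 2))     # 2.31 -> sqrt(F) equals the two-group t
```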

Hypothesis Test of the Regression Coefficient

Recall from the t-test of the mean difference that t = 2.31. For the test of the coefficient, notice that F = t². Also notice that the p-values for the two hypothesis tests are the same, p = 0.0496. The test of the regression coefficient is equivalent to running a t-test when using a single categorical variable with two categories.

Breakfast Cereal Example

Generalizing the concept of dummy coding, we revisit our first example data set, the cereal experiment data. Recall that there were four different types of cereal boxes. A dummy coding scheme would involve creating four new column vectors, each representing observations from one box type. Just as in the two-category case, a linear dependency is created if we use all four variables. Therefore, we must choose which category to remove from the analysis.

One-Way Analysis of Variance

Just as was the case for the example with two categories, a multiple-category regression model with a single categorical independent variable has a direct link to a statistical test you may be familiar with. The regression model tests for mean differences across all pairings of category levels simultaneously. Testing for a difference between multiple groups equates to a one-way ANOVA model (for a model with a single categorical independent variable).

   Y    X1   X2   X3   X4   X5   Type
  11     1    1    0    0    0   1
  17     1    1    0    0    0   1
  16     1    1    0    0    0   1
  14     1    1    0    0    0   1
  15     1    1    0    0    0   1
  12     1    0    1    0    0   2
  10     1    0    1    0    0   2
  15     1    0    1    0    0   2
  19     1    0    1    0    0   2
  11     1    0    1    0    0   2
  23     1    0    0    1    0   3
  20     1    0    0    1    0   3
  18     1    0    0    1    0   3
  17     1    0    0    1    0   3
  19     1    0    0    1    0   3
  27     1    0    0    0    1   4
  33     1    0    0    0    1   4
  22     1    0    0    0    1   4
  26     1    0    0    0    1   4
  28     1    0    0    0    1   4

Breakfast Cereal Example

To make things interesting, let's drop X_5 from our analysis: Y = a + b_2 X_2 + b_3 X_3 + b_4 X_4 + e. Because X_5 (representing box type four) was omitted from our model, the estimated intercept parameter now represents the mean of box type four. All other parameters represent the difference between their respective category level and category level four with respect to the dependent variable.

a = 27.2, b_2 = -12.6, b_3 = -13.8, b_4 = -7.8

Breakfast Cereal Example

Therefore:

Ȳ_A = Ŷ_A = a + b_2(1) + b_3(0) + b_4(0) = 27.2 - 12.6 = 14.6
Ȳ_B = Ŷ_B = a + b_2(0) + b_3(1) + b_4(0) = 27.2 - 13.8 = 13.4
Ȳ_C = Ŷ_C = a + b_2(0) + b_3(0) + b_4(1) = 27.2 - 7.8 = 19.4
Ȳ_D = Ŷ_D = a + b_2(0) + b_3(0) + b_4(0) = 27.2
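The dummy-coded parameters above are just the group means re-expressed relative to the omitted (reference) group, which can be checked directly from the cereal data. Illustrative stdlib Python, not from the original slides; variable names are my own.

```python
# Kenton cereal data (Neter, 1996): sales by package design
sales = {
    1: [11, 17, 16, 14, 15],
    2: [12, 10, 15, 19, 11],
    3: [23, 20, 18, 17, 19],
    4: [27, 33, 22, 26, 28],   # reference (omitted dummy) category
}
means = {g: sum(v) / len(v) for g, v in sales.items()}

a = means[4]                                # intercept = reference-group mean
b = {g: means[g] - means[4] for g in (1, 2, 3)}
print(a)                                    # 27.2
print({g: round(d, 1) for g, d in b.items()})  # {1: -12.6, 2: -13.8, 3: -7.8}
```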

Hypothesis Test

To test that all means are equal to each other (H_0: \mu_1 = \mu_2 = ... = \mu_k) against the hypothesis that at least one mean differs (H_1: at least one \mu_j \neq \mu_{j'}), called an omnibus test, the same hypothesis test from before can be used:

F = \frac{R^2/k}{(1 - R^2)/(N - k - 1)}

Σy² = 746.55
SS_res = 158.4
SS_reg = 746.55 - 158.4 = 588.15
R² = 588.15/746.55 = 0.788

Hypothesis Tests

F = \frac{R^2/k}{(1 - R^2)/(N - k - 1)} = \frac{0.788/3}{(1 - 0.788)/(20 - 3 - 1)} = 19.80

From Excel ( =fdist(19.80,3,16) ), p = 0.00001. If we used a Type-I error rate of 0.05, we would reject the null hypothesis and conclude that at least one regression coefficient in this analysis is significantly different from zero. A regression coefficient of zero means zero difference between two means (the reference and the specific category being compared). All regression coefficients being zero means absolutely no difference between any of the means.

Effect Coding

Effect coding is a less straightforward method of coding categorical variables than dummy coding. In effect coding, one (again) creates a set of columns that represent the membership of an observation in a given category level. Like dummy coding, the total number of columns for a categorical variable is one less than the total number of category levels.

If an observation is a member of a specific category level, it is given a value of 1 in that category level's column. If an observation is not a member of a specific category and is not a member of the omitted category, it is given a value of 0 in that category level's column. If an observation is a member of the omitted category, it is given a value of -1 in every category level's column.

For each observation, no more than a single 1 will appear in the set of columns for that variable. The columns represent the predictor variables in a regression analysis, where the dependent variable is modeled as a function of these columns. Because all observations at a given category level have the same values across the set of predictors, the predicted value of the dependent variable, Y, will be identical for all observations within a category. The set of category columns (and a column for an intercept) are then used as input to a regression model.

Effect Coded Regression Example

   Y    X1    X2   Group
  20     1     1   E
  18     1     1   E
  17     1     1   E
  17     1     1   E
  13     1     1   E
  10     1    -1   C
  12     1    -1   C
  11     1    -1   C
  15     1    -1   C
  17     1    -1   C
Mean    15     1   0

Effect Coded Regression

The General Linear Model states that the estimated regression parameters are given by:

b = (X'X)^{-1} X'y

Effect Coded Regression - X_1 and X_2

Consider the regression of Y on X_1 and X_2: Y = a + b_2 X_2 + e.

a = 15, b_2 = 2
Σy² = 100
SS_res = Σ(Y - Ŷ)² = 60
SS_reg = 100 - 60 = 40
R² = 40/100 = 0.4

Effect Coded Regression - X_1 and X_2

a = 15 is the overall mean of the dependent variable across all categories. b_2 = 2 is called the effect of the experimental group. This effect represents the difference between the experimental group mean and the overall mean. For members of the E category: Ŷ = a + b_2 X_2 = 15 + 2(1) = 17. For members of the C category: Ŷ = a + b_2 X_2 = 15 + 2(-1) = 13. The fit of the model is the same as was found with the dummy coding from before.

The Fixed Effects Linear Model

Effect coding is built to estimate the fixed effects linear model:

Y_{ij} = \mu + \beta_j + \epsilon_{ij}

Y_{ij} is the value of the dependent variable for individual i in group/treatment/category j. \mu is the population (grand) mean. \beta_j is the effect of group/treatment/category j. \epsilon_{ij} is the error associated with the score of individual i in group/treatment/category j.

The Fixed Effects Linear Model

The fixed effects linear model states that a predicted score for an observation is a composite of the grand mean and the treatment effect of the group to which the observation belongs:

Y_{ij} = \mu + \beta_j + \epsilon_{ij}

For all category levels (G in total), the model has the following constraint:

\sum_{g=1}^{G} \beta_g = 0

The Fixed Effects Linear Model

This constraint means that the effect for the omitted category level (o) is equal to:

\beta_o = -\sum_{g \neq o} \beta_g = -\beta_1 - \beta_2 - ...

From the example, the effect for the control group is equal to: \beta_C = -\beta_E = -2. Just to verify: \beta_E + \beta_C = 2 + (-2) = 0.
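The zero-sum constraint for the two-group example can be verified directly from the data. Illustrative stdlib Python, not part of the original slides.

```python
# Effect coding for the two-group example: grand mean plus group effects
E = [20, 18, 17, 17, 13]
C = [10, 12, 11, 15, 17]

grand = (sum(E) + sum(C)) / (len(E) + len(C))  # overall (grand) mean
beta_E = sum(E) / len(E) - grand               # effect of the E group
beta_C = -beta_E                               # omitted category: effects sum to zero

print(grand, beta_E, beta_C)  # 15.0 2.0 -2.0
print(beta_E + beta_C)        # 0.0
```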

Hypothesis Test of the Regression Coefficient

Because each model had the same value of R² and the same number of degrees of freedom for the regression (1), all hypothesis tests of the model parameters result in the same value of the test statistic:

F = \frac{R^2/k}{(1 - R^2)/(N - k - 1)} = \frac{0.4/1}{(1 - 0.4)/(10 - 1 - 1)} = 5.33

From Excel ( =fdist(5.33,1,8) ), p = 0.0496. If we used a Type-I error rate of 0.05, we would reject the null hypothesis and conclude that the regression coefficient in each analysis is significantly different from zero.

Breakfast Cereal Example

Generalizing the concept of effect coding, we revisit the cereal experiment data. Recall that there were four different types of cereal boxes. An effect coding scheme would involve creating three new columns, each representing observations from one box type. The choice of omitted category level is arbitrary: any level can be omitted and you will get the same results. This is due to the equivalence of linear models under effect coding.

One-Way Analysis of Variance

Just as was the case for the example with two categories, a multiple-category regression model with a single categorical independent variable has a direct link to a statistical test you may be familiar with. The regression model tests for mean differences across all pairings of category levels simultaneously. Testing for a difference between multiple groups equates to a one-way ANOVA model (for a model with a single categorical independent variable).

   Y      X1    X2    X3    X4   Type
  11       1     1     0     0   1
  17       1     1     0     0   1
  16       1     1     0     0   1
  14       1     1     0     0   1
  15       1     1     0     0   1
  12       1     0     1     0   2
  10       1     0     1     0   2
  15       1     0     1     0   2
  19       1     0     1     0   2
  11       1     0     1     0   2
  23       1     0     0     1   3
  20       1     0     0     1   3
  18       1     0     0     1   3
  17       1     0     0     1   3
  19       1     0     0     1   3
  27       1    -1    -1    -1   4
  33       1    -1    -1    -1   4
  22       1    -1    -1    -1   4
  26       1    -1    -1    -1   4
  28       1    -1    -1    -1   4
Mean 18.65     1     0     0     0

Breakfast Cereal Example

Group means:

Group   Mean
1       14.6
2       13.4
3       19.4
4       27.2

We will omit the final category from our analysis.

Y_{ij} = \mu + \beta_j + \epsilon_{ij}

It's the Guess the Parameter Game

Group means:

Group   Mean
1       14.6
2       13.4
3       19.4
4       27.2

Grand mean: 18.65.

\mu = ?  \beta_1 = ?  \beta_2 = ?  \beta_3 = ?  \beta_4 = ?

It's the Guess the Parameter Game

Group means:

Group   Mean
1       14.6
2       13.4
3       19.4
4       27.2

Grand mean: 18.65.

\mu = 18.65 (the grand mean).
\beta_1 = 14.6 - 18.65 = -4.05
\beta_2 = 13.4 - 18.65 = -5.25
\beta_3 = 19.4 - 18.65 = 0.75
\beta_4 = -(-4.05) - (-5.25) - 0.75 = 27.2 - 18.65 = 8.55
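The "guessed" parameters can be computed mechanically from the raw data, including the zero-sum check on the effects. Illustrative stdlib Python, not from the original slides; variable names are my own.

```python
# Effect-coding parameters for the cereal data: mu is the grand mean,
# each beta_j is the deviation of group j's mean from the grand mean.
sales = {1: [11, 17, 16, 14, 15], 2: [12, 10, 15, 19, 11],
         3: [23, 20, 18, 17, 19], 4: [27, 33, 22, 26, 28]}
all_y = [y for v in sales.values() for y in v]
mu = sum(all_y) / len(all_y)
beta = {g: sum(v) / len(v) - mu for g, v in sales.items()}

print(mu)                                         # 18.65
print({g: round(b, 2) for g, b in beta.items()})  # {1: -4.05, 2: -5.25, 3: 0.75, 4: 8.55}
print(round(sum(beta.values()), 6) == 0)          # True -> effects sum to zero
```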

Breakfast Cereal Example

Therefore:

Ȳ_A = Ŷ_A = \mu + \beta_1 = 18.65 - 4.05 = 14.6
Ȳ_B = Ŷ_B = \mu + \beta_2 = 18.65 - 5.25 = 13.4
Ȳ_C = Ŷ_C = \mu + \beta_3 = 18.65 + 0.75 = 19.4
Ȳ_D = Ŷ_D = \mu + \beta_4 = 18.65 + 8.55 = 27.2

R² = 0.788

Hypothesis Test

To test that all means are equal to each other (H_0: \mu_1 = \mu_2 = ... = \mu_k) against the hypothesis that at least one mean differs (H_1: at least one \mu_j \neq \mu_{j'}), called an omnibus test, the same hypothesis test from before can be used:

Σy² = 746.55
SS_res = 158.40
SS_reg = 746.55 - 158.40 = 588.15
R² = 588.15/746.55 = 0.788

Hypothesis Tests

F = \frac{R^2/k}{(1 - R^2)/(N - k - 1)} = \frac{0.788/3}{(1 - 0.788)/(20 - 3 - 1)} = 19.803

From Excel ( =fdist(19.803,3,16) ), p = 0.00001. If we used a Type-I error rate of 0.05, we would reject the null hypothesis and conclude that at least one regression coefficient in this analysis is significantly different from zero. A regression coefficient of zero means zero difference between the mean of one category and the grand mean. All regression coefficients being zero means absolutely no difference between any of the means (all means are equal to the grand mean).
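The sums of squares and the omnibus F on these slides can be reproduced from the raw data; the fit is identical under either coding scheme, since SS_res and SS_tot do not depend on how the groups are coded. Illustrative stdlib Python, not from the original slides.

```python
# Omnibus F for the cereal data, built from sums of squares
sales = {1: [11, 17, 16, 14, 15], 2: [12, 10, 15, 19, 11],
         3: [23, 20, 18, 17, 19], 4: [27, 33, 22, 26, 28]}
all_y = [y for v in sales.values() for y in v]
grand = sum(all_y) / len(all_y)

ss_tot = sum((y - grand)**2 for y in all_y)                          # total SS
ss_res = sum((y - sum(v) / len(v))**2                                # within-group SS
             for v in sales.values() for y in v)
r2 = 1 - ss_res / ss_tot

k, n = 3, len(all_y)          # 3 coded predictors, 20 stores
F = (r2 / k) / ((1 - r2) / (n - k - 1))
print(round(ss_tot, 2), round(ss_res, 2))  # 746.55 158.4
print(round(r2, 3), round(F, 2))           # 0.788 19.8
```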

Multiple Comparisons

For a categorical independent variable, a statistically significant R² means a rejection of the null hypothesis:

H_0: \mu_1 = \mu_2 = ... = \mu_g

Note that rejection simply means that at least one of the equalities above is truly an inequality (\neq). To determine which means are not equal, one of the multiple comparison procedures must be applied.

Comparison Concerns

The topic of multiple comparisons brings up a wealth of concerns, from both philosophical and statistical points of view. Most concerns center on the potentially large number of post-hoc comparisons; for g groups there are \binom{g}{2} pairwise comparisons. The phrase "capitalization on chance" is frequently used to describe many of these concerns. Even with these concerns, most people still use multiple comparisons for information about their analysis. As with most other statistical techniques, knowing the limitations of a technique is often as important as knowing its results.

Final Thought

Regression with categorical variables can be accomplished through coding schemes. Differing ways of coding (or inclusion of certain coded column vectors) may change the interpretation of the model parameters, but will not change the overall fit of the model.

Next Time

Lab tonight: regression with a categorical IV. Homework: due Wednesday, 11/12, at the start of class. Next week: Chapter 12 - more than one categorical IV.