5:1LEC - BETWEEN-S FACTORIAL ANOVA


The single-factor Between-S design described in previous classes is only appropriate when there is just one independent variable or factor in the study. Often, however, researchers want to examine the effects of two or more factors at the same time. Such studies generally involve factorial designs, in which subjects are randomly or otherwise assigned to all combinations of the factors. For example, a social psychology study involving attitude change about exercise might expose 60 subjects to attitude change messages in one of six conditions defined by two levels of Expertise of the source of the message (Expert vs. Nonexpert) and three levels of Threat (Low vs. Medium vs. High). The 60 subjects would ideally be assigned equally to the 2 x 3 = 6 conditions (or cells) defined by the two factors, with 10 subjects in each cell.

                 Threat
   Expertise     Low   Med   High
   Expert        EL    EM    EH
   Nonexpert     NL    NM    NH

The factorial design has a number of benefits over single-factor designs. First, it makes efficient use of subjects. In this example, 10 subjects per cell would allow the comparison between Expert and Nonexpert (the main effect of Expertise) to be based on 30 subjects in each condition. Similarly, it would allow comparisons among the Low, Medium, and High levels of Threat (the main effect of Threat) to be based on 20 subjects per condition. Testing each of these effects in separate studies would require 2 x 60 = 120 subjects to achieve the same degree of power for both comparisons.

More importantly, however, the factorial design allows researchers to study interactions between factors. An interaction has a precise meaning in statistics: it means that the effect of one factor varies or differs across the levels of the other factor. In the present study, Threat might work differently for Expert and Nonexpert presentations.
For example, Threat might increase intention to exercise in the Expert condition but have no effect, or even decrease intention to exercise, in the Nonexpert condition. That is, the effect of Threat (how the Ms vary across the levels of Threat) would differ for the different levels (Expert vs. Nonexpert) of the Expertise factor. Interactions are often very important in psychological research, for both theoretical and practical reasons.

Factorial designs vary with respect to whether each factor is Between-Subjects or Within-Subjects. The persuasion study could be done completely Within-S if 10 subjects each heard 6 different messages, each targeting a different behaviour and using one of the six combinations of conditions; or if 60 subjects were tested on something relevant to the dependent variable (e.g., gullibility?), then formed into 10 blocks of 6 subjects each matched on gullibility (i.e., everyone in a block had similar scores), and finally one subject from each block was assigned randomly to each cell of the study. Alternatively, one factor could be Within-S and one Between-S. Here we consider only designs with both factors Between-S.

Factorial designs can involve more than two factors; the persuasion study, for example, could include Gender, with half the participants Female and half Male. The study would then involve 2 x 3 x 2 = 12 cells or conditions. This unit addresses only two-factor designs.

A final observation: although ANOVA is often associated with true experiments in which people are randomly assigned to conditions, factors can in fact be either experimental (e.g., Expertise and Threat) or non-experimental (e.g., Gender). The statistical analysis is the same; however, causal inferences are stronger for well-designed and well-executed experiments than for non-experimental studies. Poorly designed experiments are no better than non-experiments, and are sometimes even weaker sources for drawing valid causal inferences.
The formulas for two-factor Between-S designs are numbered 34 to 39 on our formula sheet.

We will also be using some formulas from the Between-S single-factor design, with appropriate modifications. The basic notation for factorial ANOVA is explained on pages 8.15 to 8.18 of the text. Briefly, the uppercase letters A and B stand for the two factors and for the number of levels of each factor. The lowercase letters a and b are variables that index the different levels of A and B; that is, a = 1, 2, ..., A and b = 1, 2, ..., B. To illustrate, M_23 would denote the mean for the 2nd level of A (a = 2) and the 3rd level of B (b = 3), and n_14 would be the sample size for the 1st level of A and the 4th level of B. And if factor A has 3 levels and factor B has 4 levels, the number of cells = A x B = 3 x 4 = 12.

Calculations for Factorial ANOVA

The default factorial ANOVA partitions SS_Total as follows (A x B is the interaction between A and B):

SS_Total = SS_A + SS_B + SS_AxB + SS_Error

Each of these SSs has a certain df associated with it and can be used to calculate four MSs, one of which is MS_Error, to be used as the denominator for three F tests. The remaining three MSs are each associated with specific hypotheses about the means and will serve as the numerators, one for each hypothesis. SS_A and SS_B will be used to test the effects of factors A and B, respectively, averaged over the levels of the other factor. These are called the Main Effects. SS_AxB will be used to test the significance of the interaction.

The calculations will be illustrated for a 2 x 4 factorial study of mistakes in a selective attention task with various distracting sounds. The sounds were of two Types (random Noise or Speech) and were played at four levels of Volume (Subthreshold, Audible, Normal Speech, and Shouting). A total of 24 subjects participated in the experiment, with 3 subjects assigned to each of the 2 x 4 = 8 cells. The marginal means and sample sizes appear in the following table; the cell means M_ab enter the calculations explained next.

                              Volume (B)
   Type (A)    1. Subthr   2. Audible   3. Speech   4. Shout       M_a.        n_a.
   Noise       n_11 = 3    n_12 = 3     n_13 = 3    n_14 = 3       M_1. = 4.0  12
   Speech      n_21 = 3    n_22 = 3     n_23 = 3    n_24 = 3       M_2. = 6.0  12
   M_.b        3.0         4.0          7.0         6.0            M_G = 5.0
   n_.b        6           6            6           6              N = 24      s_G = 2.80

SS_Total. To calculate SS_Total, we simply ignore the levels of both factors and think of the dataset as consisting of 24 individual observations with M_G = 5.0. We subtract M_G from each of the 24 ys, square the resulting deviations, and sum them up over all of the observations within a group, and then over all levels of the Volume and Type factors, represented by the summation sign (Σ) below. That is,

SS_Total = Σ(y_abi - M_G)^2 = (1-5.0)^2 + (4-5.0)^2 + (7-5.0)^2 + ... + (3-5.0)^2 + (5-5.0)^2 + (4-5.0)^2 + (3-5.0)^2 + (2-5.0)^2 + (1-5.0)^2 + ... + (6-5.0)^2 + (10-5.0)^2 + (8-5.0)^2 = 180.0

df_Total = N - 1 = 24 - 1 = 23

If the overall standard deviation of the entire set of data is known, as above, then:

SS_Total = (N - 1)s_G^2 = (24 - 1) x 7.83 ≈ 180.0

This represents the total variability in the 24 scores, which we want to partition into error, the main effect of Type, the main effect of Volume, and the interaction between Type and Volume. The df = N - 1 because we subtract one M_G from N observations.

SS_Error. As in the single-factor Between-S design, error is the variation of scores within each unique condition around the mean for that condition. In the factorial design, each unique condition is defined by a level of factor A (Type) and a level of factor B (Volume). Thus, there are 8 unique conditions, each with its own M_ab, and the error is the sum of the squared deviations of the three observations about M_ab, summed across the three observations, and then across the four levels of B and the two levels of A (i.e., over all 8 conditions). That is,

SS_Error = Σ(y_abi - M_ab)^2 = (1-4.0)^2 + (4-4.0)^2 + (7-4.0)^2 + ... + (3-4.0)^2 + (5-4.0)^2 + (4-4.0)^2 + (3-2.0)^2 + (2-2.0)^2 + (1-2.0)^2 + ... + (6-8.0)^2 + (10-8.0)^2 + (8-8.0)^2 = 60.0

= SS_11 + SS_12 + ... + SS_24 = Σ(n_ab - 1)s_ab^2

df_Error = Σ(n_ab - 1) = (n_11 - 1) + (n_12 - 1) + ... + (n_24 - 1) = (3-1) + ... = 16 = N - A x B = 24 - 2 x 4 = 24 - 8 = 16

SS Main Effects. To calculate the SS for the main effects of A and B, we treat the study as though the other factor did not exist. Averaged across the Volume factor, Type (or A) produces two means, each based on 12 observations; these are denoted in general as M_a.. For the first level of A (i.e., a = 1), M_1. = 4.0, and for the second level of A (i.e., a = 2), M_2. = 6.0. The period (.) in the subscript makes clear that the 1 and 2 subscripts refer to the levels of A, rather than B.
Similarly, averaged across the Type factor, Volume (or B) produces four means, each based on 6 observations; these are denoted in general as M_.b, giving M_.1 = 3.0, M_.2 = 4.0, M_.3 = 7.0, and M_.4 = 6.0. The period in the first position indicates that the subscripts denote the levels of factor B. We now calculate SS_A and SS_B in essentially the same manner as the single-factor SS for treatment. That is,

SS_A = Σ n_a.(M_a. - M_G)^2 = 12(4.0 - 5.0)^2 + 12(6.0 - 5.0)^2 = 12 x 1.0 + 12 x 1.0 = 24.0
df_A = A - 1 = 2 - 1 = 1

SS_B = Σ n_.b(M_.b - M_G)^2 = 6(3.0 - 5.0)^2 + 6(4.0 - 5.0)^2 + 6(7.0 - 5.0)^2 + 6(6.0 - 5.0)^2 = 6 x 4.0 + 6 x 1.0 + 6 x 4.0 + 6 x 1.0 = 60.0
df_B = B - 1 = 4 - 1 = 3
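The deviation formulas above translate directly into code. Here is a minimal Python sketch (NumPy assumed available); the raw scores are hypothetical, since the full data set is not reproduced in these notes, but the mechanics match the formulas:

```python
import numpy as np

# Hypothetical scores for a 2 x 4 Between-S design, n = 3 per cell
# (axes: level of A, level of B, observation within cell)
y = np.array([
    [[1, 4, 7], [2, 3, 4], [5, 6, 7], [3, 5, 4]],   # A = 1 (e.g., Noise)
    [[3, 2, 1], [4, 5, 6], [7, 9, 8], [6, 10, 8]],  # A = 2 (e.g., Speech)
], dtype=float)
A, B, n = y.shape

M_G = y.mean()                        # grand mean
M_ab = y.mean(axis=2, keepdims=True)  # cell means
M_a = y.mean(axis=(1, 2))             # row (A) marginal means
M_b = y.mean(axis=(0, 2))             # column (B) marginal means

SS_total = np.sum((y - M_G) ** 2)     # deviations from the grand mean
SS_error = np.sum((y - M_ab) ** 2)    # deviations from the cell means
SS_A = B * n * np.sum((M_a - M_G) ** 2)  # each M_a. is based on B*n = 12 scores
SS_B = A * n * np.sum((M_b - M_G) ** 2)  # each M_.b is based on A*n = 6 scores

print(y.size - 1, y.size - A * B, A - 1, B - 1)  # dfs: 23 16 1 3
```

In a balanced design the leftover variability, SS_total - SS_error - SS_A - SS_B, is the (nonnegative) interaction SS, as developed in the next section.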

SS Interaction. We later consider several ways to compute SS_AxB directly, but for now we simply calculate it by subtraction. SS_AxB is whatever variability is left over from Total after Error and the Main Effects of A and B are removed; that is,

SS_AxB = SS_Total - SS_Error - SS_A - SS_B = 180.0 - 60.0 - 24.0 - 60.0 = 36.0
df_AxB = df_Total - df_Error - df_A - df_B = 23 - 16 - 1 - 3 = 3

We now have everything needed to carry out the ANOVA for the Between-S two-factor design, as shown below. The results correspond exactly with those produced by SPSS.

   Source     SS      df    MS      F       F.05
   Type       24.0     1    24.00   6.40    4.49    Rej H0
   Volume     60.0     3    20.00   5.33    3.24    Rej H0
   T x V      36.0     3    12.00   3.20    3.24    ??
   Error      60.0    16     3.75
   Total     180.0    23

SPSS Analyses for Between-S Factorial Design

The standard way to enter data for the Between-S two-factor design is to enter three values for each case: one value represents the level of factor A, another value represents the level of factor B, and the final value represents the dependent variable in the study. The following syntax creates a data set with 24 rows and 3 values for each row or case, which is analyzed by the subsequent ANOVAs.

DATA LIST FREE / typ vol mis.
BEGIN DATA
...
END DATA.
GLM mis BY typ vol
 /PRINT = DESCR
 /PLOT = PROFILE(vol BY typ).

[Descriptive statistics output: Mean, Std. Deviation, and N for each typ x vol cell, with Totals.]
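The subtraction step and the resulting F ratios can be verified with a short Python sketch, using the SS and df values from the example above:

```python
# Sums of squares and dfs from the 2 x 4 example
SS_total, SS_error, SS_A, SS_B = 180.0, 60.0, 24.0, 60.0
df_total, df_error, df_A, df_B = 23, 16, 1, 3

# Interaction by subtraction
SS_AxB = SS_total - SS_error - SS_A - SS_B   # 36.0
df_AxB = df_total - df_error - df_A - df_B   # 3

# Mean squares and F ratios (MS_error is the common denominator)
MS_error = SS_error / df_error               # 3.75
F_A = (SS_A / df_A) / MS_error               # 6.40
F_B = (SS_B / df_B) / MS_error               # 5.33
F_AxB = (SS_AxB / df_AxB) / MS_error         # 3.20

print(SS_AxB, round(F_A, 2), round(F_B, 2), round(F_AxB, 2))
```

These reproduce the Type, Volume, and T x V rows of the ANOVA table above.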

Tests of Between-Subjects Effects

   Source             SS       df    MS       F        Notes
   Corrected Model   120.0(a)   7    17.14    4.57     SS_A + SS_B + SS_AxB
   Intercept         600.0      1   600.00  160.00     24 x (5.0 - 0)^2
   typ                24.0      1    24.00    6.40
   vol                60.0      3    20.00    5.33
   typ * vol          36.0      3    12.00    3.20     Sig. = .052
   Error              60.0     16     3.75
   Total             780.0     24                      Intercept SS + SS_Total
   Corrected Total   180.0     23                      SS_Total

   a R Squared = .667 (Adjusted R Squared = .521); R^2 = 120.0/180.0

The primary GLM results agree with our earlier calculations of the SSs and dfs. Dividing each SS by its df produces the four mean squares required to test our three hypotheses. MS_Error = 60.0 / 16 = 3.75, which serves as the denominator for the three critical tests. The notes above explain some of the secondary quantities calculated by GLM. The main effect tests lead to the following conclusions: Reject H0: Noise = Speech, and Reject H0: Vol-1 = Vol-2 = Vol-3 = Vol-4. In both cases, we accept the Ha that one or more of the equalities is false.

The conclusion about the interaction is somewhat ambiguous; the F is very close to significant, p = .052, and the pattern shown in the graph indicates an interaction. Specifically, the effect of Volume level is quite different for the Noise and Speech distractors, having a much more marked effect for Speech. Equivalently, the difference between Speech and Noise is more marked for some levels of Volume (i.e., 3 and 4) than for other levels of Volume (1 and 2). Moreover, closer examination of interaction analyses will reveal that the standard F test for the interaction can be insensitive to many observed interactions.

The following MANOVA analysis duplicates the various quantities calculated earlier and produced by GLM.

MANOVA mis BY typ(1 2) vol(1 4)
 /PRINT = CELL.

[Cell statistics output: FACTOR, CODE, Mean, Std. Dev., N, and 95 percent confidence interval for each typ x vol cell and for the entire sample.]

   Source of Variation    SS      DF    MS      F       Sig of F
   WITHIN CELLS           60.0    16     3.75
   typ                    24.0     1    24.00    6.40
   vol                    60.0     3    20.00    5.33
   typ BY vol             36.0     3    12.00    3.20    .052
   (Model)               120.0     7    17.14    4.57
   (Total)               180.0    23     7.83

   R-Squared = .667; Adjusted R-Squared = .521
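The borderline significance of the interaction can be checked against the F distribution directly. A quick sketch using SciPy (assumed to be available):

```python
from scipy.stats import f

# Interaction test from the example: F(3, 16) = 3.20
p_interaction = f.sf(3.20, 3, 16)   # survival function = P(F > 3.20)
print(round(p_interaction, 3))      # just above .05

# For comparison, the .05 critical value for df = (3, 16)
crit = f.ppf(0.95, 3, 16)
print(round(crit, 2))
```

Because the observed F (3.20) falls just short of the critical value, the p-value lands just above .05, matching the ambiguous conclusion discussed above.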

Using SPSS to Compute Various SSs

There are various ways to get SPSS to help with calculating the SSs for the Between-S factorial. One informative way is to use GLM's ability to compute and save predicted (and residual) values. The next few pages illustrate the process. The various scores being produced are shown after the final analysis. The actual ANOVAs are not shown.

GLM mis /SAVE PRED(mg).                Saves M_G as mg
COMPUTE tot = mis - mg.                y - M_G
COMPUTE tot2 = tot**2.
DESCR tot tot2 /STAT = SUM.            Sum of tot2 = 180.0 = SS_Total

GLM mis BY typ vol /SAVE PRED(mab).    Saves M_ab as mab
COMPUTE err = mis - mab.               y - M_ab
COMPUTE err2 = err**2.
DESCR err err2 /STAT = SUM.            Sum of err2 = 60.0 = SS_Error

GLM mis BY typ /SAVE PRED(ma).         Saves M_a as ma
COMPUTE amain = ma - mg.               M_a - M_G
COMPUTE amain2 = amain**2.
DESCR amain amain2 /STAT = SUM.        Sum of amain2 = 24.0 = SS_A = SS_Type

GLM mis BY vol /SAVE PRED(mb).         Saves M_b as mb
COMPUTE bmain = mb - mg.               M_b - M_G
COMPUTE bmain2 = bmain**2.
DESCR bmain bmain2 /STAT = SUM.        Sum of bmain2 = 60.0 = SS_B = SS_Volume

The following table shows the primary variables created by the preceding commands.

FORMAT mg tot mab err ma amain mb bmain (F5.1).
LIST typ vol mis mg tot mab err ma amain mb bmain.

[Listing of typ, vol, mis (y), mg (M_G), tot (y - M_G), mab (M_ab), err (y - M_ab), ma (M_a), amain (M_a - M_G), mb (M_b), and bmain (M_b - M_G) for all 24 cases; the sums of tot2, err2, amain2, and bmain2 reproduce SS_Total, SS_Error, SS_Type, and SS_Volume.]

Computing SS for the Interaction

Calculating the interaction by subtraction demonstrates that the interaction is variability that cannot be accounted for by the main effects or error, which is technically correct but perhaps of limited intuitive meaning. It is possible, however, to calculate the interaction in ways that reveal somewhat more clearly what SS_Interaction represents. Akin to the subtraction of SSs, for example, we could calculate the interaction by determining whether any variability is left in the cell means (the M_ab's) once the main effects are removed. Alternatively, we could determine whether the observed cell means differ from the cell means expected if there were only main effects and no interaction. Both approaches lead to the same conclusion.

Below are the 8 cell means (M_ab's) calculated earlier, along with the row (M_a.) and column (M_.b) means and the grand mean (M_G). Also, the effect of A for a = 1 and a = 2 is shown in the column headed M_a. - M_G, and the effect of B for b = 1 to b = 4 is shown in the row labelled M_.b - M_G. These are the effects that need to be subtracted from the cell means to remove the main effects. The subsequent two tables show the results of subtracting these main effects. The left-hand table shows the result of subtracting the main effects of factor B: the two original means for Volume 1 (i.e., b = 1) are adjusted by subtracting the B effect for Volume 1, which is -2.0. To illustrate, M_11 = 4.0 and M_11' = 4.0 - (-2.0) = 6.0, and M_21 = 2.0 and M_21' = 2.0 - (-2.0) = 4.0. The cell means for Volumes 2, 3, and 4 are adjusted by subtracting their Volume effects: -1.0, +2.0, and +1.0, respectively. The resulting cell means are denoted M_ab' to indicate that one effect has been adjusted. Note that when the main effect means for B are recalculated from these adjusted cell means, all the M_.b's = 5.0 = M_G; that is, there is no longer any main effect of B. The table to the right removes the main effects of A from the M_ab's in a similar manner. Now all the row and column means (i.e., the main effect means) equal M_G. If there were no interaction, then the 8 adjusted cell means would also equal M_G = 5.0. But they do not, indicating the presence of

interaction in the data. That is, there is variability due to the unique combination of specific levels of factor A and specific levels of factor B.

[Table of the 8 observed cell means (M_ab), with the marginal means M_a. = 4.0 and 6.0 and M_.b = 3.0, 4.0, 7.0, and 6.0, the grand mean M_G = 5.0, the A effects M_a. - M_G = -1.0 and +1.0, and the B effects M_.b - M_G = -2.0, -1.0, +2.0, and +1.0. Two further tables show the adjusted cell means: M_ab' = M_ab - (M_.b - M_G), which removes the main effect of B, and M_ab'' = M_ab' - (M_a. - M_G), which removes the main effect of A as well; in the latter table all marginal means equal M_G = 5.0.]

The final step in the calculation of SS_Interaction is to calculate the deviation of each adjusted cell mean from M_G, square the deviations, multiply by the number of observations in each cell (i.e., n_ab), and sum to get the total. The resulting deviations are shown in the next table; we discuss shortly what these deviations represent.

[Table of the deviations M_ab'' - M_G for the 8 cells.]

Therefore, SS_Interaction = 3 x (sum of the 8 squared deviations) = 3 x 12.0 = 36.0, the same value we obtained earlier by subtraction of SSs. The above operations are represented by the second formula 38 on the formula sheet.

The second approach to SS_Interaction is to calculate what the cell means would have been if there were only main effects and no interaction, and then determine how much the observed cell means (the M_ab's) deviate from these no-interaction cell means. These deviations, which equal those we just computed, are squared, multiplied by n_ab, and summed to obtain SS_Interaction. What this approach makes clear is that SS_Interaction represents how much the observed cell means deviate from those expected if there were no interaction. To determine the expected cell means given no interaction, we simply add the main effects of A and B to the grand mean, as illustrated in the table below; the deviations of the observed cell means (M_ab) from these predicted cell means agree with those above.

Predicted cell means with no interaction: M^_ab = M_G + (M_a. - M_G) + (M_.b - M_G)

                       Volume (B)
   Type (A)     1      2      3      4      M_a.
   Noise        2.0    3.0    6.0    5.0    4.0
   Speech       4.0    5.0    8.0    7.0    6.0
   M_.b         3.0    4.0    7.0    6.0    M_G = 5.0

[Table of the deviations M_ab - M^_ab for the 8 cells.]
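The two routes to SS_Interaction can be illustrated with a small Python sketch. The 2 x 2 table of cell means below is hypothetical (not the study's data), chosen simply to show that removing the main effects from the cell means and comparing observed cell means to the no-interaction predictions yield the same answer:

```python
import numpy as np

# Hypothetical 2 x 2 table of cell means, equal n per cell
M = np.array([[4.0, 8.0],
              [6.0, 4.0]])
n = 3                                  # observations per cell

M_G = M.mean()                         # grand mean
M_a = M.mean(axis=1, keepdims=True)    # row (factor A) means
M_b = M.mean(axis=0, keepdims=True)    # column (factor B) means

# Approach 1: remove both main effects from the cell means,
# then measure what is left relative to the grand mean
M_adj = M - (M_a - M_G) - (M_b - M_G)
SS_int_1 = n * np.sum((M_adj - M_G) ** 2)

# Approach 2: compare observed cell means to the
# no-interaction predictions M^_ab = M_G + A effect + B effect
M_hat = M_G + (M_a - M_G) + (M_b - M_G)
SS_int_2 = n * np.sum((M - M_hat) ** 2)

print(SS_int_1, SS_int_2)  # the two approaches agree exactly
```

The agreement is algebraic, not accidental: M_adj - M_G and M - M_hat are the same deviations written two ways.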

The critical thing to note about the predicted cell means above is that the main effects of A are exactly the same at every level of B. That is, the difference between Noise and Speech is exactly 2.0 units for Volumes 1, 2, 3, and 4, indicating zero interaction. This could also be stated as: the main effect deviations for the two levels of A (i.e., -1 and +1) are exactly the same at every level of Volume. Equivalently, the main effects of B are exactly the same at every level of A. That is, the main effect deviations for Volume (-2, -1, +2, +1) are duplicated exactly for the Noise and Speech conditions. For the Noise condition, the deviations of 2, 3, 6, and 5 from the row mean of 4.0 are -2, -1, +2, and +1. For the Speech condition, the deviations of 4, 5, 8, and 7 from the row mean of 6.0 are -2, -1, +2, and +1, exactly the same as for the Noise condition and the main effect of Volume. Another way of saying this is that a graph of these predicted M_ab's would produce perfectly parallel lines, as we'll see shortly, indicating no interaction. The interaction deviations in the right table and the earlier table indicate how far the observed cell means are from perfectly parallel lines (i.e., from zero interaction). The above operations are given as the first formula 38 on the formula sheet.

Getting SPSS to Compute SS Interaction

We earlier did much of the work necessary for GLM to compute SS_Interaction when we computed the main effects and error using GLM. Reviewing the earlier commands will show that we had GLM save predicted scores for M_G, M_ab, M_a, and M_b, and that we further calculated M_a - M_G (called amain) and M_b - M_G (called bmain). These are all the elements needed to compute the interaction according to the two procedures described above: first, removing the main effects, and second, generating predicted cell means from main effects only.
Two ways are shown to obtain predicted cell means with only main effects: one using COMPUTE statements, and a second using the /DESIGN option on GLM. The default factorial GLM tests the main effects and the interaction and corresponds to /DESIGN typ vol typ BY vol. Below, an explicit /DESIGN typ vol instructs GLM to fit main effects only and to use only those main effects to generate predicted scores.

COMPUTE mabsubmain = mab - amain - bmain.   subtract main effects from cell means
COMPUTE intone = mabsubmain - mg.
COMPUTE intone2 = intone**2.
DESCR intone2 /STAT = SUM.                  Sum of intone2 = 36.0 = SS_Interaction

COMPUTE mabmain = mg + amain + bmain.       add main effects to grand mean
COMPUTE inttwo = mab - mabmain.
COMPUTE inttwo2 = inttwo**2.
DESCR inttwo2 /STAT = SUM.                  Sum of inttwo2 = 36.0 = SS_Interaction

*alternative way to get expected cell means based on just main effects.
GLM mis BY typ vol
 /SAVE PRED(mabmaintwo)
 /DESIGN typ vol.

[Main-effects-only ANOVA table: Corrected Model, Intercept, typ, vol, Error, Total, Corrected Total.]

FORMAT typ vol (F1.0) mis (F2.0) mg TO mabmaintwo (F4.1).
LIST.

[Listing of typ, vol, mis, mg, mab, ma, amain, mb, bmain, mabsubmain, intone, intone2, mabmain, inttwo, inttwo2, and mabmaintwo for all 24 cases; the sums of intone2 and inttwo2 both equal SS_Interaction = 36.0.]

Graphs and Interaction

There are several ways to graph the results of a factorial ANOVA, and graphs are often the best way to show interaction effects. The basic factorial graph for a two-factor study uses the horizontal axis for one of the factors (usually the one with the most levels, here the vol factor with four levels) and separate lines plus markers for the other factor (here typ, with two levels). The graph presented earlier was created by the following GLM command, specifically the /PLOT = PROFILE(effects) option. The effects can be either main effects (e.g., PROFILE(vol)) or interactions, as shown here.

GLM mis BY typ vol
 /PRINT = DESCR
 /PLOT = PROFILE(vol BY typ).

This same graph could have been produced from the menus by Graph > Line > Multiple > Define, which brings up the dialogue box shown to the right. Users then specify which factor goes on the Category Axis (vol has been inserted here) and which factor is used to Define Lines (typ here). Users also have the option of what to plot; here we want to plot the means for mis, our dependent variable. Clicking OK will create the basic graph, which would generally be edited further in the chart editor.

Understanding of the factorial ANOVA, especially the interaction, can benefit from plotting some of the other quantities that we have produced, namely the

expected cell means with just main effects and no interaction, and the adjusted cell means with main effects removed (i.e., pure interaction). These graphs are shown below. The earlier graph of the means is reproduced for comparison.

The original (left) and main-effects-only (right) graphs illustrate the logic of the interaction deviations. In essence, each deviation represents how far its cell mean is from where it would be if there were only main effects and no interaction, as represented in the right-hand graph. M_21 = 2.0 (i.e., Speech, Vol 1), for example, is 2 units lower than it should be (4.0) if there were no interaction. The sum of all these squared deviations represents the total evidence for an interaction between Volume and Type of distractor in this study. The third graph shows the adjusted cell means with main effects removed; this graph represents pure interaction.

Examples of Factorial ANOVA Outcomes

The following analyses illustrate some of the possible outcomes for a factorial 2 x 3 design.

GLM y1 BY a b /PRINT = DESCR.
[Descriptive statistics and Tests of Between-Subjects Effects for y1: Corrected Model SS = .000; R Squared = .000 (Adjusted R Squared = -.208)]

GLM y2 BY a b /PRINT = DESCR.
[Descriptive statistics and Tests of Between-Subjects Effects for y2: R Squared = .250 (Adjusted R Squared = .094)]

GLM y3 BY a b /PRINT = DESCR.
[Descriptive statistics and Tests of Between-Subjects Effects for y3: R Squared = .333 (Adjusted R Squared = .194)]

GLM y4 BY a b /PRINT = DESCR.
[Descriptive statistics and Tests of Between-Subjects Effects for y4: R Squared = .455 (Adjusted R Squared = .341)]

GLM y5 BY a b /PRINT = DESCR.
[Descriptive statistics and Tests of Between-Subjects Effects for y5: R Squared = .250 (Adjusted R Squared = .094)]

GLM y6 BY a b /PRINT = DESCR.
[Descriptive statistics and Tests of Between-Subjects Effects for y6: R Squared = .400 (Adjusted R Squared = .275)]

GLM y7 BY a b /PRINT = DESCR.
[Descriptive statistics and Tests of Between-Subjects Effects for y7: R Squared = .455 (Adjusted R Squared = .341)]

GLM y8 BY a b /PRINT = DESCR.
[Descriptive statistics and Tests of Between-Subjects Effects for y8: R Squared = .538 (Adjusted R Squared = .442)]


Analysis of Covariance. The following example illustrates a case where the covariate is affected by the treatments.

Analysis of Covariance. The following example illustrates a case where the covariate is affected by the treatments. Analysis of Covariance In some experiments, the experimental units (subjects) are nonhomogeneous or there is variation in the experimental conditions that are not due to the treatments. For example, a

More information

10/31/2012. One-Way ANOVA F-test

10/31/2012. One-Way ANOVA F-test PSY 511: Advanced Statistics for Psychological and Behavioral Research 1 1. Situation/hypotheses 2. Test statistic 3.Distribution 4. Assumptions One-Way ANOVA F-test One factor J>2 independent samples

More information

Chapter 7: Correlation

Chapter 7: Correlation Chapter 7: Correlation Oliver Twisted Please, Sir, can I have some more confidence intervals? To use this syntax open the data file CIr.sav. The data editor looks like this: The values in the table are

More information

LAB 5 INSTRUCTIONS LINEAR REGRESSION AND CORRELATION

LAB 5 INSTRUCTIONS LINEAR REGRESSION AND CORRELATION LAB 5 INSTRUCTIONS LINEAR REGRESSION AND CORRELATION In this lab you will learn how to use Excel to display the relationship between two quantitative variables, measure the strength and direction of the

More information

ANCOVA. Psy 420 Andrew Ainsworth

ANCOVA. Psy 420 Andrew Ainsworth ANCOVA Psy 420 Andrew Ainsworth What is ANCOVA? Analysis of covariance an extension of ANOVA in which main effects and interactions are assessed on DV scores after the DV has been adjusted for by the DV

More information

Interactions and Centering in Regression: MRC09 Salaries for graduate faculty in psychology

Interactions and Centering in Regression: MRC09 Salaries for graduate faculty in psychology Psychology 308c Dale Berger Interactions and Centering in Regression: MRC09 Salaries for graduate faculty in psychology This example illustrates modeling an interaction with centering and transformations.

More information

Introducing Generalized Linear Models: Logistic Regression

Introducing Generalized Linear Models: Logistic Regression Ron Heck, Summer 2012 Seminars 1 Multilevel Regression Models and Their Applications Seminar Introducing Generalized Linear Models: Logistic Regression The generalized linear model (GLM) represents and

More information

9. Linear Regression and Correlation

9. Linear Regression and Correlation 9. Linear Regression and Correlation Data: y a quantitative response variable x a quantitative explanatory variable (Chap. 8: Recall that both variables were categorical) For example, y = annual income,

More information

Difference in two or more average scores in different groups

Difference in two or more average scores in different groups ANOVAs Analysis of Variance (ANOVA) Difference in two or more average scores in different groups Each participant tested once Same outcome tested in each group Simplest is one-way ANOVA (one variable as

More information

Do not copy, post, or distribute. Independent-Samples t Test and Mann- C h a p t e r 13

Do not copy, post, or distribute. Independent-Samples t Test and Mann- C h a p t e r 13 C h a p t e r 13 Independent-Samples t Test and Mann- Whitney U Test 13.1 Introduction and Objectives This chapter continues the theme of hypothesis testing as an inferential statistical procedure. In

More information

Sleep data, two drugs Ch13.xls

Sleep data, two drugs Ch13.xls Model Based Statistics in Biology. Part IV. The General Linear Mixed Model.. Chapter 13.3 Fixed*Random Effects (Paired t-test) ReCap. Part I (Chapters 1,2,3,4), Part II (Ch 5, 6, 7) ReCap Part III (Ch

More information

Ron Heck, Fall Week 8: Introducing Generalized Linear Models: Logistic Regression 1 (Replaces prior revision dated October 20, 2011)

Ron Heck, Fall Week 8: Introducing Generalized Linear Models: Logistic Regression 1 (Replaces prior revision dated October 20, 2011) Ron Heck, Fall 2011 1 EDEP 768E: Seminar in Multilevel Modeling rev. January 3, 2012 (see footnote) Week 8: Introducing Generalized Linear Models: Logistic Regression 1 (Replaces prior revision dated October

More information

Three Factor Completely Randomized Design with One Continuous Factor: Using SPSS GLM UNIVARIATE R. C. Gardner Department of Psychology

Three Factor Completely Randomized Design with One Continuous Factor: Using SPSS GLM UNIVARIATE R. C. Gardner Department of Psychology Data_Analysis.calm Three Factor Completely Randomized Design with One Continuous Factor: Using SPSS GLM UNIVARIATE R. C. Gardner Department of Psychology This article considers a three factor completely

More information

CHAPTER 10. Regression and Correlation

CHAPTER 10. Regression and Correlation CHAPTER 10 Regression and Correlation In this Chapter we assess the strength of the linear relationship between two continuous variables. If a significant linear relationship is found, the next step would

More information

Inferences for Regression

Inferences for Regression Inferences for Regression An Example: Body Fat and Waist Size Looking at the relationship between % body fat and waist size (in inches). Here is a scatterplot of our data set: Remembering Regression In

More information

One- factor ANOVA. F Ra5o. If H 0 is true. F Distribu5on. If H 1 is true 5/25/12. One- way ANOVA: A supersized independent- samples t- test

One- factor ANOVA. F Ra5o. If H 0 is true. F Distribu5on. If H 1 is true 5/25/12. One- way ANOVA: A supersized independent- samples t- test F Ra5o F = variability between groups variability within groups One- factor ANOVA If H 0 is true random error F = random error " µ F =1 If H 1 is true random error +(treatment effect)2 F = " µ F >1 random

More information

Inferences About the Difference Between Two Means

Inferences About the Difference Between Two Means 7 Inferences About the Difference Between Two Means Chapter Outline 7.1 New Concepts 7.1.1 Independent Versus Dependent Samples 7.1. Hypotheses 7. Inferences About Two Independent Means 7..1 Independent

More information

Review of the General Linear Model

Review of the General Linear Model Review of the General Linear Model EPSY 905: Multivariate Analysis Online Lecture #2 Learning Objectives Types of distributions: Ø Conditional distributions The General Linear Model Ø Regression Ø Analysis

More information

One-way ANOVA (Single-Factor CRD)

One-way ANOVA (Single-Factor CRD) One-way ANOVA (Single-Factor CRD) STAT:5201 Week 3: Lecture 3 1 / 23 One-way ANOVA We have already described a completed randomized design (CRD) where treatments are randomly assigned to EUs. There is

More information

Business Statistics. Lecture 9: Simple Regression

Business Statistics. Lecture 9: Simple Regression Business Statistics Lecture 9: Simple Regression 1 On to Model Building! Up to now, class was about descriptive and inferential statistics Numerical and graphical summaries of data Confidence intervals

More information

Simple, Marginal, and Interaction Effects in General Linear Models: Part 1

Simple, Marginal, and Interaction Effects in General Linear Models: Part 1 Simple, Marginal, and Interaction Effects in General Linear Models: Part 1 PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 2: August 24, 2012 PSYC 943: Lecture 2 Today s Class Centering and

More information

Extensions of One-Way ANOVA.

Extensions of One-Way ANOVA. Extensions of One-Way ANOVA http://www.pelagicos.net/classes_biometry_fa18.htm What do I want You to Know What are two main limitations of ANOVA? What two approaches can follow a significant ANOVA? How

More information

Course Introduction and Overview Descriptive Statistics Conceptualizations of Variance Review of the General Linear Model

Course Introduction and Overview Descriptive Statistics Conceptualizations of Variance Review of the General Linear Model Course Introduction and Overview Descriptive Statistics Conceptualizations of Variance Review of the General Linear Model PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 1: August 22, 2012

More information

1 Correlation and Inference from Regression

1 Correlation and Inference from Regression 1 Correlation and Inference from Regression Reading: Kennedy (1998) A Guide to Econometrics, Chapters 4 and 6 Maddala, G.S. (1992) Introduction to Econometrics p. 170-177 Moore and McCabe, chapter 12 is

More information

Univariate analysis. Simple and Multiple Regression. Univariate analysis. Simple Regression How best to summarise the data?

Univariate analysis. Simple and Multiple Regression. Univariate analysis. Simple Regression How best to summarise the data? Univariate analysis Example - linear regression equation: y = ax + c Least squares criteria ( yobs ycalc ) = yobs ( ax + c) = minimum Simple and + = xa xc xy xa + nc = y Solve for a and c Univariate analysis

More information

Course Introduction and Overview Descriptive Statistics Conceptualizations of Variance Review of the General Linear Model

Course Introduction and Overview Descriptive Statistics Conceptualizations of Variance Review of the General Linear Model Course Introduction and Overview Descriptive Statistics Conceptualizations of Variance Review of the General Linear Model EPSY 905: Multivariate Analysis Lecture 1 20 January 2016 EPSY 905: Lecture 1 -

More information

LAB 3 INSTRUCTIONS SIMPLE LINEAR REGRESSION

LAB 3 INSTRUCTIONS SIMPLE LINEAR REGRESSION LAB 3 INSTRUCTIONS SIMPLE LINEAR REGRESSION In this lab you will first learn how to display the relationship between two quantitative variables with a scatterplot and also how to measure the strength of

More information

Experiment: Oscillations of a Mass on a Spring

Experiment: Oscillations of a Mass on a Spring Physics NYC F17 Objective: Theory: Experiment: Oscillations of a Mass on a Spring A: to verify Hooke s law for a spring and measure its elasticity constant. B: to check the relationship between the period

More information

Upon completion of this chapter, you should be able to:

Upon completion of this chapter, you should be able to: 1 Chaptter 7:: CORRELATIION Upon completion of this chapter, you should be able to: Explain the concept of relationship between variables Discuss the use of the statistical tests to determine correlation

More information

Simple, Marginal, and Interaction Effects in General Linear Models

Simple, Marginal, and Interaction Effects in General Linear Models Simple, Marginal, and Interaction Effects in General Linear Models PRE 905: Multivariate Analysis Lecture 3 Today s Class Centering and Coding Predictors Interpreting Parameters in the Model for the Means

More information

Extensions of One-Way ANOVA.

Extensions of One-Way ANOVA. Extensions of One-Way ANOVA http://www.pelagicos.net/classes_biometry_fa17.htm What do I want You to Know What are two main limitations of ANOVA? What two approaches can follow a significant ANOVA? How

More information

Regression, Part I. - In correlation, it would be irrelevant if we changed the axes on our graph.

Regression, Part I. - In correlation, it would be irrelevant if we changed the axes on our graph. Regression, Part I I. Difference from correlation. II. Basic idea: A) Correlation describes the relationship between two variables, where neither is independent or a predictor. - In correlation, it would

More information

An Introduction to Multilevel Models. PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 25: December 7, 2012

An Introduction to Multilevel Models. PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 25: December 7, 2012 An Introduction to Multilevel Models PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 25: December 7, 2012 Today s Class Concepts in Longitudinal Modeling Between-Person vs. +Within-Person

More information

MEASUREMENT OF THE CHARGE TO MASS RATIO (e/m e ) OF AN ELECTRON

MEASUREMENT OF THE CHARGE TO MASS RATIO (e/m e ) OF AN ELECTRON MEASUREMENT OF THE CHARGE TO MASS RATIO (e/m e ) OF AN ELECTRON Object This experiment will allow you to observe and understand the motion of a charged particle in a magnetic field and to measure the ratio

More information

An area chart emphasizes the trend of each value over time. An area chart also shows the relationship of parts to a whole.

An area chart emphasizes the trend of each value over time. An area chart also shows the relationship of parts to a whole. Excel 2003 Creating a Chart Introduction Page 1 By the end of this lesson, learners should be able to: Identify the parts of a chart Identify different types of charts Create an Embedded Chart Create a

More information

Unit 27 One-Way Analysis of Variance

Unit 27 One-Way Analysis of Variance Unit 27 One-Way Analysis of Variance Objectives: To perform the hypothesis test in a one-way analysis of variance for comparing more than two population means Recall that a two sample t test is applied

More information

Investigating Models with Two or Three Categories

Investigating Models with Two or Three Categories Ronald H. Heck and Lynn N. Tabata 1 Investigating Models with Two or Three Categories For the past few weeks we have been working with discriminant analysis. Let s now see what the same sort of model might

More information

Notes on Maxwell & Delaney

Notes on Maxwell & Delaney Notes on Maxwell & Delaney PSY710 12 higher-order within-subject designs Chapter 11 discussed the analysis of data collected in experiments that had a single, within-subject factor. Here we extend those

More information

Ch. 16: Correlation and Regression

Ch. 16: Correlation and Regression Ch. 1: Correlation and Regression With the shift to correlational analyses, we change the very nature of the question we are asking of our data. Heretofore, we were asking if a difference was likely to

More information

Passing-Bablok Regression for Method Comparison

Passing-Bablok Regression for Method Comparison Chapter 313 Passing-Bablok Regression for Method Comparison Introduction Passing-Bablok regression for method comparison is a robust, nonparametric method for fitting a straight line to two-dimensional

More information

Introduction to Analysis of Variance. Chapter 11

Introduction to Analysis of Variance. Chapter 11 Introduction to Analysis of Variance Chapter 11 Review t-tests Single-sample t-test Independent samples t-test Related or paired-samples t-test s m M t ) ( 1 1 ) ( m m s M M t M D D D s M t n s s M 1 )

More information

Psych 230. Psychological Measurement and Statistics

Psych 230. Psychological Measurement and Statistics Psych 230 Psychological Measurement and Statistics Pedro Wolf December 9, 2009 This Time. Non-Parametric statistics Chi-Square test One-way Two-way Statistical Testing 1. Decide which test to use 2. State

More information

INTRODUCTION TO ANALYSIS OF VARIANCE

INTRODUCTION TO ANALYSIS OF VARIANCE CHAPTER 22 INTRODUCTION TO ANALYSIS OF VARIANCE Chapter 18 on inferences about population means illustrated two hypothesis testing situations: for one population mean and for the difference between two

More information

DISCRETE RANDOM VARIABLES EXCEL LAB #3

DISCRETE RANDOM VARIABLES EXCEL LAB #3 DISCRETE RANDOM VARIABLES EXCEL LAB #3 ECON/BUSN 180: Quantitative Methods for Economics and Business Department of Economics and Business Lake Forest College Lake Forest, IL 60045 Copyright, 2011 Overview

More information

Your schedule of coming weeks. One-way ANOVA, II. Review from last time. Review from last time /22/2004. Create ANOVA table

Your schedule of coming weeks. One-way ANOVA, II. Review from last time. Review from last time /22/2004. Create ANOVA table Your schedule of coming weeks One-way ANOVA, II 9.07 //00 Today: One-way ANOVA, part II Next week: Two-way ANOVA, parts I and II. One-way ANOVA HW due Thursday Week of May Teacher out of town all week

More information

MORE ON SIMPLE REGRESSION: OVERVIEW

MORE ON SIMPLE REGRESSION: OVERVIEW FI=NOT0106 NOTICE. Unless otherwise indicated, all materials on this page and linked pages at the blue.temple.edu address and at the astro.temple.edu address are the sole property of Ralph B. Taylor and

More information

Regression and the 2-Sample t

Regression and the 2-Sample t Regression and the 2-Sample t James H. Steiger Department of Psychology and Human Development Vanderbilt University James H. Steiger (Vanderbilt University) Regression and the 2-Sample t 1 / 44 Regression

More information

ANALYTICAL COMPARISONS AMONG TREATMENT MEANS (CHAPTER 4)

ANALYTICAL COMPARISONS AMONG TREATMENT MEANS (CHAPTER 4) ANALYTICAL COMPARISONS AMONG TREATMENT MEANS (CHAPTER 4) ERSH 8310 Fall 2007 September 11, 2007 Today s Class The need for analytic comparisons. Planned comparisons. Comparisons among treatment means.

More information

NCSS Statistical Software. Harmonic Regression. This section provides the technical details of the model that is fit by this procedure.

NCSS Statistical Software. Harmonic Regression. This section provides the technical details of the model that is fit by this procedure. Chapter 460 Introduction This program calculates the harmonic regression of a time series. That is, it fits designated harmonics (sinusoidal terms of different wavelengths) using our nonlinear regression

More information

MORE ON MULTIPLE REGRESSION

MORE ON MULTIPLE REGRESSION DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS Posc/Uapp 816 MORE ON MULTIPLE REGRESSION I. AGENDA: A. Multiple regression 1. Categorical variables with more than two categories 2. Interaction

More information

Contrasts (in general)

Contrasts (in general) 10/1/015 6-09/749 Experimental Design for Behavioral and Social Sciences Contrasts (in general) Context: An ANOVA rejects the overall null hypothesis that all k means of some factor are not equal, i.e.,

More information

MIXED MODELS FOR REPEATED (LONGITUDINAL) DATA PART 2 DAVID C. HOWELL 4/1/2010

MIXED MODELS FOR REPEATED (LONGITUDINAL) DATA PART 2 DAVID C. HOWELL 4/1/2010 MIXED MODELS FOR REPEATED (LONGITUDINAL) DATA PART 2 DAVID C. HOWELL 4/1/2010 Part 1 of this document can be found at http://www.uvm.edu/~dhowell/methods/supplements/mixed Models for Repeated Measures1.pdf

More information

16.400/453J Human Factors Engineering. Design of Experiments II

16.400/453J Human Factors Engineering. Design of Experiments II J Human Factors Engineering Design of Experiments II Review Experiment Design and Descriptive Statistics Research question, independent and dependent variables, histograms, box plots, etc. Inferential

More information

Deciphering Math Notation. Billy Skorupski Associate Professor, School of Education

Deciphering Math Notation. Billy Skorupski Associate Professor, School of Education Deciphering Math Notation Billy Skorupski Associate Professor, School of Education Agenda General overview of data, variables Greek and Roman characters in math and statistics Parameters vs. Statistics

More information

Lecture 11: Two Way Analysis of Variance

Lecture 11: Two Way Analysis of Variance Lecture 11: Two Way Analysis of Variance Review: Hypothesis Testing o ANOVA/F ratio: comparing variances o F = s variance between treatment effect + chance s variance within sampling error (chance effects)

More information

Review. One-way ANOVA, I. What s coming up. Multiple comparisons

Review. One-way ANOVA, I. What s coming up. Multiple comparisons Review One-way ANOVA, I 9.07 /15/00 Earlier in this class, we talked about twosample z- and t-tests for the difference between two conditions of an independent variable Does a trial drug work better than

More information

Sociology Exam 2 Answer Key March 30, 2012

Sociology Exam 2 Answer Key March 30, 2012 Sociology 63993 Exam 2 Answer Key March 30, 2012 I. True-False. (20 points) Indicate whether the following statements are true or false. If false, briefly explain why. 1. A researcher has constructed scales

More information

Example. χ 2 = Continued on the next page. All cells

Example. χ 2 = Continued on the next page. All cells Section 11.1 Chi Square Statistic k Categories 1 st 2 nd 3 rd k th Total Observed Frequencies O 1 O 2 O 3 O k n Expected Frequencies E 1 E 2 E 3 E k n O 1 + O 2 + O 3 + + O k = n E 1 + E 2 + E 3 + + E

More information

Statistics for Managers Using Microsoft Excel

Statistics for Managers Using Microsoft Excel Statistics for Managers Using Microsoft Excel 7 th Edition Chapter 1 Chi-Square Tests and Nonparametric Tests Statistics for Managers Using Microsoft Excel 7e Copyright 014 Pearson Education, Inc. Chap

More information

Daniel Boduszek University of Huddersfield

Daniel Boduszek University of Huddersfield Daniel Boduszek University of Huddersfield d.boduszek@hud.ac.uk Introduction to moderator effects Hierarchical Regression analysis with continuous moderator Hierarchical Regression analysis with categorical

More information

where Female = 0 for males, = 1 for females Age is measured in years (22, 23, ) GPA is measured in units on a four-point scale (0, 1.22, 3.45, etc.

where Female = 0 for males, = 1 for females Age is measured in years (22, 23, ) GPA is measured in units on a four-point scale (0, 1.22, 3.45, etc. Notes on regression analysis 1. Basics in regression analysis key concepts (actual implementation is more complicated) A. Collect data B. Plot data on graph, draw a line through the middle of the scatter

More information

ANOVA continued. Chapter 10

ANOVA continued. Chapter 10 ANOVA continued Chapter 10 Zettergren (003) School adjustment in adolescence for previously rejected, average, and popular children. Effect of peer reputation on academic performance and school adjustment

More information

22s:152 Applied Linear Regression. Chapter 8: 1-Way Analysis of Variance (ANOVA) 2-Way Analysis of Variance (ANOVA)

22s:152 Applied Linear Regression. Chapter 8: 1-Way Analysis of Variance (ANOVA) 2-Way Analysis of Variance (ANOVA) 22s:152 Applied Linear Regression Chapter 8: 1-Way Analysis of Variance (ANOVA) 2-Way Analysis of Variance (ANOVA) We now consider an analysis with only categorical predictors (i.e. all predictors are

More information

Regression: Main Ideas Setting: Quantitative outcome with a quantitative explanatory variable. Example, cont.

Regression: Main Ideas Setting: Quantitative outcome with a quantitative explanatory variable. Example, cont. TCELL 9/4/205 36-309/749 Experimental Design for Behavioral and Social Sciences Simple Regression Example Male black wheatear birds carry stones to the nest as a form of sexual display. Soler et al. wanted

More information

Retrieve and Open the Data

Retrieve and Open the Data Retrieve and Open the Data 1. To download the data, click on the link on the class website for the SPSS syntax file for lab 1. 2. Open the file that you downloaded. 3. In the SPSS Syntax Editor, click

More information

Mathematical Notation Math Introduction to Applied Statistics

Mathematical Notation Math Introduction to Applied Statistics Mathematical Notation Math 113 - Introduction to Applied Statistics Name : Use Word or WordPerfect to recreate the following documents. Each article is worth 10 points and should be emailed to the instructor

More information

Statistics and Quantitative Analysis U4320

Statistics and Quantitative Analysis U4320 Statistics and Quantitative Analysis U3 Lecture 13: Explaining Variation Prof. Sharyn O Halloran Explaining Variation: Adjusted R (cont) Definition of Adjusted R So we'd like a measure like R, but one

More information

Chapter 10: Chi-Square and F Distributions

Chapter 10: Chi-Square and F Distributions Chapter 10: Chi-Square and F Distributions Chapter Notes 1 Chi-Square: Tests of Independence 2 4 & of Homogeneity 2 Chi-Square: Goodness of Fit 5 6 3 Testing & Estimating a Single Variance 7 10 or Standard

More information

Analysis of Variance (ANOVA)

Analysis of Variance (ANOVA) Analysis of Variance (ANOVA) Two types of ANOVA tests: Independent measures and Repeated measures Comparing 2 means: X 1 = 20 t - test X 2 = 30 How can we Compare 3 means?: X 1 = 20 X 2 = 30 X 3 = 35 ANOVA

More information

Chapter 19: Logistic regression

Chapter 19: Logistic regression Chapter 19: Logistic regression Self-test answers SELF-TEST Rerun this analysis using a stepwise method (Forward: LR) entry method of analysis. The main analysis To open the main Logistic Regression dialog

More information

EDF 7405 Advanced Quantitative Methods in Educational Research MULTR.SAS

EDF 7405 Advanced Quantitative Methods in Educational Research MULTR.SAS EDF 7405 Advanced Quantitative Methods in Educational Research MULTR.SAS The data used in this example describe teacher and student behavior in 8 classrooms. The variables are: Y percentage of interventions

More information

1 Introduction to Minitab

1 Introduction to Minitab 1 Introduction to Minitab Minitab is a statistical analysis software package. The software is freely available to all students and is downloadable through the Technology Tab at my.calpoly.edu. When you

More information