Chs. 15 & 16: Correlation & Regression


With the shift to correlational analyses, we change the very nature of the question we are asking of our data. Heretofore, we were asking if a difference was likely to exist between our groups as measured on one variable (the dependent variable) after manipulating another variable (the independent variable). In other words, we were testing statistical hypotheses like:

H0: µ1 = µ2
H1: µ1 ≠ µ2

Now, we are going to test a different set of hypotheses. We are going to assess the extent to which a relationship is likely to exist between two different variables. In this case, we are testing the following statistical hypotheses:

H0: ρ = 0
H1: ρ ≠ 0

That is, we are looking to see the extent to which no linear relationship exists between the two variables in the population (ρ = 0). When our data support the alternative hypothesis, we are going to assert that a linear relationship does exist between the two variables in the population.

Consider the following data sets. You can actually compute the statistics if you are so inclined. However, simply by eyeballing the data, can you tell me whether a difference exists between the two groups? Whether a relationship exists? The scattergrams that follow make it easier to determine if a relationship is likely to exist.

      Set A        Set B        Set C        Set D
      X    Y       X    Y       X    Y       X    Y
      1    1       1    7       1    101     1    107
      2    2       2    3       2    102     2    103
      3    3       3    8       3    103     3    108
      4    4       4    2       4    104     4    102
      5    5       5    5       5    105     5    105
      6    6       6    9       6    106     6    109
      7    7       7    1       7    107     7    101
      8    8       8    6       8    108     8    106
      9    9       9    4       9    109     9    104
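If you would rather verify the eyeball test by computation, here is a minimal Python/numpy sketch (an added illustration; the handout itself uses SPSS and StatView) that computes each set's mean and the Pearson r between X and Y:

```python
import numpy as np

x = np.arange(1, 10)  # X = 1..9 in every set
sets = {
    "A": [1, 2, 3, 4, 5, 6, 7, 8, 9],
    "B": [7, 3, 8, 2, 5, 9, 1, 6, 4],
    "C": [101, 102, 103, 104, 105, 106, 107, 108, 109],
    "D": [107, 103, 108, 102, 105, 109, 101, 106, 104],
}

for name, y in sets.items():
    y = np.array(y)
    r = np.corrcoef(x, y)[0, 1]   # Pearson correlation between X and Y
    print(f"Set {name}: mean of Y = {y.mean():.1f}, r = {r:+.3f}")

# Sets A and C yield r = +1.000 (a perfect linear relationship) despite very
# different means (5 vs. 105); Sets B and D yield r of about -0.167 (a weak
# relationship) while sharing those same means.
```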

[Scattergrams of Sets A-D with fitted lines: Set A, y = 0 + 1x, R = 1; Set B, y = 5.8333 − 0.16667x, R = 0.16667; Set C, y = 100 + 1x, R = 1; Set D, y = 105.83 − 0.16667x, R = 0.16667.]

Sets A and C illustrate a very strong positive linear relationship. Sets B and D illustrate a very weak linear relationship. Sets A and B illustrate no difference between the two variables (identical means). Sets C and D illustrate large differences between the two variables.

With correlational designs, we're not typically going to manipulate a variable. Instead, we'll often just take two measures and determine if they produce a linear relationship. When there is no manipulation, we cannot make causal claims about our results.

Correlation vs. Causation

Because nothing is being manipulated, you must be careful in interpreting any relationship that you find. That is, you should understand that determining that there is a relationship between two variables doesn't tell you anything about how that relationship emerged: correlation does not imply causation. If you didn't know anything about a person's IQ, your best guess about that person's GPA would be to guess the typical

(mean) GPA. Finding that a correlation exists between IQ and GPA simply means that knowing a person's IQ would let you make a better prediction of that person's GPA than simply guessing the mean. You don't know for sure that it's the person's IQ that determined that person's GPA; you simply know that the two covary in a predictable fashion.

If you find a relationship between two variables, A and B, it may arise because A directly affects B, it may arise because B directly affects A, or it may arise because an unobserved variable, C, affects both A and B. In this specific example of IQ and GPA, it's probably unlikely that GPA could affect IQ, but it's not impossible. It's more likely that either IQ affects GPA or that some other variable (e.g., test-taking skill, self-confidence, patience in taking exams) affects both IQ and GPA.

The classic example of the impact of a third variable on the relationship between two variables is the strong negative linear relationship between the number of mules in a state and the number of college faculty in a state. As the number of mules goes up, the number of faculty goes down (and vice versa). It should be obvious to you that the relationship is not a causal one. The mules are not eating faculty, or otherwise endangering faculty existence. Faculty are not so poorly paid that they are driven to eating mules. The actual relationship likely arises because rural states tend to have fewer institutions of higher education and more farms, while more urban states tend to have more institutions of higher education and fewer farms. Thus, the nature of the state is the third variable that produces the relationship between number of mules and number of faculty. As another example of a significant correlation with a third-variable explanation, G&W point out the relationship between number of churches and number of serious crimes.

If you can't make causal claims, what is correlation good for?

You should note that there are some questions that one cannot approach experimentally, typically for ethical reasons. For instance, does smoking cause lung cancer? It would be a fairly simple experiment to design (though maybe not to manage), and it would take a fairly long time to conduct, but the reason that people don't do such research with humans is an ethical one. As G&W note, correlation is useful for prediction (when combined with the regression equation), for assessing reliability and validity, and for theory verification.

What is correlation?

Correlation is a statistical technique that is used to measure and describe a relationship between two variables. Correlations can be positive (the two variables tend to move in the same direction, increasing or decreasing together) or negative (the two variables tend to move in opposite directions, with one increasing as the other decreases). Thus, Data Sets A and C above are both positive linear relationships.

How do we measure correlation?

The most often used measure of linear relationships is the Pearson product-moment correlation coefficient (r). This statistic is used to estimate the extent of the linear relationship in the population (ρ). The statistic can take on values between −1.0 and +1.0, with r = −1.0 indicating a perfect negative linear relationship and r = +1.0 indicating a perfect positive linear relationship. Can you predict the correlation coefficients that would be produced by the data shown in the scattergrams below?

[Four scattergrams of B against A with fitted lines: Y = 2.733 + .885 * X, R² = .596; Y = 2.867 + .642 * X, R² = .223; Y = 3.2 + .327 * X, R² = .491; Y = 5 + 0 * X (a flat line).]

[Four more scattergrams of B against A with fitted lines: Y = 5.267 − .048 * X, R² = .001; Y = 9 − .636 * X, R² = .405; Y = 7.933 − .388 * X, R² = .133; Y = 7.133 − .315 * X, R² = .074.]

Keep in mind that the Pearson correlation coefficient is intended to assess linear relationships. What r would you obtain for the data below?

[Scattergram of B against A showing a strongly curvilinear pattern; the fitted line is essentially flat: Y = 3 − 1.8E-17 * X, R² = 0.]

It should strike you that there is a strong relationship between the two variables. However, the relationship is not a linear one. So, don't be misled into thinking that a correlation coefficient of 0 indicates no relationship between two variables. This example is also a good reminder that you should always plot your data points.
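To see the same point numerically, here is a small sketch with made-up data (the handout's actual points are not reproduced in this transcription): a perfectly U-shaped relationship still produces a Pearson r of essentially zero.

```python
import numpy as np

x = np.arange(1, 10)          # 1..9
y = (x - 5) ** 2              # a strong, but purely curvilinear, relationship

r = np.corrcoef(x, y)[0, 1]
print(round(r, 4))            # 0.0 (up to floating-point noise): no *linear* trend
```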

Thus, the Pearson correlation measures the degree and direction of linear relationship between two variables. Conceptually, the correlation coefficient is:

r = (degree to which X and Y vary together) / (degree to which X and Y vary separately)

r = (covariability of X and Y) / (variability of X and Y separately)

The stronger the linear relationship between the two variables, the greater the correspondence between changes in the two variables. When there is no linear relationship, there is no covariability between the two variables, so a change in one variable is not associated with a predictable change in the other variable.

How to compute the correlation coefficient

The covariability of X and Y is measured by the sum of the cross products (SP). The definitional formula for SP is a lot like the formula for the sum of squares (SS).

SP = Σ(X − X̄)(Y − Ȳ)

Expanding the formula for SS will make the comparison clearer:

SS = Σ(X − X̄)² = Σ(X − X̄)(X − X̄)

So, instead of using only X, the formula for SP uses both X and Y. The same relationship is evident in the computational formulas:

SP = ΣXY − (ΣX)(ΣY)/n

SS = ΣX² − (ΣX)²/n

You should see how the computational formula for SP is a lot like the computational formula for SS, but with both X and Y represented. Once you've gotten a handle on SP, the rest of the formula for r is straightforward:

r = SP / √(SS_X · SS_Y)

The following example illustrates the computation of the correlation coefficient and how to determine if the linear relationship is significant.
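As a sketch of these formulas in code (plain Python; the function names are mine, not the text's), the definitional and computational versions of SP give the same answer, and r follows directly:

```python
def sp_definitional(xs, ys):
    """Sum of cross products: SP = sum((X - Xbar)(Y - Ybar))."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys))

def sp_computational(xs, ys):
    """SP = sum(XY) - (sum(X) * sum(Y)) / n."""
    n = len(xs)
    return sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys) / n

def ss(xs):
    """Sum of squares: SS = sum(X^2) - (sum(X))^2 / n."""
    n = len(xs)
    return sum(x * x for x in xs) - sum(xs) ** 2 / n

def pearson_r(xs, ys):
    """r = SP / sqrt(SS_X * SS_Y)."""
    return sp_computational(xs, ys) / (ss(xs) * ss(ys)) ** 0.5
```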

An Example of Regression/Correlation Analyses

Problem: You want to be able to predict performance in college, to see if you should admit a student or not. You develop a simple test, with scores ranging from 0 to 10, and you want to see if it is predictive of GPA (your indicator of performance in college).

Statistical Hypotheses:
H0: ρ = 0
H1: ρ ≠ 0

Decision Rule: Set α = .05. With a sample of n = 10 students, your obtained r must exceed .632 to be significant (using Table B.6, df = n − 2 = 8, two-tailed test).

Computation:

Simple Test (X)   GPA (Y)   X²    Y²     XY
9                 3.0       81    9.0    27.0
7                 3.0       49    9.0    21.0
2                 1.2       4     1.4    2.4
5                 2.0       25    4.0    10.0
8                 3.2       64    10.2   25.6
2                 1.5       4     2.3    3.0
6                 2.7       36    7.3    16.2
3                 1.8       9     3.2    5.4
9                 3.4       81    11.6   30.6
5                 2.5       25    6.3    12.5
Sum:  56          24.3      378   64.3   153.7

r (our estimate of ρ) = SP / √(SS_X · SS_Y) = [ΣXY − (ΣX)(ΣY)/n] / √{[ΣX² − (ΣX)²/n] · [ΣY² − (ΣY)²/n]}

r = [153.7 − (56)(24.3)/10] / √[(378 − 56²/10)(64.3 − 24.3²/10)] = 17.62 / √[(64.4)(5.25)] = .96

Decision: Because r_Obt ≥ .632, reject H0.

Interpretation: There is a positive linear relationship between the simple test and GPA. One might also compute the coefficient of determination (r²), which in this case would be .92. The coefficient of determination measures the proportion of variability shared by Y and X, or the extent to which your Y variable is (sort of) explained by the X variable.
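The same computation takes a few lines of Python (an added check on the hand calculation; scipy reports a two-tailed p-value rather than relying on a critical-value table):

```python
from scipy import stats

test = [9, 7, 2, 5, 8, 2, 6, 3, 9, 5]                        # Simple Test (X)
gpa  = [3.0, 3.0, 1.2, 2.0, 3.2, 1.5, 2.7, 1.8, 3.4, 2.5]    # GPA (Y)

r, p = stats.pearsonr(test, gpa)
print(f"r = {r:.2f}, p = {p:.4f}")    # r = 0.96, p well below .05

# Same decision as the table-based rule: |r| exceeds the critical value of .632
print("reject H0" if abs(r) > 0.632 else "fail to reject H0")
```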

It's good practice to compute the coefficient of determination (r²) as well as r. As G&W note, this statistic evaluates the proportion of variability in one variable that is shared with another variable. You should also recognize r² as a measure of effect size. Thus, with large n, even a fairly modest r might be significant. However, the coefficient of determination would be very small, indicating that the relationship, though significant, may not be all that impressive. In other words, a significant linear relationship of r = .3 would produce r² = .09, so that the two variables share less than 10% of their variability. An r² that low means that other variables are making a greater contribution to the variability (.91, which is 1 − r², is referred to as the coefficient of alienation).

Here are the Simple Test and GPA data again, converted to z-scores:

X    Y     Z_X      Z_Y      Z_X·Z_Y
9    3.0   1.340    0.786    1.054
7    3.0   0.552    0.786    0.434
2    1.2   -1.418   -1.697   2.406
5    2.0   -0.236   -0.593   0.140
8    3.2   0.946    1.062    1.005
2    1.5   -1.418   -1.283   1.819
6    2.7   0.158    0.372    0.059
3    1.8   -1.024   -0.869   0.890
9    3.4   1.340    1.338    1.793
5    2.5   -0.236   0.097    -0.023
Sum                          9.577

Note that the average of the products of the z-scores (9.577 / 10 = .96) is the correlation coefficient, r. Hence, the "product-moment" part of the name.

The typical way to test the significance of the correlation coefficient is to use a table like the one in the back of the text. Another way is to rely on the computer's ability to provide you with a significance test. If you look at the SPSS output, you'll notice that the test of significance is actually an F-ratio. SPSS is computing F_Regression, according to the following formula (as seen in the G&W text):

F_Regression = MS_Regression / MS_Error = (r² · SS_Y) / [(1 − r²) · SS_Y / (n − 2)] = 94

We would compare this F-ratio to F_Crit(1, n − 2) = F_Crit(1, 8) = 5.32, so we'd reject H0 (that ρ = 0).
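As a numerical check on both ideas above (r as the mean product of z-scores, and the F-ratio that SPSS reports), here is a short numpy sketch using the same data; it is an added illustration, not output from the handout:

```python
import numpy as np

test = np.array([9, 7, 2, 5, 8, 2, 6, 3, 9, 5], dtype=float)
gpa  = np.array([3.0, 3.0, 1.2, 2.0, 3.2, 1.5, 2.7, 1.8, 3.4, 2.5])
n = len(test)

# z-scores computed with the population SD (dividing by n), as in the table above
zx = (test - test.mean()) / test.std(ddof=0)
zy = (gpa - gpa.mean()) / gpa.std(ddof=0)
r = np.mean(zx * zy)                        # mean product of z-scores = r, about .96

# F_Regression = (r^2 * SS_Y) / [(1 - r^2) * SS_Y / (n - 2)]; the SS_Y terms cancel
f_regression = r**2 / (1 - r**2) * (n - 2)
print(round(r, 2), round(f_regression, 1))  # about 0.96 and 96; the handout's 94
                                            # reflects rounding r to .96 first
```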

Some final caveats

As indicated earlier, you should get in the habit of producing a scattergram when conducting a correlation analysis. The scattergram is particularly useful for detecting a curvilinear relationship between the two variables. That is, your r value might be low and non-significant, but not because there is no relationship between the two variables, only because there is no linear relationship between the two variables.

Another problem that we can often detect by looking at a scattergram is when an outlier (outrider) is present. As seen in the scattergram on the left, there appears to be little or no relationship between Questionnaire and Observation except for the fact that one participant received a very high score on both variables. Excluding that participant from the analysis would likely lead you to conclude that there is little relationship between the two variables. Including that participant would lead you to conclude that there is a relationship between the two variables. What should you do?

You also need to be cautious to avoid the restricted range problem. If you have only observed scores over a narrow portion of the potential values that might be obtained, then your interpretation of the relationship between the two variables might well be erroneous. For instance, in the scattergram on the right, if you had only looked at people with scores on the Questionnaire of 1-5, you might have thought that there was a negative relationship between the two variables. On the other hand, had you only looked at people with scores of 6-10 on the Questionnaire, you would have been led to believe that there was a positive linear relationship between the Questionnaire and Observation. By looking at the entire range of responses on the Questionnaire, it does appear that there is a positive linear relationship between the two variables.

One practice problem

For the following data, compute r and r², determine if r is significant, and, if so, compute the regression equation and the standard error of estimate.

X    X²    Y    Y²    XY    (Y − Ŷ)   (Y − Ŷ)²
1    1     2    4     2     0.7       0.49
3    9     1    1     3     -1.5      2.25
5    25    5    25    25    1.3       1.69
7    49    4    16    28    -0.9      0.81
8    64    6    36    48    0.5       0.25
4    16    3    9     12    -0.1      0.01
Sum  28    164  21    91    118   0   5.50

Another Practice Problem

Dr. Rob D. Cash is interested in the relationship between body weight and self-esteem in women. He gives 10 women the Alpha Sigma Self-Esteem Test and also measures their body weight. Analyze the data as completely as you can. After you've learned about regression, answer these questions: If a woman weighed 120 lbs., what would be your best prediction of her self-esteem score? What if she weighed 200 lbs.?

Participant   Body Weight   Self-Esteem   XY
1             100           39            3900
2             111           47            5217
3             117           54            6318
4             124           23            2852
5             136           35            4760
6             139           30            4170
7             143           48            6864
8             151           20            3020
9             155           28            4340
10            164           46            7544
Sum           1340          370           48985
SS            3814          1214

The Regression Equation

Given the significant linear relationship between the Simple Test and GPA, we would be justified in computing a regression equation to allow us to make predictions. [Note that had our correlation been non-significant, we would not be justified in computing the regression equation. Then the best prediction of Y would be Ȳ, regardless of the value of X.] The regression equation is:

Ŷ = bX + a

To compute the slope (b) and y-intercept (a) we would use the following simple formulas, based on quantities already computed for r (or easily computed from information used in computing r):

b = SP / SS_X
a = Ȳ − bX̄

For this example, you'd obtain:

b = 17.62 / 64.4 = .27
a = 2.43 − (.27)(5.6) = .92

So, the regression equation would be:

Ŷ = .27X + .92

You could then use the regression equation to make predictions. For example, suppose that a person scored a 4 on the simple test; what would be your best prediction of future GPA?

Ŷ = (.27)(4) + .92 = 2.0

Thus, a score of 4 on the simple test would predict a GPA of 2.0. [Note that you cannot predict beyond the range of observed values. Thus, because you've only observed scores on the simple test of 2 to 9, you couldn't really predict a person's GPA if you knew that his or her score on the simple test was 1, 10, etc.]

Below is a scattergram of the data:

[Scattergram of GPA against Simple Test with the fitted line y = 0.89783 + 0.2736x, R = 0.96092.]
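These formulas are easy to script. Here is a plain-Python sketch (an added illustration; the variable names are mine) that recovers the slope, the intercept, and the prediction for a test score of 4:

```python
test = [9, 7, 2, 5, 8, 2, 6, 3, 9, 5]
gpa  = [3.0, 3.0, 1.2, 2.0, 3.2, 1.5, 2.7, 1.8, 3.4, 2.5]
n = len(test)

mean_x, mean_y = sum(test) / n, sum(gpa) / n
sp   = sum(x * y for x, y in zip(test, gpa)) - sum(test) * sum(gpa) / n
ss_x = sum(x * x for x in test) - sum(test) ** 2 / n

b = sp / ss_x            # slope, about .27
a = mean_y - b * mean_x  # intercept, about .90 (the handout's .92 comes from
                         # rounding b to .27 before computing a)

print(f"Y-hat = {b:.2f}X + {a:.2f}")
print(f"Predicted GPA for a test score of 4: {b * 4 + a:.1f}")   # about 2.0
```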

X    Y     Ŷ      Y − Ŷ    (Y − Ŷ)²
9    3.0   3.35   -0.35    .1225
7    3.0   2.81   0.19     .0361
2    1.2   1.46   -0.26    .0676
5    2.0   2.27   -0.27    .0729
8    3.2   3.08   0.12     .0144
2    1.5   1.46   0.04     .0016
6    2.7   2.54   0.16     .0256
3    1.8   1.73   0.07     .0049
9    3.4   3.35   0.05     .0025
5    2.5   2.27   0.23     .0529
Sum               -0.02    .4010

Note that the sum of the Y − Ŷ scores is nearly zero (off only by rounding error), while the sum of the squared deviations, Σ(Y − Ŷ)², is SS_Error = .401. The standard error of estimate is 0.224, which is computed as:

Standard error of estimate = √(SS_Error / df) = √[Σ(Y − Ŷ)² / (n − 2)] = √(.401 / 8) = .224

It's easier to compute SS_Error as:

SS_Error = (1 − r²)(SS_Y) = (1 − .96²)(5.25) = .41, essentially the .401 computed from the residuals.
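A quick sketch of both routes to the standard error of estimate (numpy; the regression equation is the one computed above):

```python
import numpy as np

test = np.array([9, 7, 2, 5, 8, 2, 6, 3, 9, 5], dtype=float)
gpa  = np.array([3.0, 3.0, 1.2, 2.0, 3.2, 1.5, 2.7, 1.8, 3.4, 2.5])
n = len(test)

y_hat = 0.27 * test + 0.92                    # regression equation from above
ss_error = np.sum((gpa - y_hat) ** 2)         # about .40
print(round(np.sqrt(ss_error / (n - 2)), 3))  # standard error of estimate, about .224

# The shortcut (1 - r^2) * SS_Y gives essentially the same SS_Error
print(round((1 - 0.96 ** 2) * 5.25, 2))       # .41
```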

Regression/Correlation Analysis in SPSS (From G&W5)

A college professor claims that the scores on the first exam provide an excellent indication of how students will perform throughout the term. To test this claim, first-exam scores and final scores were recorded for a sample of n = 12 students in an introductory psychology class.

The data would be entered in the usual manner, with First Exam scores going in one column and Final Grade scores going in the second column (seen below left). After entering the data and labeling the variables, you might choose Correlate->Bivariate from the Analyze menu, which would produce the window seen below right. Note that I've dragged the two variables from the left into the window on the right. Clicking on OK produces the analysis seen below.

I hope that you see this output as only moderately informative. That is, you can see the value of r and the two-tailed test of significance (with p = .031), but nothing more. For that reason, I'd suggest that you simply skip over this analysis and move right to another choice from the Analyze menu, Regression->Linear, as seen below left. Choosing linear regression will produce the window seen above on the right. Note that I've moved the variable for the first exam scores to the Independent variable window. Of course, that's somewhat arbitrary, but the problem suggests that first exam scores would predict final grades, so I'd treat those scores as predictor variables. Thus, I moved the Final Grade variable to the Dependent variable window. Clicking on the OK button would produce the output below.

First of all, notice that the correlation coefficient, r, is printed as part of the output (though labeled R), as is r² (labeled R Square) and the standard error of estimate. SPSS doesn't print the sign of r, so based on this table alone, you couldn't tell if r was positive or negative. The Coefficients table below will show the slope as positive or negative, so look there for the sign. The Coefficients table also shows t-tests (and accompanying Sig. values) that assess the null hypotheses that the Intercept (Constant) = 0 and that the slope = 0. Essentially, the test for the slope is the same as the F-ratio seen above for the regression (i.e., same Sig. value).

The ANOVA table is actually a test of the significance of the correlation, so if the Sig. (p) < .05, then you would reject H0: ρ = 0. Compare the Sig. value above to the Sig. value earlier from the correlation analysis (both .031).

Note that you still don't have a scattergram. Here's how to squeeze one out of SPSS. Under Analyze, choose Regression->Curve Estimation. That will produce the window below right. Note that I've moved the First Exam variable to the Independent variable and the Final Grade variable to the Dependent(s). Clicking on the OK button will produce the summary information and scattergram seen below.
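If you don't have SPSS handy, the same style of analysis runs in one call with scipy's linregress, which reports the slope, intercept, r, and the two-tailed p-value for the slope. The exam data from the G&W example are not reproduced in this transcription, so the earlier Simple Test / GPA data stand in here purely as an illustration:

```python
from scipy import stats

test = [9, 7, 2, 5, 8, 2, 6, 3, 9, 5]                        # predictor (X)
gpa  = [3.0, 3.0, 1.2, 2.0, 3.2, 1.5, 2.7, 1.8, 3.4, 2.5]    # criterion (Y)

result = stats.linregress(test, gpa)
print(f"slope = {result.slope:.3f}, intercept = {result.intercept:.3f}")
print(f"r = {result.rvalue:.3f}, r^2 = {result.rvalue ** 2:.3f}, p = {result.pvalue:.5f}")
```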

According to Milton Rokeach, there is a positive correlation between dogmatism and anxiety. Dogmatism is defined as rigidity of attitude that produces a closed belief system (or a closed mind) and a general attitude of intolerance. In the following study, dogmatism was measured on the basis of the Rokeach D Scale (Rokeach, 1960), and anxiety was measured on the 30-item Welch Anxiety Scale, an adaptation taken from the MMPI (Welch, 1952). A random sample of 30 undergraduate students from a large western university was selected and given both the D Scale and the Welch Anxiety Test. The data analyses are as seen below.

Explain what these results tell you about Rokeach's initial hypothesis. Do you find these results compelling in light of the hypothesis? If a person received a score of 220 on the D Scale, what would you predict that that person would receive on the Anxiety Test? Suppose that a person received a score of 360 on the D Scale; what would you predict that that person would receive on the Anxiety Test?

Dr. Susan Mee is interested in the relationship between IQ and Number of Siblings. She is convinced that a "dilution of intelligence" takes place as siblings join a family (a person with no siblings grew up interacting with two adult parents, a person with one sibling grew up interacting with two adults plus a youngster, etc.), leading to a decrease in the IQ levels of children from increasingly larger families. She collects data from fifty 10-year-olds who have 0, 1, 2, 3, or 4 siblings and analyzes her data with SPSS, producing the output seen below.

Interpret the output as completely as you can and tell Dr. Mee what she can reasonably conclude, given her original hypothesis. What proportion of the variability in IQ is shared with Number of Siblings? If a person had 3 siblings, what would be your best prediction of that person's IQ? What about 5 siblings? On the basis of this study, would you encourage Dr. Mee to argue in print that Number of Siblings has a causal impact on IQ? Why or why not?

Dr. Upton Reginald Toaste conducted a study to determine the relationship between motivation and performance. He obtained the data seen below (with the accompanying StatView analyses). What kind of relationship should he claim between motivation and performance, based on the analyses? How would you approach interpreting this set of data? If someone had a motivation score of 4, what would you predict for a level of performance? [10 pts]

In PS 306 (something to look forward to), we collected a number of different academic measures. Below are the results from a correlation analysis of two different SAT scores (Math and Verbal/Critical Reading). First of all, tell me what you could conclude from these results. Then, given an SAT-V score of 600, what SAT-M score would you predict using the regression equation? Given the observed correlation, if a person studied only for the SAT-V and raised her or his SAT-V score, would you expect that person's SAT-M score to increase as well? What would you propose as the most likely source of the observed relationship?

Because StatView is used on many old exams, here's an example of StatView output. Studies have suggested that the stress of major life changes is related to subsequent physical illness. Holmes and Rahe (1967) devised the Social Readjustment Rating Scale (SRRS) to measure the amount of stressful change in one's life. Each event is assigned a point value, which measures its severity. For example, at the top of the list, death of a spouse is assigned 100 life change units (LCU). Divorce is 73 LCUs, retirement is 45, change of career is 36, the beginning or end of school is 26, and so on. The more life change units one has accumulated in the past year, the more likely he or she is to have an illness. The following StatView analyses show the results from a hypothetical set of data. Interpret these results as completely as you can. For these data, if a person had accumulated 100 LCUs, how many doctor visits would you predict? If a person had accumulated 400 LCUs, how many doctor visits would you predict?

Regression Summary: Doctor Visits vs. LCU Total
Count   Num. Missing   R      R Squared   Adjusted R Squared   RMS Residual
15      0              .637   .406        .360                 2.135

[Regression plot of Doctor Visits against LCU Total: Y = 2.995 + .026 * X; R² = .406.]

ANOVA Table: Doctor Visits vs. LCU Total
             DF   Sum of Squares   Mean Square   F-Value   P-Value
Regression   1    40.469           40.469        8.877     .0107
Residual     13   59.264           4.559
Total        14   99.733

Regression Coefficients: Doctor Visits vs. LCU Total
            Coefficient   Std. Error   Std. Coeff.   t-Value   P-Value
Intercept   2.995         .996         2.995         3.006     .0101
LCU Total   .026          .009         .637          2.979     .0107

The Spearman Correlation

Most of the statistics that we've been using are called parametric statistics. That is, they assume that the data are measured at an interval or ratio level of measurement. There is another class of statistics, called nonparametric statistics, that were developed to allow statistical interpretation of data that may not be measured at the interval or ratio level. In fact, Spearman's rho was developed to allow a person to measure the correlation of data that are ordinal.

The computation of the Spearman correlation is identical to the computation of the Pearson correlation. The only difference is that the data that go into the formula must all be ordinal when computing the Spearman correlation. Thus, if the data are not ordinal to begin with, you must convert the data to ranks before computing the Spearman correlation. Generally speaking, you would use the Pearson correlation if the data were appropriate for that statistic. But, if you thought that the relationship would not be linear, you might prefer to compute the Spearman correlation coefficient, rho.

First, let's analyze the data below using the Pearson correlation coefficient, r.

X     Y
1     1
2     2
3     2
4     4
5     16
6     15
7     32
8     128
9     256
10    512

Using the formula for r, we would obtain a value of .78:

r = [8869 − (55)(968)/10] / √[(82.5)(251891.6)] = 3545 / 4558.6 = .78

With df = 8, r_Crit = .632, so you would reject H0 and conclude that there is a significant linear relationship between the two variables.
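You can confirm that value quickly in Python (numpy, as an added check on the hand computation):

```python
import numpy as np

x = np.arange(1, 11)
y = np.array([1, 2, 2, 4, 16, 15, 32, 128, 256, 512])

print(round(np.corrcoef(x, y)[0, 1], 2))   # 0.78: significant, yet a misleading summary
```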

However, if you look at a graph of the data, you should see that the data are not really linear, even though there is a linear trend.

[Regression plot of Y against X: Y = −139.533 + 42.97 * X; R² = .605.]

If we were to convert these data to ranks, you could then compute the Spearman correlation coefficient. Unfortunately, there are tied values, so we need to figure out a way to deal with ties. First, rank all the scores, temporarily giving different ranks to the two Y scores that are tied (the two 2s):

First step in ranking
X     Y
1     1
2     2
3     3
4     4
5     6
6     5
7     7
8     8
9     9
10    10

Next, take the average of those two ranks (2.5) and assign that value to the tied scores:

Second step in ranking
X     Y
1     1
2     2.5
3     2.5
4     4
5     6
6     5
7     7
8     8
9     9
10    10

The computation of the Spearman correlation coefficient is straightforward once you have the ranked scores. Simply use the ranked values in the formula for the Pearson correlation coefficient and you will have computed the Spearman correlation.

r = [383.5 − (55)(55)/10] / √[(82.5)(82)] = 81 / 82.24 = .985
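In practice, scipy.stats.spearmanr does the ranking (including the averaged tied ranks) for you; this sketch runs it on the original, unranked data:

```python
from scipy import stats

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [1, 2, 2, 4, 16, 15, 32, 128, 256, 512]

rho, p = stats.spearmanr(x, y)
print(round(rho, 3))     # 0.985, matching the hand computation on the ranks
```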

To determine if the Spearman correlation is significant, you need to consult a table of critical values for the Spearman correlation (rather than the Pearson table). With n = 10, r_Crit = .648, so there would be a significant correlation in this case.

If you would like to learn a different formula for the Spearman correlation coefficient, you can use the one below, where D is the difference between the ranked values:

r_s = 1 − (6ΣD²) / [n(n² − 1)]

Thus, you could compute the Spearman correlation as seen below:

X     Y     D     D²
1     1     0     0
2     2.5   -.5   .25
3     2.5   .5    .25
4     4     0     0
5     6     -1    1
6     5     1     1
7     7     0     0
8     8     0     0
9     9     0     0
10    10    0     0

r_s = 1 − (6ΣD²) / [n(n² − 1)] = 1 − 15/990 = .985

Obviously, you obtain the same value for the Spearman correlation using this new formula. Although either formula would work equally well, I do think that there is some value in parsimony, so I would simply use the Pearson correlation formula for ranked scores to obtain the Spearman correlation.

On the other hand, you might simply use SPSS to compute the Spearman correlation. When you do, you don't even need to enter the data as ranks; SPSS will compute the Spearman correlation on unranked data (and make the conversion to ranked data for you). Thus, if you enter the original data and choose Analyze -> Correlate -> Bivariate, you will see the window below left. Note that I've checked the Spearman box and moved both variables from the left to the Variables window on the right. The results would appear as in the table on the right, with the value of Spearman's rho = .985 (thank goodness!).
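As a last check on the table above, the difference-of-ranks shortcut is easy to script (plain Python; note that the 6ΣD² formula strictly assumes no tied ranks, though with the single tie here it still agrees to three decimal places):

```python
rank_x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
rank_y = [1, 2.5, 2.5, 4, 6, 5, 7, 8, 9, 10]
n = len(rank_x)

d_squared = sum((rx - ry) ** 2 for rx, ry in zip(rank_x, rank_y))   # 2.5
rho = 1 - 6 * d_squared / (n * (n ** 2 - 1))
print(round(rho, 3))                                                # 0.985
```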