Linear Regression: Did You Mean Association Or Correlation?


Did You Mean Association Or Correlation? AP Statistics Chapter 8

Be careful not to use the word correlation when you really mean association. People often use "correlation" incorrectly when talking about relationships in order to sound scientific. However, an association describes any general relationship between two variables, whereas a correlation specifically describes the strength of the linear relationship between two quantitative variables, if one exists.

Always Check Your Conditions
The conditions for correlation:
The variables must be numerical. People who misuse correlation to mean association often fail to notice whether the variables they discuss are quantitative.
The association is linear. Correlations only describe linear associations.
No outliers. Outliers can drastically change your results, so always be aware of any points that may sway your data.

It would be great to be able to look at two-variable data and reduce it to a single equation that might help us make predictions. Given the tuition at Arizona State University during the 1990s, can you predict the tuition in 2002? Let's take this step by step to see how to perform a linear regression. Make two new lists labeled Year and Tuit (for tuition), then input the following data into your calculator:

Year  Tuition     Year  Tuition
1990  6546        1996  8377
1991  6996        1997  8710
1992  6996        1998  9110
1993  7350        1999  9411
1994  7500        2000  9800
1995  7978

Next, check your conditions. Are the variables quantitative? Does the data look somewhat linear? Are there outliers?
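The least-squares fit the calculator produces can be reproduced from the standard formulas in any language. A minimal Python sketch on the tuition data above (no calculator required):

```python
# Least-squares regression of tuition on year for Arizona State, 1990-2000.
years = [1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000]
tuition = [6546, 6996, 6996, 7350, 7500, 7978, 8377, 8710, 9110, 9411, 9800]

n = len(years)
x_bar = sum(years) / n
y_bar = sum(tuition) / n

# Slope: b1 = sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(years, tuition))
sxx = sum((x - x_bar) ** 2 for x in years)
b1 = sxy / sxx            # slope
b0 = y_bar - b1 * x_bar   # intercept: b0 = y_bar - b1 * x_bar

print(round(b1, 2))  # 326.08
print(round(b0))     # -642463
```

The printed values match the calculator's LSRL coefficients used in the rest of this example.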

Now, let's calculate the linear regression line (TI-84 / TI-89). The calculator automatically stores the least-squares regression line (also called the LSRL) in the Y1 function.

The LSRL finds the best-fit line by minimizing the sum of the squared differences between the observed data and the predicted values. It helps us make predictions by giving the line that best fits the data. It is called the Least Squares Regression Line because it is the ONE line with the smallest sum of squared deviations.

The LSRL equation we obtained for the Arizona tuition problem was: ŷ = -642,463 + 326.08x
What is the y-intercept of the line? a = -642,463
What is the slope of the line? b = 326.08

What does the y-intercept represent? It represents the predicted tuition at year 0. Does the y-intercept make sense in the context of this problem? No. Year 0 was roughly 2,000 years ago, so it makes no sense to speak of Arizona State's tuition in that time frame. Plus, a negative tuition means they would pay you to attend!

What does the slope represent? It represents the amount tuition rises for every increase of 1 year. In this example, the model predicts that tuition at Arizona State will rise $326.08 every year.

Note: when asked about the y-intercept (a) and the slope (b), you should memorize these phrases:
y-intercept (a): at an (explanatory variable) value of 0 (units), our model predicts a (response variable) of (value, in y units). Always ask if this makes sense!
Slope (b): for every (1 unit) increase in the (explanatory variable), our model predicts an average (increase/decrease) of (value, in y units) in the (response variable).
Let's apply these phrases to our Arizona State example.
y-intercept (a): at year 0, our model predicts a tuition of -$642,463. This makes no sense at all!
Slope (b): for every 1-year increase, our model predicts an average increase of $326.08 in tuition.

The LSRL equation we obtained for the Arizona tuition problem was: ŷ = -642,463 + 326.08x
Using this formula, what was the approximate tuition in 1989? $6,113.87
Using this formula, what was the approximate tuition in 2001? $10,026.90
Using this formula, what was the approximate tuition in 2011? $13,287.70
The actual tuition in 2011 was $9,720. Why the difference? Extrapolation.

Extrapolation
Extrapolation is using a model to make predictions outside the domain of the data. It is very unreliable, since the pattern in the data may not continue beyond the given values. Always be wary of extrapolation when you are predicting a y-value outside the range of your data.

The Linear Model
The linear model is just an equation of a straight line through the data. The points in the scatterplot don't all line up, but a straight line can summarize the general pattern with only a couple of parameters. The linear model can help us understand how the values are associated. The model won't be perfect, regardless of the line we draw: some points will be above the line and some will be below. The estimate made from a model is the predicted value (denoted ŷ, called "y-hat").

Best Fit Means Least Squares
Some residuals are positive, others are negative, and, on average, they cancel each other out. So we can't assess how well the line fits by adding up all the residuals. Similar to what we did with deviations, we square the residuals and add the squares. The smaller the sum, the better the fit. The line of best fit is the line for which the sum of the squared residuals is smallest: the least squares line.

The Regression Line in Real Units
Remember from algebra that a straight line can be written as y = mx + b. In statistics we use slightly different notation: ŷ = b0 + b1x (or ŷ = a + bx). We write ŷ to emphasize that the points satisfying this equation are our predicted values, not the actual data values. This model says that our predictions follow a straight line. If the model is a good one, the data values will scatter closely around it.

The Regression Line in Real Units (cont.)
We write b1 for the slope and b0 for the y-intercept of the line. b1 is the slope, which tells us how rapidly ŷ changes with respect to x; b0 is the y-intercept, which tells us where the line crosses (intercepts) the y-axis.
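The "least squares" idea can be checked numerically: the fitted line really does have the smallest possible sum of squared residuals, so nudging its slope or intercept can only make the fit worse. A short sketch on made-up toy data (all values hypothetical):

```python
# "Best fit means least squares": perturbing the LSRL's coefficients
# can only increase the sum of squared residuals.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]   # hypothetical toy data, roughly y = 2x

n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
     sum((x - x_bar) ** 2 for x in xs)
b0 = y_bar - b1 * x_bar

def sse(a, b):
    """Sum of squared residuals for the line y-hat = a + b*x."""
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

# Every nearby line has a sum of squares at least as large as the LSRL's.
best = sse(b0, b1)
assert all(sse(b0 + da, b1 + db) >= best
           for da in (-0.5, 0, 0.5) for db in (-0.1, 0, 0.1))
```

Because the sum of squared residuals is a convex function of the slope and intercept, the least-squares line is the unique minimizer; any other line you try will score worse.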

Fat Versus Protein: An Example
The following is a scatterplot of total fat versus protein for 30 items on the Burger King menu.

The Linear Model
The correlation in this example is 0.83, which says that there is a relatively strong, positive linear association between these two variables. When we create the least-squares regression line, we can say more about the linear relationship between two quantitative variables with a model. A model simplifies reality to help us understand underlying patterns and relationships.

Residuals
The difference between the observed value and its associated predicted value is called the residual. To find a residual, we always subtract the predicted value from the observed one:

residual = observed − predicted = y − ŷ

A negative residual means the predicted value is too big (an overestimate). A positive residual means the predicted value is too small (an underestimate). In the figure, the predicted fat of the BK Broiler chicken sandwich is 36 g, while the true value is 25 g, so the residual is 25 − 36 = −11 g of fat.

The Regression Line in Real Units (cont.)
In our model, we have a slope (b1). The slope is built from the correlation and the standard deviations:

b1 = r (sy / sx)

Our slope is always in units of y per unit of x.

In our model, we also have an intercept (b0). The intercept is built from the means and the slope:

b0 = ȳ − b1 x̄

Our intercept is always in units of y.
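The two formulas above, b1 = r(sy/sx) and b0 = ȳ − b1x̄, give exactly the same line as the direct least-squares computation. A sketch on made-up toy data (the values themselves are hypothetical; only the identity matters):

```python
import math

# Build the regression slope from the correlation and the standard
# deviations, then confirm it matches the direct least-squares formula.
xs = [2.0, 4.0, 5.0, 7.0, 9.0]   # hypothetical explanatory values
ys = [1.5, 3.1, 4.4, 5.8, 8.2]   # hypothetical response values

n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n

sx = math.sqrt(sum((x - x_bar) ** 2 for x in xs) / (n - 1))
sy = math.sqrt(sum((y - y_bar) ** 2 for y in ys) / (n - 1))
r = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

b1 = r * sy / sx          # slope from r and the SDs
b0 = y_bar - b1 * x_bar   # intercept from the means and the slope

# Direct least-squares slope: Sxy / Sxx
b1_direct = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
            sum((x - x_bar) ** 2 for x in xs)
assert abs(b1 - b1_direct) < 1e-9
```

The equivalence is algebraic, not a coincidence: r = Sxy / ((n−1) sx sy), so r(sy/sx) = Sxy / Sxx, the usual least-squares slope.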

Fat Versus Protein: An Example
The regression line for the Burger King data fits the data well. The equation is

ŷ = 6.8 + 0.97(protein)

The predicted fat content for a BK Broiler chicken sandwich (with 30 g of protein) is 6.8 + 0.97(30) = 35.9 grams of fat.

Residuals Revisited
The linear model assumes that the relationship between the two variables is a perfect straight line. The residuals are the part of the data that hasn't been modeled:

Data = Model + Residual, or (equivalently) Residual = Data − Model. In symbols, e = y − ŷ.

Residuals help us see whether the model makes sense. A residual is the vertical distance from a point to the line, and a residual plot gives us a closer look at the pattern of the residuals. When a regression model is appropriate, nothing interesting should be left behind: after we fit a regression model, we usually plot the residuals in the hope of finding a random scatter. The residuals for the BK menu regression look appropriately boring.

The Residual Standard Deviation
The standard deviation of the residuals, se, measures how much the points spread around the regression line. Check that the residual plot has about the same amount of scatter throughout; this is the Equal Variance Assumption, checked with the "Does the Plot Thicken?" condition. We estimate the SD of the residuals using:

se = √( Σe² / (n − 2) )

We don't need to subtract the mean, because the mean of the residuals is ē = 0. Make a histogram or normal probability plot of the residuals; it should look unimodal and roughly symmetric. Then we can apply the 68-95-99.7 Rule to see how well the regression model describes the data.
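Both facts above, that the residuals of a least-squares line sum to zero and that se divides by n − 2, can be seen in a few lines of Python. A sketch on made-up toy data (values hypothetical):

```python
import math

# Residuals e = y - y_hat for a fitted LSRL, and their standard deviation
# s_e = sqrt(sum(e^2) / (n - 2)).  Toy data, for illustration only.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.2, 2.9, 4.1, 4.8, 6.2, 6.8]

n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
     sum((x - x_bar) ** 2 for x in xs)
b0 = y_bar - b1 * x_bar

residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]

# With an intercept in the model, least-squares residuals always sum to
# (essentially) zero -- which is why s_e needs no mean subtraction.
assert abs(sum(residuals)) < 1e-9

s_e = math.sqrt(sum(e ** 2 for e in residuals) / (n - 2))
print(round(s_e, 3))
```

The n − 2 in the denominator reflects the two parameters (slope and intercept) estimated from the data.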

R² — The Variation Accounted For
The variation in the residuals is the key to assessing how well the model fits. In the BK menu items example, total fat has a standard deviation of 16.4 grams, while the standard deviation of the residuals is only 9.2 grams.

Since the correlation r = 0.83 is not perfect, we cannot perfectly predict fat from protein. However, we can determine how much of the variation is accounted for by the model and how much is left in the residuals. The squared correlation, r², called the coefficient of determination, gives the fraction of the data's variance accounted for by the model. If r² = 1.0, the model would predict the fat values perfectly from protein, without error. If r² = 0, fat could not be predicted from protein at all.

r² is interpreted as the proportion of the variance in the response (dependent) variable that is predictable from the explanatory (independent) variable. Thus 1 − r² is the fraction of the original variance left in the residuals.

Note: when asked about r², you should memorize this phrase:
r²: (x) percent of the variation in the (response variable) can be explained by the approximate linear relationship with the (explanatory variable).

Let's apply this phrase to our Burger King example. For the BK model, r = 0.83 and r² = 0.83² ≈ 0.69, so 69% of the variation in total fat can be explained by the linear relationship with protein. Always try to understand what we are talking about and don't get lost in the wording: 69% of the variability in total fat is accounted for by the regression line, and 31% has been left in the residuals.

All regression analyses include this statistic, although by tradition it is written R² or r² (pronounced "r-squared"). r² is always between 0% and 100%. What makes a good r² value depends on the kind of data you are analyzing and on what you want to do with it. An r² between 0 and 1 indicates the extent to which the response variable is predictable: an r² of 0.10 means that 10 percent of the variance in y is predictable from x, an r² of 0.20 means that 20 percent is predictable, and so on.

Reporting R²
Along with the slope, intercept, and correlation for a regression, you should always report r² so that readers can judge for themselves how successful the regression is at fitting the data. Statistics is about variation, and r² measures the success of the regression model in terms of the fraction of the variation of y accounted for by the model.
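The two BK standard deviations quoted above are enough to recover r² by hand, since r² is roughly 1 minus the residual variance over the total variance. A quick sketch (this ignores the small degrees-of-freedom difference between the two SDs, so it is an approximation):

```python
# r^2 as "variation accounted for": approximately 1 - (s_e / s_y)^2,
# using the BK-menu values quoted in the text.
s_y = 16.4   # SD of total fat (grams)
s_e = 9.2    # SD of the residuals (grams)

r_squared = 1 - (s_e / s_y) ** 2
print(round(r_squared, 2))  # 0.69
```

This agrees with squaring the correlation directly: 0.83² ≈ 0.69, i.e. 69% of the variation in total fat is accounted for by the model and 31% is left in the residuals.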

SAT Scores vs. College GPA
SAT: 1290 1335 1425 1470 1530 1665 1770 1800 1800 2085
GPA:  2.8  2.7  3.2  3.1  3.0  3.2  3.6  3.8  3.6  4.0

Is it reasonable to perform linear regression on this data? How do you know? Yes, it is reasonable, since we satisfy our three conditions: the data is quantitative, the scatter is linear, and there are no outliers.

Looking at the residuals, is it still appropriate to perform linear regression? Yes, since the residuals show a random scatter about the line.

Describe the association. There is a strong, positive, linear association between SAT score and GPA. The correlation of 0.948 indicates that the association is very strong.

Describe the variation. The variation can be analyzed by looking at r². Since r = 0.948, r² = 0.899. So 89.9% of the variation in college GPA can be explained by the linear relationship with SAT score.

If a student has an SAT score of 2000, what would you predict his college GPA to be? Approximately 3.93.
If a student has an SAT score of 2350, what would you predict his college GPA to be? Approximately 4.50. Does this seem reasonable? No: in college, you can only get a 4.0. What went wrong? Extrapolation: 2350 lies beyond the largest SAT score in the data.

Assumptions and Conditions
Before performing any linear regression, check the conditions:
Quantitative Variables Condition.
Straight Enough Condition: if the scatterplot is not straight enough, stop here. You can't use a linear model for any two variables, even if they are related; they must have a linear association or the model won't mean a thing. Some nonlinear relationships can be saved by re-expressing the data to make the scatterplot more linear.
Outlier Condition.
It's a good idea to check linearity again after computing the regression, when we can examine the residuals.
Equal Variance Condition: check to see if the plot thickens! Look at the residual plot: for the standard deviation of the residuals to summarize the scatter, the residuals should share the same spread, so check for changing spread in the residual scatterplot.
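The SAT-vs-GPA example above can be reproduced numerically from the ten data pairs, including the impossible extrapolated prediction at an SAT of 2350:

```python
# Least-squares fit of college GPA on SAT score for the ten students,
# then the two predictions discussed in the example.
sat = [1290, 1335, 1425, 1470, 1530, 1665, 1770, 1800, 1800, 2085]
gpa = [2.8, 2.7, 3.2, 3.1, 3.0, 3.2, 3.6, 3.8, 3.6, 4.0]

n = len(sat)
x_bar, y_bar = sum(sat) / n, sum(gpa) / n
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(sat, gpa)) / \
     sum((x - x_bar) ** 2 for x in sat)
b0 = y_bar - b1 * x_bar

predict = lambda x: b0 + b1 * x
print(round(predict(2000), 2))  # 3.93 -- inside the observed SAT range
print(round(predict(2350), 2))  # 4.5  -- extrapolation: GPA tops out at 4.0
```

The 2350 prediction exceeds the maximum possible GPA precisely because 2350 lies beyond the largest observed SAT score (2085); the linear pattern has no reason to continue out there.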

Reality Check: Is the Regression Reasonable?
Statistics don't come out of nowhere; they are based on data. The results of a statistical analysis should reinforce your common sense, not fly in its face. If the results are surprising, then either you've learned something new about the world or your analysis is wrong. When you perform a regression, think about the coefficients and ask yourself whether they make sense.

What Can Go Wrong?
Beware extraordinary points (y-values that stand off from the linear pattern, or extreme x-values).
Don't extrapolate beyond the data: the linear model may no longer hold outside the range of the data.
Don't infer that x causes y just because there is a good linear model for their relationship: association is not causation.
Don't choose a model based on R² alone.

What Have We Learned?
The residuals reveal how well the model works. If a plot of the residuals against the predicted values shows a pattern, we should re-examine the data to see why. The standard deviation of the residuals quantifies the amount of scatter around the line.