Stem-and-leaf diagram - Vertical numbers on the far left represent the 10s; the numbers to the right of the line represent the 1s.
The mean should not be used if there are extreme scores, or for ranks and categories.
Unbiased estimator - The best guess, just as likely to be too high as too low; if obtained repeatedly, on average it will be correct.

Lecture 3
Density - An alternative vertical scale for histograms which ensures that the total area covered by the histogram is 1 square unit. Convert the vertical scores on the scale by dividing each one by the value of the initial area (e.g. if the initial area = 10 × 10 = 100, dividing each height by 100 gives bars of height 0.1 and a new area of 10 × 0.1 = 1).
The area for a region of a histogram can be used to tell you the proportion of scores in that region.
The normal distribution - The idealised, bell-shaped curve. It is symmetrical; the mean, mode and median all take the same value; scores close to the mean are much more common, and become increasingly uncommon the further away from the mean they are. The proportion of scores above or below a certain value can be calculated once you know the mean and standard deviation of a normally distributed variable. Approximately 68%, 95% and 99.7% of scores for a normally distributed variable lie within 1, 2 and 3 standard deviations of its mean, respectively.
Standard normal distribution - As a result of the special properties of the normal distribution, we can use standard deviations as a unit of measurement to describe how far above or below the mean a particular score is: +1 means 1 standard deviation greater than the mean, and so on. These are called z-scores (a z-score of zero indicates the mean).

Lecture 4
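A minimal sketch of the z-score and normal-proportion ideas above, using made-up scores (NumPy/SciPy assumed; the numbers are illustrative, not from the notes):

```python
# Converting raw scores to z-scores and reading proportions off the
# standard normal distribution. All data here are invented.
import numpy as np
from scipy import stats

scores = np.array([42.0, 55.0, 48.0, 61.0, 50.0, 44.0])

mu = scores.mean()          # mean of the distribution
sigma = scores.std(ddof=0)  # standard deviation of these scores

# z-score: how many standard deviations a score lies above/below the mean
z = (scores - mu) / sigma
print(z.round(2))           # a z of 0 would sit exactly at the mean

# For a normally distributed variable, the CDF gives the proportion of
# scores falling below a given value (here, one SD above the mean).
print(stats.norm.cdf(1.0))  # ~0.84, so ~16% of scores lie beyond +1 SD
```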

Converting individual values into z-scores: $z = \frac{X - \mu}{\sigma}$

The z-test of whether a sample mean differs significantly from a population mean - How many standard errors separate the sample mean from the population mean: $z = \frac{M - \mu}{\sigma_M}$, where $\sigma_M = \sigma / \sqrt{N}$.

One-sample t-test: $t = \frac{M - \mu}{s_M}$, where $s_M = s / \sqrt{N}$. Degrees of freedom = N - 1.

Statistical hypotheses - The logical process of performing an inferential test. The null/alternative hypothesis is a statement about populations that could be true; given the null hypothesis, if the observed statistics are improbable we can reject the null in favour of the alternative hypothesis.
Study/Experimental/Research hypotheses - The practical endeavours of empirical exploration: what we would expect to observe in a study if one particular theory is true. There are often competing hypotheses if there is more than one theory to test.

One-tailed/Two-tailed hypotheses:
One-tailed - Directional - Mean1 is bigger/smaller than Mean2
Two-tailed - Non-directional - Mean1 does not equal Mean2
Two-tailed tests should be used as often as possible; even a directional hypothesis does not require a one-tailed test.

Lecture 13
Independent samples - Two separate groups; no one is in both groups.
Related samples - One group takes part in both conditions, or each participant has a direct counterpart in the other group.
To decide between the two - Consider two samples A and B: write A in a list, then B. If the order you put B in the list depends on the order of A, then the samples are related.
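A minimal sketch of the one-sample t-test just described, with made-up data and an invented population mean of 100, checked against SciPy:

```python
# One-sample t-test: t = (M - mu) / (s / sqrt(N)), df = N - 1.
# The sample and the hypothesised population mean are illustrative only.
import numpy as np
from scipy import stats

sample = np.array([104.0, 98.0, 110.0, 101.0, 97.0, 106.0, 103.0, 99.0])
mu = 100.0                      # hypothesised population mean

M = sample.mean()
s = sample.std(ddof=1)          # sample SD (N - 1 denominator)
t_manual = (M - mu) / (s / np.sqrt(len(sample)))

# SciPy computes the same statistic and a two-tailed p-value
t_scipy, p = stats.ttest_1samp(sample, popmean=mu)
print(round(t_manual, 3), round(t_scipy, 3), round(p, 3))
```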

Factors that affect the power:
i. The probability of a Type 1 error
ii. The true difference between the null hypothesis and the actual state of affairs
iii. The sample size and the population variance or SD
iv. One-tailed vs. two-tailed testing
v. Parametric vs. non-parametric testing
vi. A paired t-test is more powerful than an independent t-test
vii. The correlation of the two groups in a paired t-test

Measures of effect size:
Mean difference - With very large samples we can reliably detect small differences.
The linear relationship between two variables - With very large samples we can reliably detect small relationships.
Cohen's d - Estimated effect size: $d = \frac{M_1 - M_2}{s}$. Converting d into t and r: $t = \frac{d}{\sqrt{1/n_1 + 1/n_2}}$ and $r = \frac{d}{\sqrt{d^2 + 4}}$ (for equal-sized groups).
Phi - The measure of effect size for association in a 2-by-2 contingency table.
0.1 - Small
0.3 - Medium
0.5 - Large
Effect size and significance are related: magnitude of a significance test = effect size × size of study.

Lecture 19
Confidence interval - A range estimate commonly used in psychological research to communicate the degree of uncertainty.
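A minimal sketch of Cohen's d and the d-to-r conversion on made-up groups (the pooled-SD formula is the standard one; nothing here is from the original notes):

```python
# Cohen's d for two independent groups, plus the d-to-r conversion
# for equal-sized groups. Group values are invented for illustration.
import numpy as np

group_a = np.array([5.1, 6.0, 5.5, 6.2, 5.8])
group_b = np.array([4.2, 4.9, 4.5, 5.0, 4.4])

n1, n2 = len(group_a), len(group_b)
# Pooled SD: weights each group's variance by its degrees of freedom
s_pooled = np.sqrt(((n1 - 1) * group_a.var(ddof=1) +
                    (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2))

d = (group_a.mean() - group_b.mean()) / s_pooled
r = d / np.sqrt(d**2 + 4)   # effect size as a correlation
print(round(d, 2), round(r, 2))
```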

Between-group variance - The variance of each group mean from the grand mean:
$SS_{between} = \sum_j n_j (\bar{Y}_j - \bar{Y}_T)^2$

If each group is evenly sized, the equation can be rearranged into the computational form:
$SS_{between} = \frac{\sum_j A_j^2}{n} - \frac{T^2}{an}$
(where $A_j$ = the total of group j, T = the grand total, a = the number of groups, n = the number of scores per group)

Because:
$SS_{total} = SS_{between} + SS_{within}$

we can say that:
Within-group variance: $SS_{within} = SS_{total} - SS_{between} = \sum_j \sum_i (Y_{ij} - \bar{Y}_j)^2$
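To see the partition numerically, a minimal sketch on made-up data for three equally sized groups (group values are illustrative, not from the notes):

```python
# Checking that SS_total = SS_between + SS_within on invented data.
import numpy as np

groups = [np.array([4.0, 5.0, 6.0, 5.0]),
          np.array([7.0, 8.0, 6.0, 7.0]),
          np.array([9.0, 10.0, 8.0, 9.0])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# Between: each group mean's deviation from the grand mean, weighted by n
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within: each score's deviation from its own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_total = ((all_scores - grand_mean) ** 2).sum()

print(round(ss_between, 2), round(ss_within, 2), round(ss_total, 2))
print(np.isclose(ss_total, ss_between + ss_within))  # True
```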

If you have two levels and you know the grand mean, one level is free to vary, but once a value exists for it the remaining level becomes fixed to enable the grand mean to stay at its current value; hence we have a between-groups degrees of freedom of 1.

$df_{between} = a - 1$ (where a = number of levels)

Within-groups degrees of freedom: if we have 10 participants in a sample and we know the mean, 9 scores are free to vary, but once they gain set values the remaining score's value has to be fixed to remain consistent with the value of the mean; hence we have a degrees of freedom of 9. If we have three levels, each with 10 participants, then the degrees of freedom become 9 + 9 + 9, giving a within-groups degrees of freedom of 27. This is because:

$df_{within} = a(n - 1) = N - a$

Total degrees of freedom: $df_{total} = df_{between} + df_{within} = N - 1$

Mean square and the F-ratio:
The mean square is calculated by taking each sum of squares and dividing it by its corresponding degrees of freedom. The mean square is essentially the variance.
Mean square: $MS_A = SS_A / df_A$
Mean square error: $MS_{S/A} = SS_{S/A} / df_{S/A}$
The ratio for F is simply the between-groups mean square divided by the within-groups mean square: $F = MS_A / MS_{S/A}$
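A minimal sketch of the full MS and F calculation, reusing the same made-up three-group data and checking the hand calculation against SciPy (all values illustrative):

```python
# One-way between-subjects ANOVA by hand: SS -> df -> MS -> F.
import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.0, 6.0, 5.0]),
          np.array([7.0, 8.0, 6.0, 7.0]),
          np.array([9.0, 10.0, 8.0, 9.0])]

a = len(groups)                       # number of levels
N = sum(len(g) for g in groups)       # total number of scores
grand_mean = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between, df_within = a - 1, N - a  # here: 2 and 9
ms_between = ss_between / df_between  # mean square = SS / df
ms_within = ss_within / df_within     # the "mean square error"
F = ms_between / ms_within

F_scipy, p = stats.f_oneway(*groups)  # SciPy agrees, and adds a p-value
print(round(F, 3), round(F_scipy, 3), round(p, 4))
```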

Possible causal relations between two correlated variables:
X influences Y
Y influences X
X and Y influence each other
A third variable influences both X and Y

We can use MLR to examine the influence of variables while controlling for the others.

Variable X → Variable Y: does X cause Y, or is there a mediator variable Z that is caused by X and causes Y? If Z correlates with both, this provides support for this idea. We can use multiple linear regression to control for variable Z, to investigate the relationship between X and Y without the possible mediating effect of Z (among those who have similar levels of Z, do higher levels of X still predict higher levels of Y?).

Possible relations:
Z could be the sole cause
Z could be a partial cause
Z could be an unrelated cause

If Z is the sole cause, when we hold Z constant there should be no relationship between X and Y; if Z is a partial cause, when we hold Z constant the relationship between X and Y should be reduced.

Steps:
1. In the initial regression, X was found to be a predictor of Y
2. Enter Z into the regression
3. If the beta weight now drops, but remains significant, there is shared variance (see the sketch below)

R² change gives us an indication of how much of the variance is accounted for before and after controlling for a variable. F change gives an indication of the change in significance when accounting for a specific variable.

Hierarchical regression - An automated way of controlling for variables in a number of stages.

Indirect mediator - X affects Z, Z affects Y; this manifests initially as X affecting Y.
Partial mediator - X affects Y a small amount; X also affects Z, and Z affects Y. Both predictors affect Y, but if only one is investigated the effect may appear direct.

Baron and Kenny (1986):
1. Significant relationship between IV and mediator (a)
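A minimal sketch of steps 1-3, using statsmodels on simulated data in which Z partially mediates the X-Y relationship; the variable names and effect sizes are invented for illustration:

```python
# "Enter Z into the regression": fit Y ~ X, then Y ~ X + Z, and compare
# the x coefficient and R-squared across the two steps.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
z = 0.6 * x + rng.normal(size=n)        # Z is partly caused by X
y = 0.5 * z + 0.2 * x + rng.normal(size=n)
df = pd.DataFrame({"x": x, "z": z, "y": y})

step1 = smf.ols("y ~ x", data=df).fit()      # X alone predicts Y
step2 = smf.ols("y ~ x + z", data=df).fit()  # now controlling for Z

# If the x weight drops but stays significant, Z is a partial cause
print(round(step1.params["x"], 2), round(step2.params["x"], 2))
print("R-squared change:", round(step2.rsquared - step1.rsquared, 3))
```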

Single-Factor Between-Subjects ANOVA (source table: A, S/A, Total):
Source A (between): $SS_A = [A] - [T]$, with $df_A$ = number of levels - 1
Source S/A (within): $SS_{S/A} = [Y] - [A]$, with $df_{S/A}$ = number of levels × (number of PPs per level - 1)
Total: $SS_T = [Y] - [T]$
Mean square error is the within-groups mean square. To work out each mean square, divide the sum of squares by its degrees of freedom; to work out the F-ratio, divide the between-subjects mean square by the mean square error.

Single-Factor Within-Subjects ANOVA (source table: A, S, A×S, Total; see the sketch after these tables):
Source A: $SS_A = [A] - [T]$, with $df_A$ = number of levels - 1
Source S (subjects): $SS_S = [S] - [T]$, with $df_S$ = number of subjects - 1
Source A×S (residual): $SS_{A \times S} = [Y] - [A] - [S] + [T]$, with $df_{A \times S}$ = (number of levels - 1) × (number of subjects - 1)
Total: $SS_T = [Y] - [T]$
Here the A×S residual serves as the error term: $F = MS_A / MS_{A \times S}$.

Two-Factor Fully Between-Subjects ANOVA:
Between, Factor A: $SS_A = [A] - [T]$
Between, Factor B: $SS_B = [B] - [T]$
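For the single-factor within-subjects case, a minimal sketch using statsmodels' AnovaRM on a small made-up long-format data set (the subject, level and score names are illustrative):

```python
# Repeated-measures ANOVA: each subject provides a score at every level,
# and the A-by-subject residual is the error term (F = MS_A / MS_AxS).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long format: one row per subject-by-level observation (invented data)
data = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "level":   ["a1", "a2", "a3"] * 4,
    "score":   [5, 7, 9, 4, 6, 8, 6, 9, 10, 5, 7, 8],
})

# df_A = a - 1 = 2 and df_AxS = (a - 1)(n - 1) = 6 for these data
res = AnovaRM(data, depvar="score", subject="subject", within=["level"]).fit()
print(res)
```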

Newman-Keuls test - Protected, hence the F value must be significant; requires 5 or fewer means; cannot be used if means are identical.
t-test - Can be used as a post hoc test; has to be used with the Bonferroni adjustment to correct for familywise error; very stringent, and can lack statistical power; F does not need to be significant.
Planned comparison - Uses the t-test model; only the crucial comparisons needed should be performed; no need for a significant F. When using few comparisons (number of levels - 1) the test is powerful and requires no correction; when using many comparisons (greater than number of levels - 1) the Bonferroni adjustment should be applied.
Scheffé - The most stringent of all; uses a t-test but corrects the critical value; used for additional comparisons after the planned ones.

Equations:
Bonferroni - $\alpha_{per\ comparison} = .05 / \text{number of comparisons}$
Tukey - $HSD = q \sqrt{MS_{S/A} / n}$ (q is the studentized range statistic; n is the number of scores per group)
Scheffé - $F_{Scheffé} = (a - 1) \times F_{crit}(df_A, df_{S/A})$
t-test - $t = \frac{\bar{Y}_j - \bar{Y}_k}{\sqrt{MS_{S/A}\left(\frac{1}{n_j} + \frac{1}{n_k}\right)}}$
Newman-Keuls - N.B. cannot be completed by hand

Basic notation:
$[A] = \frac{\sum A_j^2}{n}$ (A = the total of each treatment level; n = the number of scores per level)
$[T] = \frac{T^2}{an}$ (T = the grand total of all scores)
$[Y] = \sum Y^2$ (the sum of each squared score)
$[S] = \frac{\sum S_i^2}{a}$ (S = the total for each subject; a = the number of levels)
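A minimal sketch of the Bonferroni-adjusted post hoc t-test route on made-up three-group data (for the Tukey route, statsmodels' pairwise_tukeyhsd does the equivalent job):

```python
# All pairwise t-tests with the Bonferroni-adjusted alpha.
# Group names and values are invented for illustration.
from itertools import combinations
import numpy as np
from scipy import stats

groups = {"a1": np.array([4.0, 5.0, 6.0, 5.0]),
          "a2": np.array([7.0, 8.0, 6.0, 7.0]),
          "a3": np.array([9.0, 10.0, 8.0, 9.0])}

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)   # Bonferroni: divide alpha by number of comparisons

for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    verdict = "significant" if p < alpha else "n.s."
    print(f"{g1} vs {g2}: t = {t:.2f}, p = {p:.4f} ({verdict} at {alpha:.4f})")
```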

> Complete Fully Between-Subjects ANOVA
> Effect Details > (IV1) * (IV2) > LSMeans Plot

Stacking Data
> Tables > Stack
> Highlight (WSF1) and (WSF2)
> Type file name of choice
> Overtype Data with data name of choice
> Overtype Label with factor name of choice
> OK

Mixed Design ANOVA
> Stack Data
> Analyse > Fit Model
> Click (DV) in Y
> Add (IV1) into Construct Model Effects
> Add (IV2) into Construct Model Effects
> Highlight (IV1) and (IV2), click Cross to add to Construct Model Effects
> Highlight (PP), select Random Effect from Attributes
> Change Emphasis to Minimal Report
> Run Model

Fully Between-Subjects ANOVA with Three Factors
> Fit Model
> Click (DV) into Y
> Add (IV1) into Construct Model Effects
> Add (IV2) into Construct Model Effects
> Add (IV3) into Construct Model Effects
> Highlight both (IV1) and (IV2)
> Highlight both (IV1) and (IV3)

> Run Model

Fully Within-Subjects ANOVA with Three Factors
> Fit Model
> Click (DV) into Y
> Add (IV1) into Construct Model Effects
> Add (IV2) into Construct Model Effects
> Add (IV3) into Construct Model Effects
> Add (PP) into Construct Model Effects
> Highlight both (IV1) and (IV2)
> Highlight both (IV1) and (IV3)
> Highlight both (IV2) and (IV3)
> Highlight (IV1), (IV2) & (IV3)
> Highlight (PP) > Random Effects from Attributes
> Highlight both (PP&R) and (IV1)
> Highlight both (PP&R) and (IV2)
> Highlight both (PP&R) and (IV3)
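These JMP menu steps have a rough scripted equivalent. A minimal sketch of a fully between-subjects two-factor ANOVA in statsmodels, where the data frame and factor names (iv1, iv2, dv) are invented stand-ins for the (IV)/(DV) placeholders above:

```python
# Two-factor between-subjects ANOVA on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "iv1": np.repeat(["low", "high"], 20),
    "iv2": np.tile(np.repeat(["x", "y"], 10), 2),
})
df["dv"] = rng.normal(size=40) + (df["iv1"] == "high") * 1.5

# C(iv1) * C(iv2) expands to both main effects plus the interaction,
# mirroring JMP's Cross step in Construct Model Effects
model = smf.ols("dv ~ C(iv1) * C(iv2)", data=df).fit()
print(anova_lm(model, typ=2))
```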