Practical Meta-Analysis -- Lipsey & Wilson


Overview of Meta-Analytic Data Analysis
- Transformations, Adjustments and Outliers
- The Inverse Variance Weight
- The Mean Effect Size and Associated Statistics
- Homogeneity Analysis
- Fixed Effects Analysis of Heterogeneous Distributions
  - Fixed Effects Analog to the one-way ANOVA
  - Fixed Effects Regression Analysis
- Random Effects Analysis of Heterogeneous Distributions
  - Mean Random Effects ES and Associated Statistics
  - Random Effects Analog to the one-way ANOVA
  - Random Effects Regression Analysis

Transformations
Some effect size types are not analyzed in their raw form.
- Standardized mean difference effect size: upward bias when sample sizes are small, removed with the small sample size bias correction:
  ES'_sm = ES_sm * [1 - 3 / (4N - 9)]

Transformations (continued)
- The correlation has a problematic standard error formula. Recall that the standard error is needed for the inverse variance weight.
- Solution: Fisher's Zr transformation. Analyses are performed on the Fisher's Zr transformed correlations:
  ES_Zr = 0.5 * ln[(1 + r) / (1 - r)]
- Final results can be converted back into r with the inverse Zr transformation (see Chapter 3):
  r = (e^(2*ES_Zr) - 1) / (e^(2*ES_Zr) + 1)
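The two transformations above can be sketched in a few lines of Python (the function names are mine, not from the overheads):

```python
import math

def small_sample_correction(es_sm, n_total):
    """Remove the small-sample upward bias of the standardized
    mean difference: ES' = ES * (1 - 3/(4N - 9))."""
    return es_sm * (1 - 3 / (4 * n_total - 9))

def fisher_zr(r):
    """Fisher's Zr transformation: Zr = 0.5 * ln[(1 + r)/(1 - r)]."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_zr(es_zr):
    """Inverse Zr transformation: r = (e^(2Zr) - 1)/(e^(2Zr) + 1)."""
    return (math.exp(2 * es_zr) - 1) / (math.exp(2 * es_zr) + 1)
```

For example, a correlation of r = 0.50 maps to Zr of about 0.55, and the inverse transformation recovers r exactly.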

Transformations (continued)
- The odds-ratio is asymmetric and has a complex standard error formula.
  - Negative relationships are indicated by values between 0 and 1.
  - Positive relationships are indicated by values between 1 and infinity.
- Solution: the natural log of the odds-ratio.
  - Negative relationship < 0; no relationship = 0; positive relationship > 0.
- Analyses are performed on the natural log of the odds-ratio:
  ES_LOR = ln[OR]
- Final results are converted back into odds-ratios by the inverse natural log function:
  OR = e^(ES_LOR)

Adjustments
- Hunter and Schmidt artifact adjustments:
  - measurement unreliability (need reliability coefficient)
  - range restriction (need unrestricted standard deviation)
  - artificial dichotomization (correlation effect sizes only; assumes a normal underlying distribution)
- Outliers:
  - extreme effect sizes may have a disproportionate influence on the analysis
  - either remove them from the analysis or adjust them to a less extreme value
  - indicate what you have done in any written report

Overview of Transformations, Adjustments, and Outliers
- Standard transformations:
  - small sample size bias correction for the standardized mean difference effect size
  - Fisher's r-to-Zr transformation for correlation coefficients
  - natural log transformation for odds-ratios
- Hunter and Schmidt adjustments: perform if interested in what would have occurred under ideal research conditions.
- Outliers: make sure any extreme effect sizes have been appropriately handled.
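The odds-ratio transformation and its back-transformation can likewise be sketched (helper names are mine):

```python
import math

def es_lor(a, b, c, d):
    """Logged odds ratio from the cell frequencies of a 2x2 table:
    ES_LOR = ln[(a*d)/(b*c)]."""
    return math.log((a * d) / (b * c))

def to_odds_ratio(es):
    """Back-transform: OR = e^(ES_LOR)."""
    return math.exp(es)
```

For a table with cells a = 10, b = 10, c = 5, d = 20, the odds ratio is 4.0 and ES_LOR = ln 4, about 1.39.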

Independent Set of Effect Sizes
You must be dealing with an independent set of effect sizes before proceeding with the analysis:
- one ES per study, OR
- one ES per subsample within a study.

The Inverse Variance Weight
- Studies generally vary in size. An ES based on 100 subjects is assumed to be a more precise estimate of the population ES than an ES based on 10 subjects. Therefore, larger studies should carry more weight in our analyses than smaller studies.
- Simple approach: weight each ES by its sample size.
- Better approach: weight by the inverse variance.

What is the Inverse Variance Weight?
- The standard error (SE) is a direct index of ES precision. The SE is used to create confidence intervals. The smaller the SE, the more precise the ES.
- Hedges showed that the optimal weights for meta-analysis are:
  w = 1 / SE^2

Inverse Variance Weight for the Three Major League Effect Sizes
- Standardized mean difference:
  se = sqrt[ (n1 + n2)/(n1*n2) + ES_sm^2 / (2*(n1 + n2)) ],   w = 1/se^2
- Zr transformed correlation coefficient:
  se = 1/sqrt(n - 3),   w = n - 3
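As a minimal sketch of these weight formulas (function names are my own):

```python
import math

def se_smd(n1, n2, es_sm):
    """SE of the standardized mean difference:
    sqrt((n1 + n2)/(n1*n2) + ES^2 / (2*(n1 + n2)))."""
    return math.sqrt((n1 + n2) / (n1 * n2) + es_sm ** 2 / (2 * (n1 + n2)))

def weight_smd(n1, n2, es_sm):
    """Inverse variance weight: w = 1/se^2."""
    return 1 / se_smd(n1, n2, es_sm) ** 2

def weight_zr(n):
    """For a Zr-transformed correlation se = 1/sqrt(n - 3),
    so the weight simplifies to w = n - 3."""
    return n - 3
```

With n1 = n2 = 50 and ES = 0, the SMD standard error is 0.2, giving a weight of 25; a correlation based on n = 103 gets weight 100.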

Inverse Variance Weight for the Three Major League Effect Sizes (continued)
- Logged odds-ratio:
  se = sqrt(1/a + 1/b + 1/c + 1/d),   w = 1/se^2
  where a, b, c, and d are the cell frequencies of a 2-by-2 contingency table.

Ready to Analyze
- We have an independent set of effect sizes (ES) that have been transformed and/or adjusted, if needed.
- For each effect size we have an inverse variance weight (w).

The Weighted Mean Effect Size
Start with the effect size (ES) and inverse variance weight (w) for 10 studies. Next, multiply w by ES, and repeat for all effect sizes:

Study     ES        w      w*ES
1       -0.33    11.91     -3.93
2        0.32    28.57      9.14
3        0.39    58.82     22.94
4        0.31    29.41      9.12
5        0.17    13.89      2.36
6        0.64     8.55      5.47
7       -0.33     9.80     -3.24
8        0.15    10.75      1.61
9       -0.02    83.33     -1.67
10       0.00    14.93      0.00
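The weighted mean from this table can be reproduced directly (values as in the running example):

```python
# Effect sizes and inverse variance weights for the 10 studies
# in the running example.
es = [-0.33, 0.32, 0.39, 0.31, 0.17, 0.64, -0.33, 0.15, -0.02, 0.00]
w = [11.91, 28.57, 58.82, 29.41, 13.89, 8.55, 9.80, 10.75, 83.33, 14.93]

sum_w = sum(w)                                   # 269.96
sum_w_es = sum(wi * e for wi, e in zip(w, es))   # about 41.8
mean_es = sum_w_es / sum_w                       # about 0.15
```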

The Weighted Mean Effect Size (continued)
Sum the w and w*ES columns, then divide the sum of (w*ES) by the sum of (w):
  Sum(w) = 269.96,   Sum(w*ES) = 41.82
  Mean ES = Sum(w*ES) / Sum(w) = 41.82 / 269.96 = 0.15

The Standard Error of the Mean ES
The standard error of the mean is the square root of 1 divided by the sum of the weights:
  se = sqrt(1 / Sum(w)) = sqrt(1 / 269.96) = 0.061

Mean, Standard Error, Z-test and Confidence Intervals
- Mean ES: 0.15
- SE of the mean ES: 0.061
- Z-test for the mean ES: Z = 0.15 / 0.061 = 2.46
- 95% confidence interval:
  Lower = 0.15 - 1.96*(0.061) = 0.03
  Upper = 0.15 + 1.96*(0.061) = 0.27
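These summary statistics follow mechanically from the two sums (here I plug in the rounded values printed on the slides):

```python
import math

sum_w = 269.96    # sum of the inverse variance weights
mean_es = 0.15    # weighted mean effect size (rounded, as on the slides)

se_mean = math.sqrt(1 / sum_w)      # about 0.061
z = mean_es / se_mean               # about 2.46, so p < .05
lower = mean_es - 1.96 * se_mean    # about 0.03
upper = mean_es + 1.96 * se_mean    # about 0.27
```

Because the interval excludes zero, the mean effect size is statistically significant at the .05 level.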

Homogeneity Analysis
- Homogeneity analysis tests whether the assumption that all of the effect sizes are estimating the same population mean is a reasonable assumption.
- If homogeneity is rejected, the distribution of effect sizes is assumed to be heterogeneous:
  - a single mean ES is not a good descriptor of the distribution
  - there are real between-study differences, that is, studies estimate different population mean effect sizes
  - two options: model between-study differences, or fit a random effects model.

Q, the Homogeneity Statistic
Calculate a new variable that is the ES squared multiplied by the weight, and sum it:

Study     ES        w      w*ES    w*ES^2
1       -0.33    11.91     -3.93     1.30
2        0.32    28.57      9.14     2.93
3        0.39    58.82     22.94     8.95
4        0.31    29.41      9.12     2.83
5        0.17    13.89      2.36     0.40
6        0.64     8.55      5.47     3.50
7       -0.33     9.80     -3.24     1.07
8        0.15    10.75      1.61     0.24
9       -0.02    83.33     -1.67     0.03
10       0.00    14.93      0.00     0.00
Sum              269.96    41.82    21.24

Calculating Q
We now have three sums:
  Sum(w) = 269.96,   Sum(w*ES) = 41.82,   Sum(w*ES^2) = 21.24
Q can be calculated using these three sums:
  Q = Sum(w*ES^2) - [Sum(w*ES)]^2 / Sum(w) = 21.24 - (41.82)^2 / 269.96 = 21.24 - 6.48 = 14.76

Interpreting Q
- Q is distributed as a chi-square with df = number of ESs - 1.
- The running example has 10 ESs, therefore df = 9.
- The critical value for a chi-square with df = 9 and p = .05 is 16.92.
- Since our calculated Q (14.76) is less than 16.92, we fail to reject the null hypothesis of homogeneity. Thus, the variability across effect sizes does not exceed what would be expected based on sampling error.
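The Q computation above can be sketched as follows (the chi-square critical value is hard-coded from a table):

```python
es = [-0.33, 0.32, 0.39, 0.31, 0.17, 0.64, -0.33, 0.15, -0.02, 0.00]
w = [11.91, 28.57, 58.82, 29.41, 13.89, 8.55, 9.80, 10.75, 83.33, 14.93]

sum_w = sum(w)
sum_w_es = sum(wi * e for wi, e in zip(w, es))
sum_w_es2 = sum(wi * e * e for wi, e in zip(w, es))

# Q = sum(w*ES^2) - [sum(w*ES)]^2 / sum(w)
q = sum_w_es2 - sum_w_es ** 2 / sum_w   # about 14.76

CRIT_CHI2_DF9_05 = 16.92   # chi-square critical value, df = 9, p = .05
homogeneous = q < CRIT_CHI2_DF9_05      # fail to reject homogeneity
```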

Heterogeneous Distributions: What Now?
- Analyze excess between-study (ES) variability:
  - categorical variables with the analog to the one-way ANOVA
  - continuous variables and/or multiple variables with weighted multiple regression.
- Or assume the variability is random and fit a random effects model.

Analyzing Heterogeneous Distributions: The Analog to the ANOVA
Calculate the three sums for each subgroup of effect sizes, split by a grouping variable (e.g., random vs. nonrandom assignment):

Study  Grp     ES        w      w*ES    w*ES^2
1       1    -0.33    11.91     -3.93     1.30
2       1     0.32    28.57      9.14     2.93
3       1     0.39    58.82     22.94     8.95
4       1     0.31    29.41      9.12     2.83
5       1     0.17    13.89      2.36     0.40
6       1     0.64     8.55      5.47     3.50
Group 1 sums:         151.15    45.10    19.90
7       2    -0.33     9.80     -3.24     1.07
8       2     0.15    10.75      1.61     0.24
9       2    -0.02    83.33     -1.67     0.03
10      2     0.00    14.93      0.00     0.00
Group 2 sums:         118.81    -3.29     1.34

Analyzing Heterogeneous Distributions: The Analog to the ANOVA (continued)
Calculate a separate Q for each group:
  Q_GROUP1 = 19.90 - (45.10)^2 / 151.15 = 6.44
  Q_GROUP2 = 1.34 - (-3.29)^2 / 118.81 = 1.25

The sum of the individual group Qs is Q within:
  Q_W = Sum(Q_GROUP) = 6.44 + 1.25 = 7.69,   df = k - j = 10 - 2 = 8
where k is the number of effect sizes and j is the number of groups.

The difference between the Q total and the Q within is the Q between:
  Q_B = Q_T - Q_W = 14.76 - 7.69 = 7.07,   df = j - 1 = 1
where j is the number of groups.
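The partition into within-group and between-group Q can be reproduced with a small helper (studies 1-6 form group 1, studies 7-10 form group 2):

```python
def q_stat(es, w):
    """Homogeneity Q for one set of effect sizes:
    Q = sum(w*ES^2) - [sum(w*ES)]^2 / sum(w)."""
    sw = sum(w)
    swes = sum(wi * e for wi, e in zip(w, es))
    swes2 = sum(wi * e * e for wi, e in zip(w, es))
    return swes2 - swes ** 2 / sw

es = [-0.33, 0.32, 0.39, 0.31, 0.17, 0.64, -0.33, 0.15, -0.02, 0.00]
w = [11.91, 28.57, 58.82, 29.41, 13.89, 8.55, 9.80, 10.75, 83.33, 14.93]

q_total = q_stat(es, w)                                     # about 14.76
q_within = q_stat(es[:6], w[:6]) + q_stat(es[6:], w[6:])    # about 7.69
q_between = q_total - q_within                              # about 7.07
```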

Analyzing Heterogeneous Distributions: The Analog to the ANOVA (continued)
All we did was partition the overall Q into two pieces, a within-groups Q and a between-groups Q:

            Q       df    Critical value (p = .05)    p
Between     7.07     1     3.84                       p < .05
Within      7.69     8    15.51                       p > .05
Total      14.76     9    16.92                       p > .05

The grouping variable accounts for significant variability in effect sizes.

Mean ES for Each Group
The mean ES, standard error, and confidence intervals can be calculated for each group:
  Mean ES_GROUP1 = Sum(w*ES) / Sum(w) = 45.10 / 151.15 = 0.30
  Mean ES_GROUP2 = Sum(w*ES) / Sum(w) = -3.29 / 118.81 = -0.03

Analyzing Heterogeneous Distributions: Multiple Regression Analysis
The analog to the ANOVA is restricted to a single categorical between-studies variable. What if you are interested in a continuous variable, or in multiple between-study variables? Use weighted multiple regression analysis:
- as always, it is a weighted analysis
- you can use canned programs (e.g., SPSS, SAS): the parameter estimates are correct (R-squared, B weights, etc.), but the F-tests, t-tests, and associated probabilities are incorrect
- or use the Wilson/Lipsey SPSS macros, which give correct parameters and probability values.

Meta-Analytic Multiple Regression Results From the Wilson/Lipsey SPSS Macro (data set with 39 ESs)

***** Meta-Analytic Generalized OLS Regression *****

------- Homogeneity Analysis -------
               Q          df         p
Model      104.9704     3.0000    .0000
Residual   424.6276    34.0000    .0000

This is the partition of the total Q into the variance explained by the regression model and the variance left over (the residual).

------- Regression Coefficients -------
             B       SE    -95% CI   +95% CI        Z       P     Beta
Constant  -.7782   .0925    -.9595    -.5970   -8.4170   .0000   .0000
RANDOM     .0786   .0215     .0364     .1207    3.6548   .0003   .1696
TXVAR1     .5065   .0753     .3590     .6541    6.7285   .0000   .2933
TXVAR2     .1641   .0231     .1188     .2094    7.1036   .0000   .3298

Interpretation is the same as with ordinary multiple regression analysis. If the residual Q is significant, fit a mixed effects model.
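A weighted meta-regression can be sketched without a canned program. Below is a closed-form weighted least squares fit with a single dummy moderator (group membership from the earlier 10-study example; the variable names are mine). With one dummy predictor, the Q for the model equals the Q between from the analog to the ANOVA:

```python
es = [-0.33, 0.32, 0.39, 0.31, 0.17, 0.64, -0.33, 0.15, -0.02, 0.00]
w = [11.91, 28.57, 58.82, 29.41, 13.89, 8.55, 9.80, 10.75, 83.33, 14.93]
x = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]   # dummy moderator: group 1 vs. group 2

sw = sum(w)
xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw    # weighted mean of x
ybar = sum(wi * yi for wi, yi in zip(w, es)) / sw   # weighted mean ES

# Closed-form weighted least squares slope and intercept.
b = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, es))
     / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
a = ybar - b * xbar

q_total = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, es))
q_resid = sum(wi * (yi - (a + b * xi)) ** 2
              for wi, xi, yi in zip(w, x, es))       # about 7.69
q_model = q_total - q_resid                          # about 7.07
```

The slope b is simply the difference between the two group means, about -0.33, mirroring the ANOVA-style contrast.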

Review of Weighted Multiple Regression Analysis
- The analysis is weighted.
- The Q for the model indicates whether the regression model explains a significant portion of the variability across effect sizes.
- The Q for the residual indicates whether the remaining variability across effect sizes is homogeneous.
- If using a canned regression program, you must correct the probability values (see manuscript for details).

Random Effects Models
Don't panic! It sounds far worse than it is. Three reasons to use a random effects model:
- The total Q is significant and you assume that the excess variability across effect sizes derives from random differences across studies (sources you cannot identify or measure).
- The Q within from an analog to the ANOVA is significant.
- The Q residual from a weighted multiple regression analysis is significant.

The Logic of a Random Effects Model
- The fixed effects model assumes that all of the variability between effect sizes is due to sampling error.
- The random effects model assumes that the variability between effect sizes is due to sampling error plus variability in the population of effects (unique differences in the set of true population effect sizes).

The Basic Procedure of a Random Effects Model
- The fixed effects model weights each study by the inverse of the sampling variance:
  w_i = 1 / se_i^2
- The random effects model weights each study by the inverse of the sampling variance plus a constant that represents the variability across the population effects (the random effects variance component, v_theta):
  w_i = 1 / (se_i^2 + v_theta)
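The two weighting schemes differ by a single term in the denominator, which a short sketch makes explicit:

```python
def fixed_weight(se_i):
    """Fixed effects: w_i = 1 / se_i^2."""
    return 1 / se_i ** 2

def random_weight(se_i, v_theta):
    """Random effects: w_i = 1 / (se_i^2 + v_theta), where v_theta
    is the random effects variance component."""
    return 1 / (se_i ** 2 + v_theta)
```

When v_theta is zero the two weights coincide; any positive v_theta shrinks every weight, and it shrinks the largest (most precise) studies proportionally the most.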

How To Estimate the Random Effects Variance Component
The random effects variance component is based on Q. The formula is:
  v_theta = [Q_T - (k - 1)] / [Sum(w) - Sum(w^2) / Sum(w)]

Calculation of the Random Effects Variance Component
Calculate a new variable that is w squared, and sum it:

Study     ES        w      w*ES    w*ES^2        w^2
1       -0.33    11.91     -3.93     1.30     141.73
2        0.32    28.57      9.14     2.93     816.30
3        0.39    58.82     22.94     8.95    3460.16
4        0.31    29.41      9.12     2.83     865.07
5        0.17    13.89      2.36     0.40     192.90
6        0.64     8.55      5.47     3.50      73.05
7       -0.33     9.80     -3.24     1.07      96.12
8        0.15    10.75      1.61     0.24     115.63
9       -0.02    83.33     -1.67     0.03    6944.39
10       0.00    14.93      0.00     0.00     222.76
Sum              269.96    41.82    21.24   12,928.11

- The total Q for these data was 14.76.
- k is the number of effect sizes (10).
- Sum(w) = 269.96; Sum(w^2) = 12,928.11.

  v_theta = [14.76 - (10 - 1)] / [269.96 - 12,928.11/269.96] = 5.76 / (269.96 - 47.89) = 0.026

Rerun Analysis with New Inverse Variance Weight
- Add the random effects variance component to the variance associated with each ES and calculate a new weight:
  w_i = 1 / (se_i^2 + v_theta)
- Rerun the analysis.
- Congratulations! You have just performed a very complex statistical analysis.
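This method-of-moments estimate of the variance component takes only a few lines:

```python
w = [11.91, 28.57, 58.82, 29.41, 13.89, 8.55, 9.80, 10.75, 83.33, 14.93]
q_total = 14.76   # total Q from the homogeneity analysis
k = len(w)        # 10 effect sizes

sum_w = sum(w)                        # 269.96
sum_w2 = sum(wi ** 2 for wi in w)     # about 12,927
v_theta = (q_total - (k - 1)) / (sum_w - sum_w2 / sum_w)   # about 0.026
```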

Random Effects Variance Component for the Analog to the ANOVA and Regression Analysis
- The Q between or Q residual replaces the Q total in the formula.
- The denominator gets a little more complex and relies on matrix algebra. However, the logic is the same.
- The SPSS macros perform the calculation for you.

SPSS Macro Output with Random Effects Variance Component

------- Homogeneity Analysis -------
               Q          df         p
Model      104.9704     3.0000    .0000
Residual   424.6276    34.0000    .0000

------- Regression Coefficients -------
             B       SE    -95% CI   +95% CI        Z       P     Beta
Constant  -.7782   .0925    -.9595    -.5970   -8.4170   .0000   .0000
RANDOM     .0786   .0215     .0364     .1207    3.6548   .0003   .1696
TXVAR1     .5065   .0753     .3590     .6541    6.7285   .0000   .2933
TXVAR2     .1641   .0231     .1188     .2094    7.1036   .0000   .3298

------- Estimated Random Effects Variance Component -------
v = .0475

The variance component is not included in the model above, which is a fixed effects model. The random effects variance component here is based on the residual Q. Add this value to each ES variance (the SE squared), recalculate w, and rerun the analysis with the new w.

Comparison of Random Effects with Fixed Effects Results
- The biggest difference you will notice is in the significance levels and confidence intervals:
  - confidence intervals will get bigger
  - effects that were significant under a fixed effects model may no longer be significant.
- Random effects models are therefore more conservative.

Review of Meta-Analytic Data Analysis
- Transformations, Adjustments and Outliers
- The Inverse Variance Weight
- The Mean Effect Size and Associated Statistics
- Homogeneity Analysis
- Fixed Effects Analysis of Heterogeneous Distributions
  - Fixed Effects Analog to the one-way ANOVA
  - Fixed Effects Regression Analysis
- Random Effects Analysis of Heterogeneous Distributions
  - Mean Random Effects ES and Associated Statistics
  - Random Effects Analog to the one-way ANOVA
  - Random Effects Regression Analysis
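The conservativeness of the random effects model can be seen directly in the running 10-study example: adding the variance component (about 0.026) to each study's sampling variance shrinks the total weight, so the standard error of the mean, and hence the confidence interval, grows:

```python
import math

w_fixed = [11.91, 28.57, 58.82, 29.41, 13.89, 8.55, 9.80, 10.75, 83.33, 14.93]
v_theta = 0.026   # random effects variance component from the example

# Recover each study's sampling variance from its fixed effects weight,
# then build the corresponding random effects weight.
w_random = [1 / (1 / wi + v_theta) for wi in w_fixed]

se_fixed = math.sqrt(1 / sum(w_fixed))    # about 0.061
se_random = math.sqrt(1 / sum(w_random))  # about 0.086
# The random effects confidence interval is wider by the ratio of SEs.
```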