2. RELATIONSHIP BETWEEN A QUALITATIVE AND A QUANTITATIVE VARIABLE

Design and Data Analysis in Psychology II. Susana Sanduvete Chaves, Salvador Chacón Moscoso. 7/09/06.

1. INTRODUCTION

You may examine gender differences in average salary, or racial (white versus black) differences in average annual income. The variable to be tested (salary) should be interval or ratio (quantitative), whereas the gender or race variable should be binary (qualitative).

These analyses are used to compare group means (independent or dependent groups). The term independent is used because the groups are formed randomly (independent samples): two separate sets of independent and identically distributed observations are obtained.

The term dependent is used because the groups are paired with respect to some measure (dependent samples). A dependent-samples (or "paired") comparison consists of a sample of matched pairs of similar units, or of one group of units that has been tested twice.

A typical example of repeated measures would be where subjects are tested prior to a treatment, say for high blood pressure, and the same subjects are tested again after treatment with a blood-pressure-lowering medication.

Three steps to carry out:
1. Testing statistical assumptions.
2. Statistical tests of significance.
3. Statistical power.

2. STEP 1. TESTING ASSUMPTIONS

Quantitative variable: the dependent variable must be interval or ratio.

Normal distribution: the populations from which the samples are selected must be normal (tests for normality: Kolmogorov-Smirnov D + histogram of predicted Z scores).

The Central Limit Theorem says, however, that the distributions of the sample means are approximately normal when N is large. In practice, when n1 + n2 ≥ 30, you do not need to worry too much about the normality assumption.

Independence: the observations within each sample must be independent (Durbin-Watson D + scatter plot of X against the residuals).

Homoscedasticity (homogeneity of variance): the populations from which the samples are selected must have equal variances. In this kind of study it is the most important assumption to take into account (Levene's test, F-max + scatter plot of predicted values against absolute errors).

To test homoscedasticity:
- Levene's test (SPSS).
- F-max test: Fmax = S²(largest) / S²(smallest).
  Fmax ≤ F(α, k, n − 1): the null hypothesis is accepted; the homogeneity of variance assumption has been satisfied.
  Fmax > F(α, k, n − 1): the null hypothesis is rejected; the homogeneity of variance assumption has been violated.
  * k = number of groups (number of variances); n = number of participants in each group (equal group sizes are assumed).
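Both homoscedasticity checks can be sketched in Python with scipy. The scores below are hypothetical (the slide's own data table did not survive transcription); they only illustrate the mechanics of the two tests.

```python
import numpy as np
from scipy import stats

# Hypothetical depression scores for two nationality groups; the slide's
# own data table did not survive transcription.
spanish = np.array([4.0, 5.0, 6.0, 5.5, 4.5, 6.5, 5.0, 4.0, 5.5, 6.0])
japanese = np.array([3.0, 7.0, 2.5, 8.0, 4.0, 6.5, 3.5, 7.5, 5.0, 6.0])

# Levene's test: H0 is equal variances; a small p-value signals heteroscedasticity.
levene_stat, levene_p = stats.levene(spanish, japanese)

# Hartley's F-max: largest sample variance over the smallest (equal n assumed).
variances = sorted([spanish.var(ddof=1), japanese.var(ddof=1)])
fmax = variances[-1] / variances[0]
```

The F-max ratio is then compared with the tabled critical value F(α, k, n − 1), exactly as on the slide.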

STEP 1. TESTING ASSUMPTIONS: EXAMPLE

Test the homogeneity of variance assumption in the following data (A = nationality; Y = level of depression; α = 0.05): a1 = Spanish; a2 = Japanese. [Data table: Y and Y² scores per group with their column sums; the values did not survive transcription.]

Fmax = S²(largest) / S²(smallest); F(α, k, n − 1) = F(0.05, 2, n − 1) = 4.43. Fmax ≤ F(α, k, n − 1), so the null hypothesis is accepted: the homogeneity of variance assumption has been satisfied.

3. STEP 2. STATISTICAL TESTS OF SIGNIFICANCE

3.1. Non-parametric tests of significance: used when the assumptions are rejected.
3.2. Parametric tests of significance: used when the assumptions are accepted.

3.1.1. Wilcoxon T test

- When the population cannot be assumed to be normally distributed.
- Quantitative dependent variable.
- Two paired groups.
- Calculation: T+ = Σ R+i (sum of the ranks of the positive differences); T− = Σ R−i (sum of the ranks of the negative differences); T = the smaller of these two rank sums.

Conclusions:
T ≥ T(α, N): the null hypothesis is accepted. The variables are not related; there are no differences between the groups.
T < T(α, N): the null hypothesis is rejected. The variables are related; there are differences between the groups.
* N = number of pairs, removing those whose difference is 0.
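A minimal sketch of the Wilcoxon T test with scipy, using hypothetical paired scores (the twin data in the example that follows did not survive transcription). scipy's statistic is the smaller of the two rank sums, i.e. the slide's T, with zero differences dropped.

```python
from scipy import stats

# Hypothetical social-skills scores for 9 twin pairs; the slide's data table
# did not survive transcription.
kindergarten = [18, 15, 20, 17, 16, 19, 14, 21, 16]  # a1
home = [15, 16, 14, 15, 17, 13, 15, 14, 15]          # a2

# statistic = the smaller of T+ and T- (pairs with a zero difference dropped).
res = stats.wilcoxon(kindergarten, home)
```

Here the negative differences carry the smaller rank sum, so `res.statistic` is T−; it would be compared with the tabled T(α, N).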

3.1.1. Wilcoxon T test: example

The table below shows the social-skills scores of twins, one of whom attended kindergarten (a1) for a course while the other stayed at home (a2). Are there statistical differences between the two groups in social skills? (α = 0.05). [Data table with columns a1, a2, d, rank, and sign; the values did not survive transcription.]

T+ = 36; T− = 9; T = 9. T(α, N) = T(0.05, 9) = 5. T > T(α, N): 9 > 5, so the null hypothesis is accepted. The variables are not related; there are no statistical differences between the groups in social skills.

3.1.2. Mann-Whitney U

- Quantitative or ordinal dependent variable.
- Two independent groups.
- Calculation:
  U1 = n1·n2 + n1(n1 + 1)/2 − R1
  U2 = n1·n2 + n2(n2 + 1)/2 − R2
  U = the smaller of the two; n1 = smallest sample, n2 = largest sample.

Conclusions:
U ≥ U(α, n1, n2): the null hypothesis is accepted. The variables are not related; there are no statistical differences between the groups.
U < U(α, n1, n2): the null hypothesis is rejected. The variables are related; there are statistical differences between the groups.

3.1.2. Mann-Whitney U: example

We want to study whether two different reading methods lead to different reading qualifications (α = 0.05). [Data table: qualifications under the traditional method (a1) and the new method (a2).]

Type of method: qualifications (ranks in brackets) and rank sums:
Traditional (a1): 80 (4), 85 (5), …5 (1), 70 (3), 90 (6) → R1 = 19
New (a2): 95 (8), 100 (9), 93 (7), …0 (10), 45 (2) → R2 = 36

U1 = n1·n2 + n1(n1 + 1)/2 − R1 = 5·5 + 5·6/2 − 19 = 21
U2 = n1·n2 + n2(n2 + 1)/2 − R2 = 5·5 + 5·6/2 − 36 = 4
U = 4

U > U(0.05, 5, 5): 4 > 2, so the null hypothesis is accepted. The variables are not related; there are no statistical differences in reading qualifications depending on the reading method.
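The example can be checked against scipy. Two values are garbled in the source (only their ranks, 1 and 10, survive), so the values 15 and 110 below are assumptions made purely for illustration.

```python
from scipy import stats

# Reconstruction of the reading-methods example; 15 and 110 are assumed values
# (only their ranks, 1 and 10, survive in the slide).
traditional = [80, 85, 15, 70, 90]    # ranks 4, 5, 1, 3, 6 -> R1 = 19
new_method = [95, 100, 93, 110, 45]   # ranks 8, 9, 7, 10, 2 -> R2 = 36

# Manual U, as on the slide: U_i = n1*n2 + n_i*(n_i + 1)/2 - R_i; keep the smaller.
n1 = n2 = 5
r1 = sum([4, 5, 1, 3, 6])
r2 = sum([8, 9, 7, 10, 2])
u1 = n1 * n2 + n1 * (n1 + 1) // 2 - r1   # 21
u2 = n1 * n2 + n2 * (n2 + 1) // 2 - r2   # 4
u = min(u1, u2)                          # 4

# scipy reports U for the first sample, which here equals min(U1, U2).
res = stats.mannwhitneyu(traditional, new_method, alternative="two-sided")
```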

3.1.3. Kruskal-Wallis H

- Quantitative or ordinal dependent variable.
- k independent groups.
- Formula: H = [12 / (N(N + 1))] · Σj (R²j / nj) − 3(N + 1)

Conclusions:
H ≤ H(α, k, n1, n2, n3): the null hypothesis is accepted. The variables are not related; there are no statistical differences between the groups.
H > H(α, k, n1, n2, n3): the null hypothesis is rejected. The variables are related; there are statistical differences between the groups.

3.1.3. Kruskal-Wallis H: example

Are there statistical differences in levels of authoritarianism between teachers from three different types of school? (α = 0.05): state school (a1), private school (a2), state-subsidized school (a3). [Data table with authoritarianism scores and their ranks in brackets.] Rank sums: R1 = 22 (n1 = 5), R2 = 37 (n2 = 5), R3 = 46 (n3 = 4).

H = [12 / (14·15)] · [(22)²/5 + (37)²/5 + (46)²/4] − 3(14 + 1) = 6.4

H > H(α, k, n1, n2, n3): 6.4 > 5.99, so the null hypothesis is rejected. The variables are related; there are statistical differences in levels of authoritarianism between teachers from different schools.

3.2. STEP 2. PARAMETRIC TESTS OF SIGNIFICANCE

3.2.1. Linear regression analysis: qualitative independent variable with:
- Two groups: lesson 3 (simple linear regression).
- More than two groups: lesson 4 (multiple linear regression).
3.2.2. T-tests (independent and dependent samples).
3.2.3. One-way ANOVA F.
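The Kruskal-Wallis H of the example above can be reproduced from the rank sums alone. Since ranks 1..14 sum to 105, R1 is recovered as 105 − 37 − 46 = 22; the group sizes below are reconstructed so that H matches the slide's 6.4.

```python
from scipy import stats

# Rank sums recovered from the slide (R1 = 105 - 37 - 46 = 22).
n = [5, 5, 4]          # group sizes (reconstructed)
r = [22, 37, 46]       # rank sums for the three school types
big_n = sum(n)         # 14 teachers in total

# Kruskal-Wallis H, as in the slide's formula.
h = 12 / (big_n * (big_n + 1)) * sum(ri ** 2 / ni for ri, ni in zip(r, n)) - 3 * (big_n + 1)

# Large-sample critical value: chi-square with k - 1 = 2 df at alpha = 0.05.
crit = stats.chi2.ppf(0.95, df=2)
```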

3.2.2. T-tests

- Two groups.
- Quantitative dependent variable (interval or ratio).
- Different situations:
  1. Independent two-sample t-test:
     - Equal sample sizes, equal variances in the distribution.
     - Equal sample sizes, unequal variances in the distribution.
     - Unequal sample sizes, equal variances in the distribution.
     - Unequal sample sizes, unequal variances in the distribution.
  2. Dependent t-test for paired samples.

T-tests: significance in all cases:
|t| > t(theoretical): the null hypothesis is rejected. The model is valid; the slope is statistically different from 0; there is, therefore, a relationship between the variables.
|t| ≤ t(theoretical): the null hypothesis is accepted. The model is not valid; the slope is statistically equal to 0; there is, therefore, no relationship between the variables.

3.2.2. T-tests: two independent samples. Equal sample sizes, equal variances

t = (X̄1 − X̄2) / (S_X1X2 · √(2/n))

where the grand (pooled) standard deviation is S_X1X2 = √[(S²_X1 + S²_X2) / 2]. The degrees of freedom for this test are N − 2 = 2n − 2.

Example: a researcher wants to study the effect of depriving rats of water on the time they spend covering a labyrinth. 60 rats were assigned randomly to a …-hour deprivation group (sample A) and a 4-hour deprivation group (sample B). The 30 rats in group A spent a mean of 5.9 minutes covering the labyrinth, whereas the 30 rats in group B spent a mean of 6.4 minutes. The standard deviations were 0.9 and 0.3, respectively. Assuming that the two distributions have the same variance, did statistical differences between the groups exist? (α = 0.05)

Equal sample sizes, equal variances: example

S_X1X2 = √[(0.9² + 0.3²)/2] = √0.45 ≈ 0.67
t = (5.9 − 6.4) / (0.67 · √(2/30)) ≈ −2.89
|t| > t(α, N − 2): 2.89 > t(0.05, 58) ≈ 2.00, so the null hypothesis is rejected. There were statistical differences between the groups.

T-tests: two independent samples. Equal sample sizes, unequal variances

t = (X̄1 − X̄2) / √(S²_X1/n + S²_X2/n)

The standard deviation is not pooled, and the degrees of freedom for this test are

d.f. = (S²_X1/n1 + S²_X2/n2)² / [ (S²_X1/n1)²/(n1 − 1) + (S²_X2/n2)²/(n2 − 1) ]
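The pooled version of the rats example can be checked directly from the summary statistics (assuming the slide's standard deviations are sample standard deviations):

```python
from scipy import stats

# Summary statistics from the rats example: 30 rats per group, means 5.9 and
# 6.4 minutes, standard deviations 0.9 and 0.3 (assumed to be sample SDs).
res = stats.ttest_ind_from_stats(
    mean1=5.9, std1=0.9, nobs1=30,
    mean2=6.4, std2=0.3, nobs2=30,
    equal_var=True,   # pooled-variance t-test, df = n1 + n2 - 2 = 58
)
```

The statistic is about −2.89, so |t| exceeds t(0.05, 58) ≈ 2.00 and the null hypothesis is rejected, matching the slide.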

Equal sample sizes, unequal variances: example

If, in the previous example about rats, we assume that the variances of the two distributions are unequal, would statistical differences between the groups exist? (α = 0.05)

t = (5.9 − 6.4) / √(0.9²/30 + 0.3²/30) ≈ −2.89
d.f. = (0.9²/30 + 0.3²/30)² / [ (0.9²/30)²/(30 − 1) + (0.3²/30)²/(30 − 1) ] ≈ 35.4
|t| > t(α, d.f.): 2.89 > t(0.05, 35.4) ≈ 2.03, so the null hypothesis is rejected. There were statistical differences between the groups.
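The same comparison without assuming equal variances (Welch's t-test), with the Welch-Satterthwaite degrees of freedom computed as in the slide's formula:

```python
from scipy import stats

# Same rats data, without assuming equal population variances (Welch's t-test).
res = stats.ttest_ind_from_stats(
    mean1=5.9, std1=0.9, nobs1=30,
    mean2=6.4, std2=0.3, nobs2=30,
    equal_var=False,
)

# Welch-Satterthwaite degrees of freedom, following the slide's formula.
v1, v2, n1, n2 = 0.9 ** 2 / 30, 0.3 ** 2 / 30, 30, 30
df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
```

With equal group sizes the t statistic is unchanged (about −2.89); only the degrees of freedom shrink, to about 35.4.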

3.2.2. T-tests (two independent samples). Unequal sample sizes, equal variances

t = (X̄1 − X̄2) / (S_X1X2 · √(1/n1 + 1/n2))

where S²_X1X2 = [ (n1 − 1)·S²_X1 + (n2 − 1)·S²_X2 ] / (n1 + n2 − 2), and the degrees of freedom for this test are n1 + n2 − 2.

Example: a researcher wanted to study relaxation and flexibility. 9 gymnasts were assigned randomly to two groups. The first group (experimental, n = 5) participated in a 4-week training program with relaxation and flexibility exercises; the second group (control, n = 4) followed the same program, but without relaxation exercises. [The flexibility measures did not survive transcription.] The variances are 58.7 in E and 4.67 in C, and the populations present equal variances. Are there statistical differences between the groups? (α = 0.05)

Unequal sample sizes, equal variances: example

S²_X1X2 = [ (5 − 1)·58.7 + (4 − 1)·4.67 ] / (5 + 4 − 2) = 248.81/7 ≈ 35.54
t = (X̄E − X̄C) / (S_X1X2 · √(1/5 + 1/4))
|t| > t(α, n1 + n2 − 2): |t| > t(0.05, 7) = 2.365, so the null hypothesis is rejected. There were statistical differences between the groups.

3.2.2. T-tests (two independent samples). Unequal sample sizes, unequal variances

t = (X̄1 − X̄2) / √(S²_X1/n1 + S²_X2/n2)

where the standard deviation is not pooled, and

d.f. = (S²_X1/n1 + S²_X2/n2)² / [ (S²_X1/n1)²/(n1 − 1) + (S²_X2/n2)²/(n2 − 1) ]

Example: supposing that, in the previous example about gymnasts, the populations present unequal variances, are there differences between the groups? (α = 0.05)

Unequal sample sizes, unequal variances: example

t = (X̄E − X̄C) / √(58.7/5 + 4.67/4)

d.f. = (58.7/5 + 4.67/4)² / [ (58.7/5)²/(5 − 1) + (4.67/4)²/(4 − 1) ]

|t| > t(α, d.f.), so the null hypothesis is rejected. There were statistical differences between the groups.

3.2.2. T-tests (two dependent samples). Dependent t-test for paired samples

t = (D̄ − μ0) / (S_D / √n), with D̄ = Σ Dj / n and S_D = √[ Σ (Dj − D̄)² / (n − 1) ]

The constant μ0 is non-zero if you want to test whether the average of the differences is significantly different from μ0. The degrees of freedom used are n − 1.

Example: a researcher wanted to study relaxation and flexibility. Gymnasts were measured before and after participating in a 4-week training program with relaxation and flexibility exercises. [The before (B) and after (A) flexibility measures did not survive transcription; the worked computation uses n = 5 pairs.] Are there statistically significant differences between the measures obtained before and after the program? (α = 0.05)

Dependent t-test for paired samples: example

[Worked table with columns B, A, Dj, and (Dj − D̄)²; the values did not survive transcription.]

t = D̄ / (S_D / √n) = 3.57
|t| > t(α, n − 1): 3.57 > t(0.05, 4) = 2.776, so the null hypothesis is rejected. There were statistical differences between the measures obtained before and after the program.
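A paired t-test of this kind is one call in scipy. The before/after scores below are hypothetical stand-ins for the lost data table, with the same structure (5 pairs, df = 4):

```python
from scipy import stats

# Hypothetical flexibility scores for 5 gymnasts measured before and after
# the program; the slide's own data table did not survive transcription.
before = [20, 18, 25, 22, 19]
after = [24, 21, 27, 27, 22]

# Paired t-test on the differences, df = n - 1 = 4.
res = stats.ttest_rel(after, before)
```

For these invented scores t ≈ 6.67 with p < 0.05, so the null hypothesis would be rejected, as in the slide's example.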

3.2.3. One-way ANOVA F

While the t-test is limited to comparing the means of two groups, one-way ANOVA can compare more than two groups. The t-test is therefore a special case of one-way ANOVA.

Assumptions:
- Factor: qualitative independent variable (number of values = number of conditions).
- Quantitative dependent variable (interval or ratio).
- Dependent variable normally distributed.
- Independence of error effects.
- Homogeneity of variance.
- Sphericity (in longitudinal studies).

Types of hypothesis:

General hypothesis: the purpose is to test for significant differences between group means, and this is done by analyzing the variances. It is a general decision; there is no specification of which pairs of groups are significantly different (concrete conditions).

Specific hypotheses: there are specifications of which pairs of groups are significantly different (concrete conditions):
- Post hoc: first the general hypothesis is tested; then, if the null hypothesis is rejected, specific hypotheses are studied.
- A priori: specific hypotheses are studied directly.
- One-tailed vs. two-tailed hypotheses.

Three steps to carry out:
1st. One-way ANOVA (general hypothesis). ANOVA is an omnibus test statistic and cannot tell you which specific groups were significantly different from each other, only that at least two groups were. If the null hypothesis is rejected:
2nd. Post-hoc tests (specific hypotheses), to determine which specific groups differed from each other.
3rd. STEP 3. Statistical power: goodness of fit.

ANOVA table for the one-way case:

Source of variation               Sum of Squares   df          Mean Square         F
Treatments (between groups)       SS_B             K − 1       MS_B = SS_B/df_B    MS_B/MS_W
Residuals/error (within groups)   SS_W             K(n − 1)    MS_W = SS_W/df_W
Total                             SS_T             N − 1

Logic of ANOVA: the observed F is compared with F_crit(α = 0.05; K − 1, K(n − 1)).

One-way ANOVA: example

Consider an experiment to study the effect of three different levels of some factor on a response (e.g. three types of fertilizer on plant growth). If we had 6 observations for each level, we could write the outcome of the experiment in a table like the following, where a1, a2 and a3 are the three levels of the factor being studied (α = 0.05):

a1: 6, 8, 4, 5, 3, 4
a2: 8, 12, 9, 11, 6, 8
a3: 13, 9, 11, 8, 7, 12

The null hypothesis, denoted H0, for the overall F-test in this experiment is that all three levels of the factor produce the same response, on average. To calculate the F-ratio:

Step 1: calculate the mean within each group:

ȳ1 = (6 + 8 + 4 + 5 + 3 + 4)/6 = 5
ȳ2 = (8 + 12 + 9 + 11 + 6 + 8)/6 = 9
ȳ3 = (13 + 9 + 11 + 8 + 7 + 12)/6 = 10

Step 2: calculate the overall mean: ȳ = (5 + 9 + 10)/3 = 8, where a is the number of groups (here a = 3, with equal group sizes).

Step 3: calculate the "between-group" sum of squares:

SS_B = n·Σj (ȳj − ȳ)² = 6·[(5 − 8)² + (9 − 8)² + (10 − 8)²] = 84

where n is the number of data values per group. The between-group degrees of freedom are the number of groups minus one, df_b = 3 − 1 = 2, so the between-group mean square value is MS_B = 84/2 = 42.

Step 4: calculate the "within-group" sum of squares. Begin by centering the data in each group (subtracting each group's mean). The within-group sum of squares is the sum of the squares of all 18 centered values: SS_W = 16 + 24 + 28 = 68. The within-group degrees of freedom are df_W = a(n − 1) = 3(6 − 1) = 15, so the within-group mean square value is MS_W = 68/15 ≈ 4.5.

Step 5: the F-ratio is F = MS_B/MS_W = 42/4.5 ≈ 9.3. F_crit(2, 15) = 3.68 at α = 0.05. Since F = 9.3 > 3.68, the results are significant at the 5% significance level. One would reject the null hypothesis, concluding that there is strong evidence that the expected values in the three groups differ.

Source of variation   SS     df   MS    F
Between               84     2    42    9.3
Within                68     15   4.5
Total                 152    17
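The five steps above collapse into one scipy call. The data are the reconstructed table from the example (it reproduces the surviving statistics: group means 5, 9 and 10, SS_B = 84, SS_W = 68, F ≈ 9.3):

```python
from scipy import stats

# Reconstructed data table from the worked example.
a1 = [6, 8, 4, 5, 3, 4]
a2 = [8, 12, 9, 11, 6, 8]
a3 = [13, 9, 11, 8, 7, 12]

# One-way ANOVA: F = MS_between / MS_within with df = (2, 15).
res = stats.f_oneway(a1, a2, a3)
```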

2nd step. Post-hoc tests (specific hypotheses): to determine which specific groups differed from each other. Post hoc tests can only be used when the 'omnibus' ANOVA found a significant effect; if the F-value for a factor turns out non-significant, you cannot go further with the analysis. This 'protects' the post hoc tests from being (ab)used too liberally: they are designed to avoid the inflation of Type I error. Two such tests: Tukey and Scheffé.

HSD Tukey post-hoc test: comparison of all pairs. Limitation: you can only compare pairs. HSD test: Tukey's Honestly Significant Difference.

HSD = q(α; df_W, k) · √(MS_W / n)

|ȳi − ȳj| ≥ HSD_tukey: there are statistically significant differences between the groups.
|ȳi − ȳj| < HSD_tukey: there are no statistically significant differences between the groups.

Example: in the previous example, are there statistically significant differences between groups 1 and 3? (α = 0.05)

Post-hoc test: example

MS_W = 4.5; HSD = q(α; k(n − 1), k) · √(MS_W/n) = q(0.05; 3(6 − 1), 3) · √(4.5/6) = 3.78

|ȳ1 − ȳ3| = |5 − 10| = 5 > 3.78, so there are statistically significant differences between groups 1 and 3.

Scheffé post-hoc test: more conservative; recommended when few contrasts are developed.

|Σj aj·ȳj| ≥ √[(k − 1)·F(α; k − 1, k(n − 1))] · √[MS_e · Σj (a²j/nj)]: there are statistically significant differences between the groups.
|Σj aj·ȳj| < √[(k − 1)·F(α; k − 1, k(n − 1))] · √[MS_e · Σj (a²j/nj)]: there are no statistically significant differences between the groups.
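The Tukey comparison can be reproduced from the studentized range distribution in scipy. Note that scipy's quantile gives an HSD of about 3.2 rather than the slide's tabled 3.78; both values lead to the same conclusions for these group means.

```python
import math

from scipy import stats

# Group means and MS_W from the worked ANOVA example (k = 3 groups,
# n = 6 observations per group, df_W = 15).
means = {"a1": 5.0, "a2": 9.0, "a3": 10.0}
k, n, df_w = 3, 6, 15
ms_w = 68 / 15

# HSD = q(alpha; k, df_W) * sqrt(MS_W / n), q from the studentized range.
q_crit = stats.studentized_range.ppf(0.95, k, df_w)
hsd = q_crit * math.sqrt(ms_w / n)

# A pair of groups differs significantly when |mean_i - mean_j| >= HSD.
sig_1_vs_3 = abs(means["a1"] - means["a3"]) >= hsd   # difference of 5
sig_2_vs_3 = abs(means["a2"] - means["a3"]) >= hsd   # difference of 1
```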

One-way ANOVA F post-hoc tests: example (Scheffé)

Are there statistical differences between groups 1 and 2 (combined) and group 3?

Contrast: ψ = (ȳ1 + ȳ2)/2 − ȳ3, with coefficients a1 = 1/2, a2 = 1/2, a3 = −1.

Critical value: √[(k − 1)·F(α; k − 1, k(n − 1))] · √[MS_w · Σj (a²j/nj)] = √[(3 − 1)·F(0.05; 2, 15)] · √[4.5·(1/4 + 1/4 + 1)/6]

|ψ| = |(5 + 9)/2 − 10| = 3 does not exceed the critical value, so there are no statistical differences between groups 1 and 2 (combined) and group 3.

One-way ANOVA example (continued)

The p-value for this test is about 0.002. After performing the F-test, it is common to carry out some "post-hoc" analysis of the group means. In this case, the first two group means differ by 4 units, the first and third group means differ by 5 units, and the second and third group means differ by only 1 unit. Thus the first group is strongly different from the other groups, as the mean difference is several times the standard error, so we can be highly confident that the population mean of the first group differs from the population means of the other groups. However, there is no evidence that the second and third groups have different population means from each other, as their mean difference of one unit is comparable to the standard error. Note: F(x, y) denotes an F-distribution with x degrees of freedom in the numerator and y degrees of freedom in the denominator.

A PRIORI COMPARISONS

They are used when the general hypothesis is not of interest; only specific hypotheses are set out. The Snedecor F (omnibus test) is not calculated. Types of a priori comparisons:

- Non-orthogonal contrasts: when many contrasts are done, the Type I error α (the probability of being wrong when rejecting the null hypothesis) increases. When c > k − 1 (c = number of contrasts), a correction of α is required; when c ≤ k − 1, some authors recommend the correction and others consider it unnecessary.
- Orthogonal contrasts: α does not increase, so a correction is not necessary (in post-hoc contrasts the correction is not necessary either, because the Scheffé and Tukey formulas already control this increase).

A priori comparisons

Formula to calculate the sum of squares of a contrast:

b_c = n·(Σj aj·ȳj)² / Σj a²j

A. Orthogonal contrasts. Characteristics:
- Their coefficients sum to 0.
- If we multiply the coefficients of two orthogonal contrasts pairwise and sum the results, we also obtain 0.
- Tendency (trend) contrasts can be carried out: e.g., contrasts can be linear, quadratic and cubic.
- An F is calculated for each contrast.

A. Orthogonal contrasts: there are k − 1 possible contrasts. Example of coefficients to study tendency (k = 4):

           a1   a2   a3   a4
Linear     −3   −1   +1   +3
Quadratic  +1   −1   −1   +1
Cubic      −1   +3   −3   +1

[Figure: shapes of the linear, quadratic and cubic trends.]

A. Orthogonal contrasts. Example 1: how to create orthogonal coefficients for five groups (a1 to a5), with four contrasts (Contrast 1 to Contrast 4). [Coefficient tables did not survive transcription.]

A. Orthogonal contrasts. Example 2: how to create orthogonal coefficients for five groups (a1 to a5), with four contrasts (Contrast 1 to Contrast 4). [Coefficient tables did not survive transcription.]

A. Orthogonal contrasts. Example (obtained from López, J., Trigo, M. E. y Arias, M. A. (1999). Diseños experimentales. Planificación y análisis. Sevilla: Kronos): knowing that SS_W = 18, do these data (three groups, n = 6 per group) present a linear or a quadratic tendency? (α = 0.05)

Group means: ȳ1 = 78/6 = 13; ȳ2 = 48/6 = 8; ȳ3 = 24/6 = 4.

b_linear = n·(Σj aj·ȳj)² / Σj a²j = 6·[(−1)(13) + (0)(8) + (+1)(4)]² / [(−1)² + 0² + (+1)²] = 6·81/2 = 243

b_quadratic = 6·[(+1)(13) + (−2)(8) + (+1)(4)]² / [(+1)² + (−2)² + (+1)²] = 6·1/6 = 1

Source       SS    df   MS     F
Between      244   2
  linear     243   1    243    202.5
  quadratic  1     1    1      0.8
Within       18    15   1.2
Total        262   17

F(α; 1, k(n − 1)) = F(0.05; 1, 15) = 4.54.
F_linear > 4.54: there is a linear tendency.
F_quadratic < 4.54: there is no quadratic tendency.

B. Non-orthogonal contrasts

When many contrasts are done, the Type I error for the group of contrasts increases:

Comparison   Type I error per contrast   Type I error for the group of contrasts
C1           0.05                        0.05
C2           0.05                        0.05 × 2 = 0.10
C3           0.05                        0.05 × 3 = 0.15

Bonferroni: a method for controlling the increase of α:

αc = α / C, where α is usually 0.05 or 0.01 and C is the number of contrasts.

Comparison   Type I error per contrast   Type I error for the group of contrasts
C1           0.05/3                      0.05/3
C2           0.05/3                      (0.05/3) × 2 = 0.033
C3           0.05/3                      (0.05/3) × 3 = 0.05

The highest Type I error for the group of contrasts is α = 0.05.
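The two tables above reduce to one line of arithmetic each, sketched here for C = 3 contrasts at an overall α of 0.05:

```python
# Bonferroni correction for C = 3 planned contrasts at an overall alpha of 0.05.
alpha, c = 0.05, 3

# Without correction, the familywise error bound grows linearly with C.
uncorrected_familywise = alpha * c              # 0.15 for three contrasts

# Bonferroni: test each contrast at alpha / C so the family stays at alpha.
alpha_per_contrast = alpha / c                  # about 0.0167 per contrast
corrected_familywise = alpha_per_contrast * c   # bounded again by 0.05
```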

One-way ANOVA F in longitudinal studies

In a longitudinal study, each participant provides at least one score for each condition. The most important assumption to take into account is sphericity.

Steps:
1. Testing assumptions.
2. One-way ANOVA F.
3. Effect size.

Step 1. Testing assumptions:
- Normality: not critical, because F is robust even when it is violated.
- Independence of errors: usually violated, because the measurements of the same person tend to be similar.
- Homoscedasticity: usually violated, because scores obtained closer in time are more similar.
Although the last two assumptions are violated, F can still be used if the relationship between the variances and covariances satisfies sphericity.

Step 2. One-way ANOVA F:

Source                     SS        df               MS                 F
Between subjects           SS_S      n − 1
Within subjects                      n(k − 1)
  A                        SS_A      k − 1            SS_A/df_A          MS_A/MS_SxA
  Subjects × A (error)     SS_SxA    (n − 1)(k − 1)   SS_SxA/df_SxA
Total                      SS_T      nk − 1

SS_A = n·Σj (ȳ.j − ȳ..)²; SS_S = k·Σi (ȳi. − ȳ..)²; SS_SxA = Σi Σj (yij − ȳi. − ȳ.j + ȳ..)²; SS_T = Σi Σj (yij − ȳ..)².

Step 2 uses a three-stage test, in order to try to avoid unnecessary calculations. The adjusted ε only has to be calculated if we cannot conclude from the first two stages and have to carry out the third one. ε (epsilon, a measurement of the degree to which the sphericity assumption is violated) multiplies the degrees of freedom of the theoretical F.

Stage 1: normal F (ε = 1), too easy to reject H0. If H0 is accepted, we do not have to do anything else; if H0 is rejected, accepting this conclusion would imply a high probability of committing a Type I error, so we carry out stage 2.
Stage 2: conservative F, ε = 1/(k − 1), too difficult to reject H0. If H0 is rejected, we do not have to do anything else; if H0 is accepted, we have to carry out stage 3.
Stage 3: F with adjusted ε, where 1/(k − 1) < ε < 1.

Example: we would like to study the effect that different revisions of some material (a1 = first revision; a2 = second revision; a3 = third revision) produce on memorization. [Data table: scores of 6 participants under the three conditions; the values did not survive transcription.]

[Worked computation: the condition means ȳ.j and participant means ȳi. are obtained from the data table, then SS_A = 6·Σj (ȳ.j − ȳ..)², SS_S = 3·Σi (ȳi. − ȳ..)², and SS_SxA = Σi Σj (yij − ȳi. − ȳ.j + ȳ..)². Most of the intermediate values did not survive transcription.]

Summary ANOVA table for the example (most sums of squares were lost in transcription; the observed F for A is 6.67):

Source                   SS   df   MS   F
Between subjects              5
Within subjects               12
  A                           2         6.67
  Subjects × A (error)        10
Total                         17

F(α; k − 1, (n − 1)(k − 1)) = F(0.05; 2, 10) = 4.10. Three-stage test:
Stage 1: normal F (ε = 1). F(0.05; 2·1, 10·1) = 4.10; the observed F > 4.10, so H0 is rejected and we have to carry out stage 2.
Stage 2: conservative F, ε = 1/(k − 1) = 1/(3 − 1) = 0.5. F(0.05; 2·0.5, 10·0.5) = F(0.05; 1, 5) = 6.61; the observed F > 6.61, so H0 is rejected and we do not have to carry out stage 3. This is the final decision.

4. STEP 3. STATISTICAL POWER

After analyzing the existence of a significant relationship between variables, we have to verify whether our conclusions are erroneous because of statistical power. The power of a statistical test is the probability that the test will reject a false null hypothesis (i.e. that it will not make a Type II error). As power increases, the chances of a Type II error decrease. The probability of a Type II error is referred to as the false-negative rate (β). Therefore, power is equal to 1 − β, which is equal to sensitivity.
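The critical values used in the three-stage test above can be reproduced with scipy's F quantiles; the observed F of 6.67 is the value recovered from the slide's table.

```python
from scipy import stats

# Staged critical values for the revisions example: k = 3 conditions,
# n = 6 participants, so df = (k - 1, (n - 1)(k - 1)) = (2, 10).
k, n, alpha = 3, 6, 0.05
f_obs = 6.67   # observed F for A, recovered from the slide

# Stage 1: normal F (epsilon = 1).
crit_normal = stats.f.ppf(1 - alpha, k - 1, (n - 1) * (k - 1))

# Stage 2: conservative F, epsilon = 1/(k - 1), which halves both df here.
eps = 1 / (k - 1)
crit_conservative = stats.f.ppf(1 - alpha, (k - 1) * eps, (n - 1) * (k - 1) * eps)
```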

STEP 3. STATISTICAL POWER

DECISION    H0 is true                    H0 is false
Accept H0   1 − α: level of confidence    β: Type II error
Reject H0   α: Type I error               1 − β: power

Power analysis can be used to calculate the minimum sample size required to accept the outcome of a statistical test with a particular level of confidence. It can also be used to calculate the minimum effect size that is likely to be detected in a study using a given sample size. In statistics, an effect size is a measure of the strength of the relationship between two variables in a statistical population, or a sample-based estimate of that quantity.

Pearson's correlation, often denoted r, is widely used as an effect size when two quantitative variables are available; for instance, if one was studying the relationship between birth weight and longevity. A related effect size is the coefficient of determination (r-squared, R²). In the case of two variables, this is a measure of the proportion of variance shared by both, and varies from 0 to 1. An R² of 0.21, for example, means that 21% of the variance of either variable is shared with the other variable.

Effect size in ANOVA: R² = SS_B / SS_T, interpreted as low, medium or high against conventional cutoffs.

STEP 3. STATISTICAL POWER, EFFECT SIZE, GOODNESS OF FIT

                    Significant effect                              Non-significant effect
High effect size    The effect probably exists                      The non-significance can be due to low statistical power
Low effect size     The significance can be due to an excessively   The effect probably does not exist
                    high statistical power

Example: based on the earlier worked example of F (Between: SS = 84, df = 2, MS = 42, F = 9.3; Within: SS = 68, df = 15, MS = 4.5; Total: SS = 152, df = 17), draw a conclusion about the statistical power.

Effect size: R² = SS_B / SS_T = 84/152 ≈ 0.55, a medium effect size by the slide's criteria. Significant: 9.3 > 3.68. Conclusion: the effect probably exists.
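The effect-size computation for the worked example is a single ratio (this R² is what the ANOVA literature also calls eta-squared):

```python
# Effect size for the worked ANOVA example: R-squared (eta-squared) = SS_B / SS_T.
ss_between = 84.0
ss_within = 68.0
ss_total = ss_between + ss_within    # 152
r_squared = ss_between / ss_total    # about 0.55
```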


More information

Statistics for Managers Using Microsoft Excel Chapter 10 ANOVA and Other C-Sample Tests With Numerical Data

Statistics for Managers Using Microsoft Excel Chapter 10 ANOVA and Other C-Sample Tests With Numerical Data Statistics for Managers Using Microsoft Excel Chapter 10 ANOVA and Other C-Sample Tests With Numerical Data 1999 Prentice-Hall, Inc. Chap. 10-1 Chapter Topics The Completely Randomized Model: One-Factor

More information

DESIGNING EXPERIMENTS AND ANALYZING DATA A Model Comparison Perspective

DESIGNING EXPERIMENTS AND ANALYZING DATA A Model Comparison Perspective DESIGNING EXPERIMENTS AND ANALYZING DATA A Model Comparison Perspective Second Edition Scott E. Maxwell Uniuersity of Notre Dame Harold D. Delaney Uniuersity of New Mexico J,t{,.?; LAWRENCE ERLBAUM ASSOCIATES,

More information

Basic Statistical Analysis

Basic Statistical Analysis indexerrt.qxd 8/21/2002 9:47 AM Page 1 Corrected index pages for Sprinthall Basic Statistical Analysis Seventh Edition indexerrt.qxd 8/21/2002 9:47 AM Page 656 Index Abscissa, 24 AB-STAT, vii ADD-OR rule,

More information

The One-Way Independent-Samples ANOVA. (For Between-Subjects Designs)

The One-Way Independent-Samples ANOVA. (For Between-Subjects Designs) The One-Way Independent-Samples ANOVA (For Between-Subjects Designs) Computations for the ANOVA In computing the terms required for the F-statistic, we won t explicitly compute any sample variances or

More information

Non-parametric (Distribution-free) approaches p188 CN

Non-parametric (Distribution-free) approaches p188 CN Week 1: Introduction to some nonparametric and computer intensive (re-sampling) approaches: the sign test, Wilcoxon tests and multi-sample extensions, Spearman s rank correlation; the Bootstrap. (ch14

More information

Example: Four levels of herbicide strength in an experiment on dry weight of treated plants.

Example: Four levels of herbicide strength in an experiment on dry weight of treated plants. The idea of ANOVA Reminders: A factor is a variable that can take one of several levels used to differentiate one group from another. An experiment has a one-way, or completely randomized, design if several

More information

CHI SQUARE ANALYSIS 8/18/2011 HYPOTHESIS TESTS SO FAR PARAMETRIC VS. NON-PARAMETRIC

CHI SQUARE ANALYSIS 8/18/2011 HYPOTHESIS TESTS SO FAR PARAMETRIC VS. NON-PARAMETRIC CHI SQUARE ANALYSIS I N T R O D U C T I O N T O N O N - P A R A M E T R I C A N A L Y S E S HYPOTHESIS TESTS SO FAR We ve discussed One-sample t-test Dependent Sample t-tests Independent Samples t-tests

More information

BIOL Biometry LAB 6 - SINGLE FACTOR ANOVA and MULTIPLE COMPARISON PROCEDURES

BIOL Biometry LAB 6 - SINGLE FACTOR ANOVA and MULTIPLE COMPARISON PROCEDURES BIOL 458 - Biometry LAB 6 - SINGLE FACTOR ANOVA and MULTIPLE COMPARISON PROCEDURES PART 1: INTRODUCTION TO ANOVA Purpose of ANOVA Analysis of Variance (ANOVA) is an extremely useful statistical method

More information

Lecture 7: Hypothesis Testing and ANOVA

Lecture 7: Hypothesis Testing and ANOVA Lecture 7: Hypothesis Testing and ANOVA Goals Overview of key elements of hypothesis testing Review of common one and two sample tests Introduction to ANOVA Hypothesis Testing The intent of hypothesis

More information

What is a Hypothesis?

What is a Hypothesis? What is a Hypothesis? A hypothesis is a claim (assumption) about a population parameter: population mean Example: The mean monthly cell phone bill in this city is μ = $42 population proportion Example:

More information

Group comparison test for independent samples

Group comparison test for independent samples Group comparison test for independent samples The purpose of the Analysis of Variance (ANOVA) is to test for significant differences between means. Supposing that: samples come from normal populations

More information

Chapter 7 Comparison of two independent samples

Chapter 7 Comparison of two independent samples Chapter 7 Comparison of two independent samples 7.1 Introduction Population 1 µ σ 1 1 N 1 Sample 1 y s 1 1 n 1 Population µ σ N Sample y s n 1, : population means 1, : population standard deviations N

More information

Hypothesis Testing hypothesis testing approach

Hypothesis Testing hypothesis testing approach Hypothesis Testing In this case, we d be trying to form an inference about that neighborhood: Do people there shop more often those people who are members of the larger population To ascertain this, we

More information

Analysis of variance (ANOVA) Comparing the means of more than two groups

Analysis of variance (ANOVA) Comparing the means of more than two groups Analysis of variance (ANOVA) Comparing the means of more than two groups Example: Cost of mating in male fruit flies Drosophila Treatments: place males with and without unmated (virgin) females Five treatments

More information

Introduction to the Analysis of Variance (ANOVA)

Introduction to the Analysis of Variance (ANOVA) Introduction to the Analysis of Variance (ANOVA) The Analysis of Variance (ANOVA) The analysis of variance (ANOVA) is a statistical technique for testing for differences between the means of multiple (more

More information

Inferences About the Difference Between Two Means

Inferences About the Difference Between Two Means 7 Inferences About the Difference Between Two Means Chapter Outline 7.1 New Concepts 7.1.1 Independent Versus Dependent Samples 7.1. Hypotheses 7. Inferences About Two Independent Means 7..1 Independent

More information

Hypothesis Testing. Hypothesis: conjecture, proposition or statement based on published literature, data, or a theory that may or may not be true

Hypothesis Testing. Hypothesis: conjecture, proposition or statement based on published literature, data, or a theory that may or may not be true Hypothesis esting Hypothesis: conjecture, proposition or statement based on published literature, data, or a theory that may or may not be true Statistical Hypothesis: conjecture about a population parameter

More information

22s:152 Applied Linear Regression. Chapter 8: 1-Way Analysis of Variance (ANOVA) 2-Way Analysis of Variance (ANOVA)

22s:152 Applied Linear Regression. Chapter 8: 1-Way Analysis of Variance (ANOVA) 2-Way Analysis of Variance (ANOVA) 22s:152 Applied Linear Regression Chapter 8: 1-Way Analysis of Variance (ANOVA) 2-Way Analysis of Variance (ANOVA) We now consider an analysis with only categorical predictors (i.e. all predictors are

More information

Multiple t Tests. Introduction to Analysis of Variance. Experiments with More than 2 Conditions

Multiple t Tests. Introduction to Analysis of Variance. Experiments with More than 2 Conditions Introduction to Analysis of Variance 1 Experiments with More than 2 Conditions Often the research that psychologists perform has more conditions than just the control and experimental conditions You might

More information

One-way between-subjects ANOVA. Comparing three or more independent means

One-way between-subjects ANOVA. Comparing three or more independent means One-way between-subjects ANOVA Comparing three or more independent means Data files SpiderBG.sav Attractiveness.sav Homework: sourcesofself-esteem.sav ANOVA: A Framework Understand the basic principles

More information

Analysis of variance

Analysis of variance Analysis of variance Tron Anders Moger 3.0.007 Comparing more than two groups Up to now we have studied situations with One observation per subject One group Two groups Two or more observations per subject

More information

The One-Way Repeated-Measures ANOVA. (For Within-Subjects Designs)

The One-Way Repeated-Measures ANOVA. (For Within-Subjects Designs) The One-Way Repeated-Measures ANOVA (For Within-Subjects Designs) Logic of the Repeated-Measures ANOVA The repeated-measures ANOVA extends the analysis of variance to research situations using repeated-measures

More information

Introduction to hypothesis testing

Introduction to hypothesis testing Introduction to hypothesis testing Review: Logic of Hypothesis Tests Usually, we test (attempt to falsify) a null hypothesis (H 0 ): includes all possibilities except prediction in hypothesis (H A ) If

More information

Parametric versus Nonparametric Statistics-when to use them and which is more powerful? Dr Mahmoud Alhussami

Parametric versus Nonparametric Statistics-when to use them and which is more powerful? Dr Mahmoud Alhussami Parametric versus Nonparametric Statistics-when to use them and which is more powerful? Dr Mahmoud Alhussami Parametric Assumptions The observations must be independent. Dependent variable should be continuous

More information

This gives us an upper and lower bound that capture our population mean.

This gives us an upper and lower bound that capture our population mean. Confidence Intervals Critical Values Practice Problems 1 Estimation 1.1 Confidence Intervals Definition 1.1 Margin of error. The margin of error of a distribution is the amount of error we predict when

More information

Multiple Comparison Procedures Cohen Chapter 13. For EDUC/PSY 6600

Multiple Comparison Procedures Cohen Chapter 13. For EDUC/PSY 6600 Multiple Comparison Procedures Cohen Chapter 13 For EDUC/PSY 6600 1 We have to go to the deductions and the inferences, said Lestrade, winking at me. I find it hard enough to tackle facts, Holmes, without

More information

Nonparametric Location Tests: k-sample

Nonparametric Location Tests: k-sample Nonparametric Location Tests: k-sample Nathaniel E. Helwig Assistant Professor of Psychology and Statistics University of Minnesota (Twin Cities) Updated 04-Jan-2017 Nathaniel E. Helwig (U of Minnesota)

More information

Introduction to Statistical Inference Lecture 10: ANOVA, Kruskal-Wallis Test

Introduction to Statistical Inference Lecture 10: ANOVA, Kruskal-Wallis Test Introduction to Statistical Inference Lecture 10: ANOVA, Kruskal-Wallis Test la Contents The two sample t-test generalizes into Analysis of Variance. In analysis of variance ANOVA the population consists

More information

Non-parametric tests, part A:

Non-parametric tests, part A: Two types of statistical test: Non-parametric tests, part A: Parametric tests: Based on assumption that the data have certain characteristics or "parameters": Results are only valid if (a) the data are

More information

ANALYTICAL COMPARISONS AMONG TREATMENT MEANS (CHAPTER 4)

ANALYTICAL COMPARISONS AMONG TREATMENT MEANS (CHAPTER 4) ANALYTICAL COMPARISONS AMONG TREATMENT MEANS (CHAPTER 4) ERSH 8310 Fall 2007 September 11, 2007 Today s Class The need for analytic comparisons. Planned comparisons. Comparisons among treatment means.

More information

Week 14 Comparing k(> 2) Populations

Week 14 Comparing k(> 2) Populations Week 14 Comparing k(> 2) Populations Week 14 Objectives Methods associated with testing for the equality of k(> 2) means or proportions are presented. Post-testing concepts and analysis are introduced.

More information

Rank-Based Methods. Lukas Meier

Rank-Based Methods. Lukas Meier Rank-Based Methods Lukas Meier 20.01.2014 Introduction Up to now we basically always used a parametric family, like the normal distribution N (µ, σ 2 ) for modeling random data. Based on observed data

More information

10/31/2012. One-Way ANOVA F-test

10/31/2012. One-Way ANOVA F-test PSY 511: Advanced Statistics for Psychological and Behavioral Research 1 1. Situation/hypotheses 2. Test statistic 3.Distribution 4. Assumptions One-Way ANOVA F-test One factor J>2 independent samples

More information

13: Additional ANOVA Topics

13: Additional ANOVA Topics 13: Additional ANOVA Topics Post hoc comparisons Least squared difference The multiple comparisons problem Bonferroni ANOVA assumptions Assessing equal variance When assumptions are severely violated Kruskal-Wallis

More information

Multiple Comparisons

Multiple Comparisons Multiple Comparisons Error Rates, A Priori Tests, and Post-Hoc Tests Multiple Comparisons: A Rationale Multiple comparison tests function to tease apart differences between the groups within our IV when

More information

STAT 135 Lab 9 Multiple Testing, One-Way ANOVA and Kruskal-Wallis

STAT 135 Lab 9 Multiple Testing, One-Way ANOVA and Kruskal-Wallis STAT 135 Lab 9 Multiple Testing, One-Way ANOVA and Kruskal-Wallis Rebecca Barter April 6, 2015 Multiple Testing Multiple Testing Recall that when we were doing two sample t-tests, we were testing the equality

More information

Analysis of Variance

Analysis of Variance Analysis of Variance Blood coagulation time T avg A 62 60 63 59 61 B 63 67 71 64 65 66 66 C 68 66 71 67 68 68 68 D 56 62 60 61 63 64 63 59 61 64 Blood coagulation time A B C D Combined 56 57 58 59 60 61

More information

2 Hand-out 2. Dr. M. P. M. M. M c Loughlin Revised 2018

2 Hand-out 2. Dr. M. P. M. M. M c Loughlin Revised 2018 Math 403 - P. & S. III - Dr. McLoughlin - 1 2018 2 Hand-out 2 Dr. M. P. M. M. M c Loughlin Revised 2018 3. Fundamentals 3.1. Preliminaries. Suppose we can produce a random sample of weights of 10 year-olds

More information

Regression models. Categorical covariate, Quantitative outcome. Examples of categorical covariates. Group characteristics. Faculty of Health Sciences

Regression models. Categorical covariate, Quantitative outcome. Examples of categorical covariates. Group characteristics. Faculty of Health Sciences Faculty of Health Sciences Categorical covariate, Quantitative outcome Regression models Categorical covariate, Quantitative outcome Lene Theil Skovgaard April 29, 2013 PKA & LTS, Sect. 3.2, 3.2.1 ANOVA

More information

Tentative solutions TMA4255 Applied Statistics 16 May, 2015

Tentative solutions TMA4255 Applied Statistics 16 May, 2015 Norwegian University of Science and Technology Department of Mathematical Sciences Page of 9 Tentative solutions TMA455 Applied Statistics 6 May, 05 Problem Manufacturer of fertilizers a) Are these independent

More information

16.400/453J Human Factors Engineering. Design of Experiments II

16.400/453J Human Factors Engineering. Design of Experiments II J Human Factors Engineering Design of Experiments II Review Experiment Design and Descriptive Statistics Research question, independent and dependent variables, histograms, box plots, etc. Inferential

More information

Hypothesis T e T sting w ith with O ne O One-Way - ANOV ANO A V Statistics Arlo Clark Foos -

Hypothesis T e T sting w ith with O ne O One-Way - ANOV ANO A V Statistics Arlo Clark Foos - Hypothesis Testing with One-Way ANOVA Statistics Arlo Clark-Foos Conceptual Refresher 1. Standardized z distribution of scores and of means can be represented as percentile rankings. 2. t distribution

More information

http://www.statsoft.it/out.php?loc=http://www.statsoft.com/textbook/ Group comparison test for independent samples The purpose of the Analysis of Variance (ANOVA) is to test for significant differences

More information

The independent-means t-test:

The independent-means t-test: The independent-means t-test: Answers the question: is there a "real" difference between the two conditions in my experiment? Or is the difference due to chance? Previous lecture: (a) Dependent-means t-test:

More information

What Is ANOVA? Comparing Groups. One-way ANOVA. One way ANOVA (the F ratio test)

What Is ANOVA? Comparing Groups. One-way ANOVA. One way ANOVA (the F ratio test) What Is ANOVA? One-way ANOVA ANOVA ANalysis Of VAriance ANOVA compares the means of several groups. The groups are sometimes called "treatments" First textbook presentation in 95. Group Group σ µ µ σ µ

More information

An Analysis of College Algebra Exam Scores December 14, James D Jones Math Section 01

An Analysis of College Algebra Exam Scores December 14, James D Jones Math Section 01 An Analysis of College Algebra Exam s December, 000 James D Jones Math - Section 0 An Analysis of College Algebra Exam s Introduction Students often complain about a test being too difficult. Are there

More information

Statistical. Psychology

Statistical. Psychology SEVENTH у *i km m it* & П SB Й EDITION Statistical M e t h o d s for Psychology D a v i d C. Howell University of Vermont ; \ WADSWORTH f% CENGAGE Learning* Australia Biaall apan Korea Меяко Singapore

More information

Analysis of Variance

Analysis of Variance Statistical Techniques II EXST7015 Analysis of Variance 15a_ANOVA_Introduction 1 Design The simplest model for Analysis of Variance (ANOVA) is the CRD, the Completely Randomized Design This model is also

More information

22s:152 Applied Linear Regression. 1-way ANOVA visual:

22s:152 Applied Linear Regression. 1-way ANOVA visual: 22s:152 Applied Linear Regression 1-way ANOVA visual: Chapter 8: 1-Way Analysis of Variance (ANOVA) 2-Way Analysis of Variance (ANOVA) 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 Y We now consider an analysis

More information

One-Way ANOVA Cohen Chapter 12 EDUC/PSY 6600

One-Way ANOVA Cohen Chapter 12 EDUC/PSY 6600 One-Way ANOVA Cohen Chapter 1 EDUC/PSY 6600 1 It is easy to lie with statistics. It is hard to tell the truth without statistics. -Andrejs Dunkels Motivating examples Dr. Vito randomly assigns 30 individuals

More information

Statistics Handbook. All statistical tables were computed by the author.

Statistics Handbook. All statistical tables were computed by the author. Statistics Handbook Contents Page Wilcoxon rank-sum test (Mann-Whitney equivalent) Wilcoxon matched-pairs test 3 Normal Distribution 4 Z-test Related samples t-test 5 Unrelated samples t-test 6 Variance

More information

Introduction to Crossover Trials

Introduction to Crossover Trials Introduction to Crossover Trials Stat 6500 Tutorial Project Isaac Blackhurst A crossover trial is a type of randomized control trial. It has advantages over other designed experiments because, under certain

More information

PSYC 331 STATISTICS FOR PSYCHOLOGISTS

PSYC 331 STATISTICS FOR PSYCHOLOGISTS PSYC 331 STATISTICS FOR PSYCHOLOGISTS Session 4 A PARAMETRIC STATISTICAL TEST FOR MORE THAN TWO POPULATIONS Lecturer: Dr. Paul Narh Doku, Dept of Psychology, UG Contact Information: pndoku@ug.edu.gh College

More information

Nonparametric Statistics. Leah Wright, Tyler Ross, Taylor Brown

Nonparametric Statistics. Leah Wright, Tyler Ross, Taylor Brown Nonparametric Statistics Leah Wright, Tyler Ross, Taylor Brown Before we get to nonparametric statistics, what are parametric statistics? These statistics estimate and test population means, while holding

More information

Chap The McGraw-Hill Companies, Inc. All rights reserved.

Chap The McGraw-Hill Companies, Inc. All rights reserved. 11 pter11 Chap Analysis of Variance Overview of ANOVA Multiple Comparisons Tests for Homogeneity of Variances Two-Factor ANOVA Without Replication General Linear Model Experimental Design: An Overview

More information

Chapter 18 Resampling and Nonparametric Approaches To Data

Chapter 18 Resampling and Nonparametric Approaches To Data Chapter 18 Resampling and Nonparametric Approaches To Data 18.1 Inferences in children s story summaries (McConaughy, 1980): a. Analysis using Wilcoxon s rank-sum test: Younger Children Older Children

More information

= 1 i. normal approximation to χ 2 df > df

= 1 i. normal approximation to χ 2 df > df χ tests 1) 1 categorical variable χ test for goodness-of-fit ) categorical variables χ test for independence (association, contingency) 3) categorical variables McNemar's test for change χ df k (O i 1

More information

Exam Applied Statistical Regression. Good Luck!

Exam Applied Statistical Regression. Good Luck! Dr. M. Dettling Summer 2011 Exam Applied Statistical Regression Approved: Tables: Note: Any written material, calculator (without communication facility). Attached. All tests have to be done at the 5%-level.

More information

SPSS Guide For MMI 409

SPSS Guide For MMI 409 SPSS Guide For MMI 409 by John Wong March 2012 Preface Hopefully, this document can provide some guidance to MMI 409 students on how to use SPSS to solve many of the problems covered in the D Agostino

More information

Review. One-way ANOVA, I. What s coming up. Multiple comparisons

Review. One-way ANOVA, I. What s coming up. Multiple comparisons Review One-way ANOVA, I 9.07 /15/00 Earlier in this class, we talked about twosample z- and t-tests for the difference between two conditions of an independent variable Does a trial drug work better than

More information

Transition Passage to Descriptive Statistics 28

Transition Passage to Descriptive Statistics 28 viii Preface xiv chapter 1 Introduction 1 Disciplines That Use Quantitative Data 5 What Do You Mean, Statistics? 6 Statistics: A Dynamic Discipline 8 Some Terminology 9 Problems and Answers 12 Scales of

More information

Chapter Seven: Multi-Sample Methods 1/52

Chapter Seven: Multi-Sample Methods 1/52 Chapter Seven: Multi-Sample Methods 1/52 7.1 Introduction 2/52 Introduction The independent samples t test and the independent samples Z test for a difference between proportions are designed to analyze

More information

Preview from Notesale.co.uk Page 3 of 63

Preview from Notesale.co.uk Page 3 of 63 Stem-and-leaf diagram - vertical numbers on far left represent the 10s, numbers right of the line represent the 1s The mean should not be used if there are extreme scores, or for ranks and categories Unbiased

More information

13: Additional ANOVA Topics. Post hoc Comparisons

13: Additional ANOVA Topics. Post hoc Comparisons 13: Additional ANOVA Topics Post hoc Comparisons ANOVA Assumptions Assessing Group Variances When Distributional Assumptions are Severely Violated Post hoc Comparisons In the prior chapter we used ANOVA

More information

Extending the Robust Means Modeling Framework. Alyssa Counsell, Phil Chalmers, Matt Sigal, Rob Cribbie

Extending the Robust Means Modeling Framework. Alyssa Counsell, Phil Chalmers, Matt Sigal, Rob Cribbie Extending the Robust Means Modeling Framework Alyssa Counsell, Phil Chalmers, Matt Sigal, Rob Cribbie One-way Independent Subjects Design Model: Y ij = µ + τ j + ε ij, j = 1,, J Y ij = score of the ith

More information

Statistical Inference Theory Lesson 46 Non-parametric Statistics

Statistical Inference Theory Lesson 46 Non-parametric Statistics 46.1-The Sign Test Statistical Inference Theory Lesson 46 Non-parametric Statistics 46.1 - Problem 1: (a). Let p equal the proportion of supermarkets that charge less than $2.15 a pound. H o : p 0.50 H

More information

Introduction to Statistical Data Analysis III

Introduction to Statistical Data Analysis III Introduction to Statistical Data Analysis III JULY 2011 Afsaneh Yazdani Preface Major branches of Statistics: - Descriptive Statistics - Inferential Statistics Preface What is Inferential Statistics? The

More information

ANOVA. Testing more than 2 conditions

ANOVA. Testing more than 2 conditions ANOVA Testing more than 2 conditions ANOVA Today s goal: Teach you about ANOVA, the test used to measure the difference between more than two conditions Outline: - Why anova? - Contrasts and post-hoc tests

More information

Hypothesis testing, part 2. With some material from Howard Seltman, Blase Ur, Bilge Mutlu, Vibha Sazawal

Hypothesis testing, part 2. With some material from Howard Seltman, Blase Ur, Bilge Mutlu, Vibha Sazawal Hypothesis testing, part 2 With some material from Howard Seltman, Blase Ur, Bilge Mutlu, Vibha Sazawal 1 CATEGORICAL IV, NUMERIC DV 2 Independent samples, one IV # Conditions Normal/Parametric Non-parametric

More information

4.1. Introduction: Comparing Means

4.1. Introduction: Comparing Means 4. Analysis of Variance (ANOVA) 4.1. Introduction: Comparing Means Consider the problem of testing H 0 : µ 1 = µ 2 against H 1 : µ 1 µ 2 in two independent samples of two different populations of possibly

More information

Biostatistics 270 Kruskal-Wallis Test 1. Kruskal-Wallis Test

Biostatistics 270 Kruskal-Wallis Test 1. Kruskal-Wallis Test Biostatistics 270 Kruskal-Wallis Test 1 ORIGIN 1 Kruskal-Wallis Test The Kruskal-Wallis is a non-parametric analog to the One-Way ANOVA F-Test of means. It is useful when the k samples appear not to come

More information

One-way between-subjects ANOVA. Comparing three or more independent means

One-way between-subjects ANOVA. Comparing three or more independent means One-way between-subjects ANOVA Comparing three or more independent means ANOVA: A Framework Understand the basic principles of ANOVA Why it is done? What it tells us? Theory of one-way between-subjects

More information

22s:152 Applied Linear Regression. Take random samples from each of m populations.

22s:152 Applied Linear Regression. Take random samples from each of m populations. 22s:152 Applied Linear Regression Chapter 8: ANOVA NOTE: We will meet in the lab on Monday October 10. One-way ANOVA Focuses on testing for differences among group means. Take random samples from each

More information

One-Way ANOVA. Some examples of when ANOVA would be appropriate include:

One-Way ANOVA. Some examples of when ANOVA would be appropriate include: One-Way ANOVA 1. Purpose Analysis of variance (ANOVA) is used when one wishes to determine whether two or more groups (e.g., classes A, B, and C) differ on some outcome of interest (e.g., an achievement

More information

One-Way ANOVA Source Table J - 1 SS B / J - 1 MS B /MS W. Pairwise Post-Hoc Comparisons of Means

One-Way ANOVA Source Table J - 1 SS B / J - 1 MS B /MS W. Pairwise Post-Hoc Comparisons of Means One-Way ANOVA Source Table ANOVA MODEL: ij = µ* + α j + ε ij H 0 : µ 1 = µ =... = µ j or H 0 : Σα j = 0 Source Sum of Squares df Mean Squares F Between Groups n j ( j - * ) J - 1 SS B / J - 1 MS B /MS

More information

Workshop Research Methods and Statistical Analysis

Workshop Research Methods and Statistical Analysis Workshop Research Methods and Statistical Analysis Session 2 Data Analysis Sandra Poeschl 08.04.2013 Page 1 Research process Research Question State of Research / Theoretical Background Design Data Collection

More information

Review of Statistics 101

Review of Statistics 101 Review of Statistics 101 We review some important themes from the course 1. Introduction Statistics- Set of methods for collecting/analyzing data (the art and science of learning from data). Provides methods

More information

sphericity, 5-29, 5-32 residuals, 7-1 spread and level, 2-17 t test, 1-13 transformations, 2-15 violations, 1-19

sphericity, 5-29, 5-32 residuals, 7-1 spread and level, 2-17 t test, 1-13 transformations, 2-15 violations, 1-19 additive tree structure, 10-28 ADDTREE, 10-51, 10-53 EXTREE, 10-31 four point condition, 10-29 ADDTREE, 10-28, 10-51, 10-53 adjusted R 2, 8-7 ALSCAL, 10-49 ANCOVA, 9-1 assumptions, 9-5 example, 9-7 MANOVA

More information

Orthogonal, Planned and Unplanned Comparisons

Orthogonal, Planned and Unplanned Comparisons This is a chapter excerpt from Guilford Publications. Data Analysis for Experimental Design, by Richard Gonzalez Copyright 2008. 8 Orthogonal, Planned and Unplanned Comparisons 8.1 Introduction In this

More information

Introduction to Statistics with GraphPad Prism 7

Introduction to Statistics with GraphPad Prism 7 Introduction to Statistics with GraphPad Prism 7 Outline of the course Power analysis with G*Power Basic structure of a GraphPad Prism project Analysis of qualitative data Chi-square test Analysis of quantitative

More information

22s:152 Applied Linear Regression. There are a couple commonly used models for a one-way ANOVA with m groups. Chapter 8: ANOVA

22s:152 Applied Linear Regression. There are a couple commonly used models for a one-way ANOVA with m groups. Chapter 8: ANOVA 22s:152 Applied Linear Regression Chapter 8: ANOVA NOTE: We will meet in the lab on Monday October 10. One-way ANOVA Focuses on testing for differences among group means. Take random samples from each

More information

8/23/2018. One-Way ANOVA F-test. 1. Situation/hypotheses. 2. Test statistic. 3.Distribution. 4. Assumptions

8/23/2018. One-Way ANOVA F-test. 1. Situation/hypotheses. 2. Test statistic. 3.Distribution. 4. Assumptions PSY 5101: Advanced Statistics for Psychological and Behavioral Research 1 1. Situation/hypotheses 2. Test statistic One-Way ANOVA F-test One factor J>2 independent samples H o :µ 1 µ 2 µ J F 3.Distribution

More information

psyc3010 lecture 2 factorial between-ps ANOVA I: omnibus tests

psyc3010 lecture 2 factorial between-ps ANOVA I: omnibus tests psyc3010 lecture 2 factorial between-ps ANOVA I: omnibus tests last lecture: introduction to factorial designs next lecture: factorial between-ps ANOVA II: (effect sizes and follow-up tests) 1 general

More information

B. Weaver (18-Oct-2006) MC Procedures Chapter 1: Multiple Comparison Procedures ) C (1.1)

B. Weaver (18-Oct-2006) MC Procedures Chapter 1: Multiple Comparison Procedures ) C (1.1) B. Weaver (18-Oct-2006) MC Procedures... 1 Chapter 1: Multiple Comparison Procedures 1.1 Introduction The omnibus F-test in a one-way ANOVA is a test of the null hypothesis that the population means of

More information