..,, .. -, ..,.. : - ., ISBN. Excel, SPSS Statistica. Statistica 6.0,,, OpenOffice.org Calc.
|
|
- Lenard Strickland
- 5 years ago
- Views:
Transcription
1
2 This project has been funded with support from the European Commission. This publication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein. -..,.., 2010
3 :..,..,,.. -,,..,.. : -., ISBN,.. MS Excel, SPSS Statistica,, -,,, ;. MS Excel 2003, SPSS 10.0 Statistica 6.0,,,. OpenOffice.org Calc.
4 : 1: (MS Excel) (SPSS) (Statistica): : : : : : : SPSS Statistica : : SPSS Statistica : : : : : : :
5 4: ( ) : : : : : SPSS : Statistica : : : :
6 1: : 1. : (,, ), (, ), (,, ), - 1 (, ),. 2. MS Excel,,. 3. SPSS:,, (,, ),,. 4. Statistica,. :, ; MS Excel ; - -, ; SPSS, ; Statistica. :, ; 1. 5
7 , ; MS Excel, SPSS, Statistica; MS Excel ;,,, MS Excel, SPSS, Statistica; ;. ( ),,,,., (. 71), 5. [28]: ) ( n<=100): K=1+3,32 lg n; ).. : K=5 lg n; ) [5,.32]: ( ) : 1+3,32 lg N : 5 lg N , ,9 8, , ,6 11,5 > (N=1000) 15 (N=1000) : 1. (X min ) (X max ). 2. R= X max X min. 6
8 3. ( ). 4. : i=r/k. 5.,,. 6.,. L= X min i/2. 7., L i, 2i, 3i. 8.,,.. ( ), Xc=L+i/2. ( ),,, ( ) ( ). 2. (N). 3. 2, 2,, ( ). 4. N N+1 ( N N ( N 1) ): R i Excel,. 7
9 .,,. [17. 16]. ( ),,.,,,.,,. ( ), -,, -,.,,..,,,,.,,.. ( ),,,, - -.,,. 8
10 .. ( Excel ), F n (x)=p, F n (x),. =0,5, =0,25, 0,5, 0,75, 1, =0,01, 0,02, 0,99, 1.,. 1 (MS Excel) 1.. (.,.194). 2. (, ). 3., -, MS Excel. : =. 4.,,,,,. 5., : S 6 6 ( 3 ), m A m E 2. n n n 3 S, n. (N) S n, : 1. n N 9
11 6., -> > ( : ) ( (.. 11)) MS Excel 2 xi xi x xi x i x 1 2 i 1 2 i 1, S, S S n n 1 n 1, ( = 1). 10. *. N ( N 1) : R i ,, 1, 1 0. =, n n 1. (,, ),,, [15,.45]. 12. :,, = (, ),, 0 1. n K ps X x i, x, p 10 n n n 2
12 ;, ; ; p s ; n ; K [15,.47; 5,.62]. : 5 (. 69). 13., =0,5 ( ).. 14.,, (, ), (0, 1, 2, 3, 4) : 3,. 16., ( ) ? , -. CTRL+Shift+Enter. 5,., >. (, ). 11
13 2 (SPSS) 1. SPSS, C:\Program Files\SPSS\spsswin.exe. 2. *.sav,, Employee data.sav, SPSS (File > Open > Data)., (,,, ), (, ). 3. educ ( ) jobcat ( ) SPSS: Analyze > Descriptives Analyze > Frequencies. 4. SPSS HTML. 5. SPSS.xls., -, File > Open > Data, -, Excel, -,. (Descriptives) SPSS., SPSS MS Excel. (Histogram). SPSS HTML. 6., Data > Sort Cases., (Sort by:) (Ascending, Descending ). 7., Transform > Rank cases.,, 1 (Assign rank 1 to smallest value) (Ties > Rank Assigned to Ties Mean 6 )
14 3 (Statistica): 1. Statistica C:\Program Files\StatSoft\STATISTICA6\statist.exe. 2. C:\ \STATISTICA6\Examples\Datasets, job_prof.sta, File > Open. 3. Statistica., Variable Specs. 4. (Statistica > Basic Statistics > Descriptive Statistics): Quick (Summary: Descriptive statistics, Frequency tables, Histograms, Bow&whisker plot)., (Workbook).. 5. Advanced (Summary: Descriptive statistics). Data > Transpose > File.. Extract as stand-alone window ( ) Add to report ( ). 6. Frequency tables Histograms Normality, (Number of intervals).. Statistica, RTF HTML. 7. SPSS. 8..xls.,.. 13
15 9. : Data > Sort Statistica, SPSS,, ( Copy Variables ), Data > Rank
16 . 1 MS Excel. MS Excel OpenOffice.org Calc (.. 188).. 2 D 2 ( 2- ) 7. > >
17 . 3 ( ), ( ).. 4,. 5 MS Excel.. 4 = (B2;$B$2:$B$11;1) = ($B$2:$B$11;B2;1) = ($B$2:$B$11;H2) = (B3;$B$2:$B$11;1) = ($B$2:$B$11;B3;1) = ($B$2:$B$11;H3) = (B4;$B$2:$B$11;1) = ($B$2:$B$11;B4;1) = ($B$2:$B$11;H4) = (B5;$B$2:$B$11;1) = ($B$2:$B$11;B5;1) = ($B$2:$B$11;H5) = (B6;$B$2:$B$11;1) = ($B$2:$B$11;B6;1) = ($B$2:$B$11;H6) = (B7;$B$2:$B$11;1) = ($B$2:$B$11;B7;1) = ($B$2:$B$11;H7). 5 1.,, -,, ? 4.?? 5.,,? 16
18 6.?? 7.?? 8.? 9.? 10.? 11.,??? 12.? 13. MS Excel, OpenOffice.org Calc, SPSS, Statistica? 14.? 15.? 16.? 17.,. 18.,? ,,? 20. MS Excel, OpenOffice.org Calc, SPSS, Statistica? 21. MS Excel, OpenOffice.org Calc, SPSS, Statistica? 22. SPSS, Statistica?? 17
19 2: : : ; ; (t- ) (F- ); ; ; ; ;. :, ; ; ; ; ; MS Excel ; SPSS Statistica.,, ( ). ( ) df=n 1 +n 2 2.,. 18
20 2 S1 - : F, S 2 2 1>S 2 2 S 2 (F )., df 1 =n 1 1, df 2 =n 2 1 (n 1 n 2 ).,. ( 0 - ), F >F, - ( ). F, (Leven s Test)., F. t- x1 x2 : t, x 1 x 2 S d, S d. 2 2 di di S n d ( d i ). n n S1 S2 S d, n1 n2, 2 n1 n 2 2 ( X x 2 1) ( X x ) 2 2 S d S, S n1 n2 n1 n2 2,., df=n 1 +n
21 1:. 1. MS Excel t ( ) ( ). 3.,,.. 4., F- ( - - ) ( > ).. 5. MS Excel: ( 1; 2) - (2 ); : k 1 =n 1 1 k 2 =n 2 1 ( n 1, n 2 ); =2 /2; F, F ( ; _ 1; _ 2); F =0,05 =0, : t- t-, F- (.. 4).. 8. t, (p, k),, k (k =n 1 +n 2 2). 9. t ( 6), 20
22 (, k, m), t-, k, m (m=1, m=2 ). 10. MS Excel :,, (1 2). SPSS. 11. SPSS Analyze > Compare Means > Independent Samples t-test. (Test Var), (Grouped Var) (1, 2 ). 12.,. 13. Statistica., ( MS Excel), Basic statistics >t-test, independent, by variables., SPSS ( + ), Basic statistics > t-test, independent, by groups. 14.,. 2:. 1. t- ( ). (. 202) t t Analyze > Compare Means >Pared-Samples T Test SPSS. 6. Statistics > Basic statistics > t-test, dependent samples Statistica. 21
23 7., t-. (, -, Extract as stand-alone window, -,, Add to report ,,? [19,.45] MS Excel ( ) -. : 0 : : :,, F 1 ( ). 1 (88,95>45,73). > > F- : F- (5%) ,21 111,5 88,95 45,
24 df F 1,94 ( ) F P(F<=f) 0,13 - F 2,761 F,. 0, F >F., 1,94<2,76,, ( ). -,. 0, - > ( ), 0, - <. =0,13, 0,05.,,., > > t-. : t- 5% , ,5 88, , , df 24 t- -1,31017 P(T<=t) 0,10127 t 1, P(T<=t) 0, t 2,
25 (.. 19)..,,. t - :.. : 0 : 1 2 ( 2 1). 1 : 1 2 ( 2 1). t. t < t,. t. : 0 : 1 2 ( x 1 x2 0 ). 1 : 1 2 ( x 1 x2 0 ).,. SPSS. : 1 :,,. 1 Group Statistics VAR00001 VAR ,00 2,00 Std. Error N Mean Std. Deviation Mean ,2143 9,4314 2, ,5000 6,7622 1,
26 2, -, (Leven s Test for Equality of Variance). -, F=0,573, (Sig. = 0,471),. t- Equal variances assumed (, ). t- (Equal variances not assumed). 2 Independent Samples Test VAR01 Equal variances assumed Equal variances not assumed Levene's Test for Equality of Variances F Sig. t df Sig. (2-tail ed) t-test for Equality of Means Mean Differen ce Std. Error Differe nce 95% Confidence Interval of the Difference Lower Upper,537,471-1,310 24,203-4,2857 3, ,0370 2,4655-1,344 23,345,192-4,2857 3, ,8755 2,3041,, t = 1,31 ( ). - = 0,203, ( ). - ( =0,102). 0. Statistica : t- Valid Valid Mean Mean df p Std.Dev. Std.Dev. F- p value N N ratio Var1 107,21 111,50-1, , ,43 6,76 1,94 0,28 2 ( t- ): 10. ( ).,. 25
27 MS Excel,,. MS Excel t-. A B C 1 X Y (. 6) : E F G t- X Y ,
28 0, df 9 t- -2, P(T<=t) 0, t 1, P(T<=t) 0, t 2, : t- (!); t t ( 0,05.. 6),,. 2,75>2,26 (t >t ), 0,. : P(T<=t) =0, , 0,05.,, ( ), ( ),,., t >t (2,75>1,83), 1, Y. SPSS t- (Analyze > Compare Means >Pared-Samples T Test) : 27
29 3 Pair 1 X Y Paired Samples Statistics Std. Error Mean N Std. Deviation Mean 3286, , , , , ,0847 Paired Samples Correlations Pair 1 X & Y N Correlation Sig. 10,772,009 Paired Samples Test 28 Paired Differences 95% Confidence Std. Std. Error Interval of the Difference Mean Deviation Mean Lower Upper t df Sig. (2-tailed) Pair 1 X - Y -144, , , , ,6738-2,753 9,022 t-. =0,01 ( - = 0,009), =0,05 ( - = 0,022). Statistica Statistics > Basic statistics > t-test, dependent samples : Mean Std.Dv. N Diff. Std.Dv. t df p X 3286, ,9177 Y 3430, , , ,4086-2, , ? 2.? 3.?? 4.?? 5.? ?
30 8.? 9.? 10. ( - )? 11.? 12.? 13.? 14.? 29
31 3: : : ; ; ; ; ; ; ; ;. :, ; ; ; MS Excel SPSS Stitistica. ( ) ( ANOVA ANalysis Of VAriance)., F-.,.,,,., ( ),, 30
32 : xij M ( M j M ) eij. ( ), j, j-, ij,.,,., 1),, ; 2),, ; 3).,. ( ) 4. k, n. F: F emp <F kr,, ( ). 4 (Between groups) (Within groups) (Total) Q Q bg= k n ( X i 1 Q wg= k n i 1 j 1 i X ) 2 df k 1 2 k (n 1) ( X ij X i ) Q total= k n 2 k n 1 ( X ij X ) i 1 j 1 S S S 2 bg 2 wg 2 total Qbg k 1 Qwg k ( n 1) Qtotal N 1 F S S 2 bg 2 wg 31
33 ,,.,, : H- - ( ), 2 r- ( ) (. 6,. 115)., Q 2 bg ( - ). Qtotal 2, (.. 43). 2 ( ), : 2 0, 2 1 [23, 5]. ( ) :,. 5. (.. 30): xijg M ( M j M ) ( M g M ) a jg eijg. ( ), j, j-, g, g-, a jg, ijg,. [23, 18]. [19, 23]. 6 (.. 120). - 32
34 ( ) Q a 2 b n a ( X X ) i* * a 1 1 ( ) Q a n b 2 b * j* b 1 1 ( X X ) Q n ( X X X ij* ** * * a b ab X ) (a 1) (b 1) Q ( X ijk X ij* a b n z ) a n (Total) Q k b total ( X ijk X ) k 2 2 a b (n 1) a b n 1 5 F a Q 2 a S a 1 S S 2 a 2 z 2 b S b b Q 1 S S 2 b 2 z ( a Q 1 )( b 2 ab S ab 1 ) S S 2 ab 2 z S 2 z 2 Q z a b ( n 1 ) Q total S total a b n 1 i i j i i j j j i j
35 1: 1. ( D E,.,.194,.203) ,. 2: SPSS Statistica 1. SPSS.,. 2. SPSS : Analyze > Compare Means > One-Way ANOVA, (Dependent List),, Factor. 3. (Options) Descriptive, Homogeneity-of-variance Means plot.,. 4. Statistica, SPSS. Statistics > ANOVA, One-Way ANOVA. (Dependent variable) ( Statistica ). 5. Summary : Cell statistics, Univariate results, All effects/graphs. 34
36 6. Statistics > Basic Statistics/Tables > Breakdown & one-way ANOVA. Quick Statistics by Groups Results Summary Table of Statistics, Analysis of Variance Interaction plots : 1. (..208). 2. (,,, ) ,. 4: SPSS Statistica 1. SPSS (, ). 2. Analyze > General Linear Model > Univariate. 3. Dependent Variable, - Fixed Factor(s). 4. Options Descriptive statistics Estimates of effect size ( ). 5. Plots, (Horisontal Axis), (Separate Lines). 6.,. 35
37 1.. '. 10. ',? [187,.41]. 6, MS Excel. 6 : 36. 7
38 0,.,. MS Excel (. 7): F =6,72. F =3,35. :,, ,19 3,019 0, ,7 3,97 1, ,97 4,997 2, % SS df MS F P- F 19, , ,672 0, , , , , (. 8) : SPSS Ststistica. 37
39 2. ( 11,. 208): (MAC Dell) (Netscape Communicator Internet Explorer). 8,, MS Excel (,, ). Netscape Communicator Internet Explorer 8 MAC Dell MAC Dell Netscape Communicator 5% ,25 300,5 218,375 61, , ,45 Internet Explorer ,25 297,5 250,375 37, , , , ,87 322,8 ( 9 10). 10 P- F SS df MS F ,76 5,3E-07 4, ,24 3,4E-21 4, ,95 1,1E-07 4, ,
40 , ( ). : 0 :. 1 :. 0 :. 1 :. 0 :. 1 : MAC Dell Netscape Communicator. 9 Internet Explorer Netscape Communicator Internet Explorer MAC Dell. 10 F, 39
41 ,. : (. 9,. 10), Dell, Mac, -. SPSS. :. browser 1 Netscape Communicator 2 Internet Explorer, computer 1 Mac 2 Dell ( 11, 12). 11 Between-Subjects Factors BROWSER COMPUTER 1,00 2,00 1,00 2,00 Value Label N NC 16 IE 16 mac 16 dell 16 40
42 Descriptive Statistics 12 Dependent Variable: T_LOAD BROWSE NC IE Total COMPUTE mac dell Total mac dell Total mac dell Total Dependent Variable: TIME Source Corrected Model Intercept BROWSER COMPUTER BROWSER * COMPUTER Error Total Corrected Total Mean Std. Deviation N 136,2500 7, , , , , ,2500 6, , , , , , , , , , , Tests of Between-Subjects Effects Type III Sum of Mean a. R Squared =,965 (Adjusted R Squared =,961) 13 Partial Eta Squares df Square F Sig. Squared ,5 a ,5 257,7,000, ,E ,000, , ,00 41,758,000, , ,2,000, , ,00 49,954,000, , , ,5 31 ( 13) 2 ( ). Corrected model,. Corrected total. Error,. 41
43 (. 12) MS Excel. 400 Estimated Marginal Means of TIME 400 Estimated Marginal Means of TIME Estimated Marginal Means ,00 BROWSER 1,00 2,00 2, ,00 COMPUTER 1,00 2,00 2,00 COMPUTER BROWSER. 12 Statistica. 1.? ( ). 5..? 6.??? ??? 42
44 4: : : Excel ; ; ; ; ; ;. :, ; ; ;, ; ;..,,.,,,,, 1 r 1. : X x Y y rxy ( )( ), s x, s y N s x s y X Y, x y, X Y ; N. 43
45 ,, ( ),. 2 (d) 8 : r 6 d 1 s 2 N ( N 1. ),. t-. r. r n 2 t ( n 2 1 r ) t- df=n ,.,,,.,,, 5 (. 86).,, Partitial Correlation.. ( ). Y X s yx sxy ( yx ) X Y ( xy ), s y sx ( yx = xy ). s x, s y 8,. 44
46 1 X Y ; s px ( y x y) n 2 yx 1 2 s xy p y ( x y x) n ; p x, p y X Y; x, y ; y x, x y ; n. - [17, 5, 18]., (.. 35, 41). 2. :,,.., X Y,, Y X,. ( ), X Y, Y X.. : X Y,,,. (r 2, 2 ),. - 2 ( m n m ) : F. 2 2 ( 1 )( m 1), m, ; n. F df 1 =m 1 df 2 =mn m [18].,, 45
47 Y=f(X)+,. Y x 1, x 2, x 3, x m. : y = a+bx 1 +cx 2 +.,, ( ), -,. R 2, Y,. 2 2 ( Y Y ) r, Y Y 2 ( Y Y ) ( Y), Y, ( ), Y =a+bx., X. Y X a b : 46 ( y y)( x x) b 2 ( x x) 2 ( y y) b r 2 ( x x) a y bx. X Y. ( )., t >t.,,.,,, F <F, 0. ( F >F ),
48 . - [14, 8, 14]. 1: 1 : MS Excel 1. X, Y, Z 9 (.,.194). 2. ( ), ( ), ( ), ( ). 3. r xy, r xz, r yz ( ). 4. : ) ; ) p- ( ): = (t, n-2, 2).,, (2 1 ). - ( =0,05 =0,01). <, 0 1,. >,,,. 5. ( ). 6.,, ( ). n : 9 ( x, y, z),,,, ( ),. 47
49 48 t n z 2 2 3, n, t, ( =0,05 t=1,96; =0,01 t=2,58); z. (. 1). 7. X Y, (15-20 ). -, (.. 10) (1; N) ( N )... ( ) : SPSS 1. Analyze > Correlate > Bivariate,, : (Pearson), - (Kendall s tau-b) (Spearman). Options. 2., Graps > Scatterplot > Simple. 3. Graps > Scatterplot > 3D. ( - SPSS Chart Object > Open 3D-Rotation ( ).
50 4. MS Excel.. 1 : Statistica 1. Statistics > Correlation Matrices. Advanced/Plot Summary: Correlation matrix ( ), Partitial correlation ( - ), Scatterplot matrix ( ), 3D scatterplots ( ). ( ).. 2. ( Statistics > Nonparametrics > Correlation (Spearman, Kendall tau, gamma) : 1. Y X X Y, () MS Excel. 2. (,, R 2 1).,. 3. Y.. 4. Y Z.. 5. SPSS (Analyze > Regression). SPSS Y Y X Z. 49
51 6. Analyze > Curve Estimation.,,. 7. Statistica (Statistics > Multiple Regression). Y Statistica. 8.,.. 1: ( ).. A B C D 1 (X) (Y) (Z) ,.. : MS Excel., 14, 14, D14 14 X Y (R xy ). 50
52 B14 C14 = (B2:B11;C2:C11) t- : =B14* (10 2)/ (1 B14^2) D14 ( ) = (C14;10 2;2) E14 = (D14<0,05;"H1";"H0") : (r) t- alpha Rxy 0, , , H1 Rxz 0, , , H1 Ryz 0, , , H1. r xy r xz =0,01. r yz =0,05. : 1% 5% = (B14) =2,58^2/$B18^2+3 =1,96^2/$B18^2+3 : n1% n5% Z(xy) 1, , , Z(xz) 1, , , Z(yz) 0, , , , n 10 ( ),,. Y Z 51
53 =0, Y. 13.,.,. Y (Y) ( ). 13 C X Y : A B C D E F G H 1 (X) (Y) (Z) Y ,5 0,5 0, ,5-4,5 20, ,5 0,5 0, ,5-0,5 0, d d^2 52
54 d 2 =37. r s = 0,776., ( ),. 14 Descriptive Statistics X Y Z Mean Std. Deviation N 36,60 7, ,10 4, ,50 1,72 10 SPSS., ( 14). 15. (*), =0,05, (**), =0,01. p-. MS Excel. 15 Correlations X Y Z Pearson Correlation Sig. (2-tailed) N Pearson Correlation Sig. (2-tailed) N Pearson Correlation Sig. (2-tailed) N X Y Z 1,000,827**,821**,,003, ,827** 1,000,718*,003,, ,821**,718* 1,000,004,019, **. Correlation is significant at the 0.01 level (2-tailed). *. Correlation is significant at the 0.05 level (2-tailed). 53
55 ( 16). Y Z. Nonparametric Correlations Correlations 16 Kendall's tau_b Spearman's rho X Y Z X Y Z Correlation Coefficient Sig. (2-tailed) N Correlation Coefficient Sig. (2-tailed) N Correlation Coefficient Sig. (2-tailed) N Correlation Coefficient Sig. (2-tailed) N Correlation Coefficient Sig. (2-tailed) N Correlation Coefficient Sig. (2-tailed) N **. Correlation is significant at the.01 level (2-tailed). X Y Z 1,000,682**,732**,,008, ,682** 1,000,458,008,, ,732**,458 1,000,005,080, ,000,772**,806**,,009, ,772** 1,000,565,009,, ,806**,565 1,000,005,089, SPSS Y X
56 (Analyze > Correlate > Partitial Correlation) Y : 17 Partitial Correlation Coefficients Controlling for.. 1,0000,5812 ( 0) ( 7) P=, =,101,5812 1,0000 ( 7) ( 0) =,101 P=, (Coefficient / (D.F.) / 2-tailed Significance) ", " is printed if a coefficient cannot be computed, (Y) (X) (Z) : r xy,z = 0,5812.,.. Statistica Statistics > Correlation Matrices : 1) : Correlations (Spreadsheet1 in Workbook1) Marked correlations are significant at p <,05000 N=10 (Casewise deletion of missing data) Var1 Var2 Var3 Variable Var1 1,00 0,83 0,82 Var2 0,83 1,00 0,72 Var3 0,82 0,72 1,00 2) (. 15) 55
57 3). Correlations (Spreadsheet1 in Workbook1 3v*10c) Var1 Var2 Var : ( ) ( ): A B ,? :, (. 16).,,,, (r ab 0). Analyze > Compare Means > Means SPSS. A, B.. Options Anova table and eta Test for linearity. 56
58 B : 1. Case Processing Summary. 2. Report,. 3. : 18 B * A Between Groups A Within Groups Total (Combined) Linearity Deviation from Linearity. 16 ANOVA Table Sum of Mean Squares df Square F Sig. 170, ,217 5,687,159,275 1,275,110, , ,484 6,194,147 5, , ,600 14, : Deviation from Linearity = 170,325 (Combined) = 170,6. 4. : Measures of Association R R Squared Eta Eta Squared B * A -,040,002,986,972 57
59 R ; R Squared ( ), ; Eta., A,,., : Measures of Association R R Squared Eta Eta Squared A * B -,040,002,793,628 Analyze > Descriptives > Crosstabs > Statistics (Eta): Directional Measures Nominal by Interval Eta A Dependent B Dependent Value,793,986 Analyze > Compare Means > Means 1 : Measures of Association Y * X R R Squared Eta Eta Squared,827,684,941,885 Measures of Association X * Y R R Squared Eta Eta Squared,827,684,951,905,,, Anova ( 19). 58
60 ANOVA Table 19 Y * X Between Groups Within Groups Total (Combined) Linearity Deviation from Linearity Sum of Mean Squares df Square F Sig. 128, ,319 2,198,348 99, ,101 11,892,075 29, ,855,583,743 16, , , % (29, ,233). 3: 1. : MS Excel ().. Y X (Y) y = 0,4339x + 7,2196 R 2 = 0, (X) (Y) ( (Y))
61 (R^2), (. 17).,,,,. (). - - (.. 11).., C14:D14. = (B2:B11; C2:C11; 1; 0). : 14=0,4339 b; D14=7,2196 a. : y=0,4339b+7,2196. (Y; X;.;.) ( ); (, );,, a ( a=0.=0);. = (B2:B11; C2:C11; 1; 1) 5 : 0,4339 7,2196 0,1043 3,8911 0,6839 2, , ,101 45,799 : b 0,4339 a 7,2196 0,1043 3,8911 b a R 2 0,6839 2, ,31 F- df ( ) 8 99,101 45,799 60
62 k+1 ( k ) : 1. : R 0, (R>0) R- 0, R- 0, , : df SS MS F F 1 99, , , , , , ,9 F F,. 3. : - t- - P- - 95% 95% Y- - 7, , , ,1006-1, ,1926 (X) 0, , , , , ,67437,, ( ). t : 61
63 - - (Y) (Y) 1 20, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , (. 18)
64
65 ,.., (. 5,. 89). ( ). SPSS Analyze > Regression > Linear. 19. : 1. : Model Summary b Model 1 Adjusted Std. Error of R R Square R Square the Estimate,827 a,684,644 2,3927 a. Predictors: (Constant), b. Dependent Variable: 2. - : ANOVA b Model 1 Regression Residual Total a. Predictors: (Constant), b. Dependent Variable: 3. : Sum of Mean Squares df Square F Sig. 360, ,018 17,310,003 a 166, , ,
66 Model 1 (Constant) Coefficients a Unstandardize d Coefficients Standardized Coefficients Std. B Error Beta t Sig.,188 8,870,021,984 1,576,379,827 4,161,003 a. Dependent Variable: MS Excel. (, ) SPSS Analyze > Regression > Curve Estimation (. 20).. 20, :, : 65
67 1. (MODEL): Dependent variable.. Method.. LINEAR Listwise Deletion of Missing Data Multiple R,82700 R Square,68392 Adjusted R Square,64441 Standard Error 2,39268 Dependent variable.. Method.. QUADRATI Listwise Deletion of Missing Data Multiple R,82825 R Square,68600 Adjusted R Square,59628 Standard Error 2,54948 :, 68% (R Square 0,68). 2. (Analysis of Variance): Dependent variable.. Dependent variable.. Method.. LINEAR Method.. QUADRATI DF Sum of Mean DF Sum of Mean Squares Square Squares Square Regression 1 99, ,1006 Regression 2 99, ,7005 Residuals 8 45,7994 5,7249 Residuals 7 45,7994 6,4999 F = 17,31038 Signif F =,0032 F = 7,64637 Signif F =,0173, : F = 99,1/5,72 = 17,31., (F = 49,7/6,49 =7,64637), ( =0,01, - = 0,0032, =0,05, - = 0,0173).,. 3. (Variables in the Equation): Dependent variable.. Method.. LINEAR Variable B SE B Beta T Sig T,4339,1043,8270 4,161,0032 (Constant) 7,2196 3,8911 1,855,
68 Dependent variable.. Method.. QUADRATI Variable B SE B Beta T Sig T,1796 1,188,342,151,88410 **2,0035,016,487,215,8359 (Constant) 11, ,329,549,5999. y=0,4339 x+7,2196. y=0,0035 x 2 +0,1796 x+11,7167. t-,,.. SPSS, Observed Linear Quadratic Statistica Graphs > 2D Scatterplots (. 22). 1.? 2.? 3.?
69 5.? 6.,? 7.? 8.? 9.? 10.?? 11.? 12. -?
70 5: : MS Excel, SPSS Statistica : ; -, ; ; MS Excel ;,, ,. :, ; ; ;, ; ; -. 69
71 ,,,,,,,,.., (. 6,. 112). (, ),, (,, )....,, A E., t A 3 t E 3, m m 6 6 m A m E 2 n n,,,,... [25,. 155], ( ). A <A, <E, A 6 ( n 1), A 3 ( n 1) ( n 3) 24 n ( n 2) ( n 3) E 5 2 ( n 2) ( n 3) ( n 5) A E 70
72 , : A 3 D( A ) E 5 D( E ).. (n 20).,,., [8]..., -, [8, 3].,,.,,,,,,,..,.,.,, (, ) ( ). H 0 : F(x)=F(x, ), F(x, ),, ( ). 71
73 H 0 : F ( x ) F( x, ),,. ˆ,. ˆ,.,, ( ) ,.,.. [3] : x i x i+1 xi x xi x zi zi 1 1 ( x S S, S ). X (x i, x i+1 ): Pi ( zi 1 ) ( zi ).,, : n i =N P i. N. N * i. n i ( zi ), i S [5]. 72
74 - -, ( ). -. (n<20) (, ), d emp d kr. d max n ( d ) n,. d ( 20, 21). 20 d max [19], d emp d kr. n p=0,05 p=0,01 n p=0,05 p=0,01 5 0,6074 0, ,1921 0, ,4295 0, ,1753 0, ,3507 0, ,1623 0, ,3037 0, ,1518 0, ,2716 0, , ,248 0, , ,2147 0,2574 >100 1,36/ n 1,63/ n 21 1-p [25] p 0,30 0,25 0,20 0,15 0,10 0,05 0,02 0,01 0,97 1,02 1,07 1,14 1,22 1,36 1,52 1,63,,, : p=0,2 p=0,3 [8, 25]. ( p-,., p<0,2, =0,2., 73
75 p>0,2 ( 0,2 p- ).) (,,, 10., D * 0 85 d n 0, 01. n D * : 22 0,15 0,10 0,05 0,03 0,01 D * 0,775 0,819 0,895 0,955 1,035 - n1 n2 d max, d max = n1 n2 ; n 1 n ,. 2 -,. 10 -,.: II < 74
76 - - (, ). -,. 2 -.,.. (..72). : 2 k i 1 ( f emp 2 f teor ) f teor, k ( ), f emp f teor,. df=k r 1, k, r.,, ( ) x S ( ). r=2, df=k 3.,.,,, ( ) (. 4,. 108). df=(k 1)(m 1), k ( ), m. 75
77 2. - 2,. -,, 5.. -, 2,,.. [5]. 2 - ( ).,,, 4,. ( - ) [22, 5, 18]: 2 2 ad cb : Q ; ad cb ad cb : ; ( a b )( d )( b d )( a c ) mxn 2 - : ; 2 N 2 V, k N ( k 1) ; 2 T, c k N ( c 1) ( k 1). N, 2,. 76
78 , 2,, : (0.05, 0.10,..., 0.95) > P k = (, ), =0.05, > Z k = ( ). 3. (P k, Z k ). 4.., (P k, Z k ) ,. 8. SPSS (Graphs > Q-Q Plots) (Graphs > P-P Plots)., - SPSS,. MS Excel. 9. SPSS Analyze > Descriptive Statistics > Explore: (Dependent List).,, (Factor List). Display Both,,. 77
79 (Stem-and-Leaf Plot). (, ) ( ). (Box Plot),. ( ). Normality plots with tests -. (Sig.) 0,05,. 10.,,. Data > Select Cases If condition 11. filter_$., filter_$= Statistica: Graphs > 2d Graphs > Normal Probability Plot > Quantile-Quantile Plot > Probability-Probability Plot. 12. Statistica, Statistica > Basic Statistics/Tables > Frequency Tables. Normality Frequency Tables 11 Data > Select Cases. Random sample of cases. 78
80 Kolmogorov-Smirnov test Test for normality. 13.., :. 2 : ( ): ) : A B C D 1 X = ( 2,, S, 0) =C2*N*k = ( 3,, S, 0) =C3*N*k 79
81 .. S ( ). 0 ( ) ( ), 1 ( ) ( )., 0.,. D, N, k. k=1 ( ). 1. ) : A B C D E F G 1 X i X i+1 Z i Z i+1 (Z i ) (Z i+1 ) ( (Z i+1 )- (Z i ))*N X i ; X i+1 ; Z i Z i+1. Z i = (X i ; ; ). Z i Z i+1 + ( ). (Z i ) (Z i+1 ). z 1 z 2 F( z ) e 2 2 dz (F(z)= (z)), 80
82 1 F( z ) ( z ), 2. (X i,, S, 1), ( Z i Z i+1 ). ( N), ( ) : ) 2 ( 2 2 ); ) - - (, ). 4.,. : : -,,,,, -, -,,. : (), (), F (), (), (), (), (), (), (), (), 2 : ( )
83 ,. 4., : ( ) SPSS C. 2. ) Transform >Count,. Count (Target Variable), Numeric Variables (Define Value), Range k through L ( k, L,,, 10, ) Range Lowest through L (, L). (Add) Values to Count. 2. ) Transform > Recode > Into Different Variables. Recode (Input Variables), (Output Variable), Old and New Values, Count. (Add) Old > New : Analyze Nonparametric Tests Chi-Square (All categories equal) : Graphs > Bar Charts > Simple. 82
84 6. ( ) Analyze > Nonparametric > 1-Sample K-S ( - ). Analyze > Descriptive Statistics > Explore. : [8], Nonparametric > 1-Sample K-S : p-,,, ( ). 7.. MS Excel. 2 : ( ) Statistica: 1. Statistica Statistica > Distribution Fitting. Distribution Fitting. (Continuous Distribution) (Normal), (Rectangular), (Exponential, Gamma, Log-normal, Chi-square, Weibull, Gompertz). (Descrete Distribution),,. 2. Fitting Continuous Distributions (Variable) (Distribution). Parameters ( ) (Numbers of categories), (Lower limit) (Upper limit). :, 1 (. 6). 3. Options,. 24: Combine Categiries ( ) Chi-Square test, 83
85 Yes (continuous) Kolmogorov- Smirnov test,, - (Frequency distribution) (Raw frequencyes) Graph. 4. Summary Plot of observed and expected distribution Quick :. 1. ( () ( ; )) (50-80 ): 1 ( ), 2 (0-5 ), 3 (1-3 ). : ). 84
86 2. 2: > , 3 >. 2 ( ), 3 ( ).. :!, SPSS 5. 1, 2, (Row) 3 (Column). Analyze > Descriptive Statistics > Crosstabs. 2 - V (. 4), Crosstabs >Statistics. 7.. Graphs > Bar Charts > Clustered. (Category Axis) 2, 3 (Define Clusters by) MS Excel
87 4: ( ). 1. MS Excel (.. 211), SPSS GSS93 subset.sav,.. ( / ) ( gunlaw). (sex), (marital) (relig). Analyze > Descriptive > Crosstabs (Row) gunlaw ( ), ( sex, relig, marital ) (Column). Statistics Chi-Square Phi and Kramer s V. Cells Counts Observed Expected ( )
88 K (Y;k) ( ) 122 0,05 139,6-1, ,1 148,8-1, ,15 152,4-1, ,2 153,4-0, , , , , ,35 155,6-0, ,4 157,6-0, ,45 160,6-0, , , ,55 163,8 0, ,6 166,4 0, ,65 168,4 0, ,7 169,8 0, , , ,8 175,4 0, ,85 177,2 1,04 0,9 179,2 1,28 0,95 183,6 1,64 2,00 1,50 1,00 0,50 0,00-0,50-1,00-1,50-2,
89 . 26
90 1: Y, 17- [8,. 104].,, Y?. (. 25) Y. ( ) 20. (,, 5, 10, 20 ). ( ) (Y, K), Y Y,., /(1 ).,, ( )...,. ( > ).,,. ( ). Y -. SPSS. (. 26), Analyze > Descriptive Statistics > Explore Plots. : 1. : 89
91 Y Case Processing Summary Cases Valid Missing Total N Percent N Percent N Percent ,0% 0,0% ,0% 2. ( - ) - ( 50 ). Tests of Normality Y Kolmogorov-Smirnov a Shapiro-Wilk Statistic df Sig. Statistic df Sig.,142 17,200*,966 17,713 *. This is a lower bound of the true significance. a. Lilliefors Significance Correction p- (Sig.). =0,2.,. 3. C (,. 188): Descriptives Y Mean 95% Confidence Interval for Mean Lower Bound Upper Bound Statistic Std. Error 162,5294 3, , ,9447 5% Trimmed Mean Median Variance Std. Deviation Minimum Maximum Range Interquartile Range Skewness Kurtosis 163, , ,890 16, ,00 194,00 72,00 20,5000 -,499,550 1,479 1,063 90
92 4. ( ). Histogram Frequency ,0 130,0 140,0 150,0 160,0 170,0 180,0 190,0 Std. Dev = 16,37 Mean = 162,5 N = 17,00 Y 5.. Stem-and-Leaf Plots Y Stem-and-Leaf Plot Frequency Stem & Leaf 1,00 Extremes (=<122) 1, , , , , , Stem width: 10,00 Each leaf: 1 case(s) , , : 14, 15..,. 17 3, 6 8, ,
93 6.. 2,0 Normal Q-Q Plot of Y 1,5 1,0,5 0,0 Expected Normal -,5-1,0-1,5-2, Observed Value, Y.,. 7.. Detrended Normal Q-Q Plots,4 Detrended Normal Q-Q Plot of Y,2 0,0 -,2 -,4 Dev from Normal -,6 -,8-1, Observed Value. (Y=0).
94 8. (Box Plot): N =, , 122 (, ). 17 Y Statistica : Tests of Normality (Spreadsheet1) N max D K-S Variable p Var2 17 0, p >.20 2:, -,. 52%, 20,, 5., 93
95 ?. : 0 : =0,52 ( =0,52, ). 1 : 0,52 ( =0,52, )., : 20*0,52=10,4; 20 10,4=9, df=2 1=1,. ( )., 0,5. R R Q.. f f R 0,5 R^2 Q/f 5 10,4 5,4 4,9 24 2, ,6 5,4 4,9 24 2, = 4,81 df=1 2 =4,81. 2 (, _ ) ,05= 2 (0,05; 1)=3, ,01= 2 (0,01; 1)=6,63.,. =0,05. 2 (, _ ) -.,,. =0,03. =0,05,, 94
96 . ( =0,01). MS Excel 2 ( ; ),,., - =0,016 0,03.,, : =0,05. -, SPSS Analyze > Nonparametric Tests > Chi-Square 1, 2, ( 20 ). Chi-Square Test (. 27) 95
97 (Expected Values). (All categories equal). (1 2). : Observed NExpected N Residual 5 10,4-5,4 15 9,6 5,4 Total 20 Test Statistics Chi-Square 5,841 df 1 Asymp. Sig.,016 a 0 cells (,0%) have expected frequencies less than 5. The minimum expected cell frequency is 9,6.,., MS Excel. ( ), 5,,. 2 : 53.,,? =164,4 S=5,14. 96
98 =5 14. (152) (181) A B C D E 1 2 X i X i , , , , , , ,4 11 5,14 D ( ) : =( (A3;$A$10;$A$11;1) (B3;$A$10;$A$11;1))*$A$9 2, 5. : , , , , ,8 0, , , , , , = 5, = 0, = 0, ( f e f t f t ) 2 d
99 2 ( 2 = 5,138095) 2 -, df=3., (df=4 3=1). 2 0,05 = 3, ,01 = 6, ,, ( 23, ). d max =0,11191.,, D *. 0, p<0,1 ( Explore SPSS Distribution Fitting Statistica). p- ( ). 2 : SPSS. -.. Analyze > Nonparametric Tests > 1- Sample K-S, :, (Uniform),. : 98
100 1) : One-Sample Kolmogorov-Smirnov Test 2 N Uniform Parameters a,b Most Extreme Differences Minimum Maximum Absolute Positive Negative Kolmogorov-Smirnov Z Asymp. Sig. (2-tailed) a. Test distribution is Uniform. b. Calculated from data. 2) : One-Sample Kolmogorov-Smirnov Test N Normal Parameters a,b Most Extreme Differences Kolmogorov-Smirnov Z Asymp. Sig. (2-tailed) a. Test distribution is Normal. Mean Std. Deviation Absolute Positive Negative VAR ,00 178,00,234,234 -,176 1,706,006 VAR ,3774 5,1374,116,116 -,111,846,472 b. Calculated from data. - = 1,706, - = 0,006.,. - = 0,846, - = 0,472. Explore (.. 90) (..74),. Analyze > Nonparametric Tests > Chi-square Test,,
101 ( ). ( Transform > Recode. 28) ,,, Chi-square Test. 6 : V2_RECOD Observed N Expected N Residual 1,00 3 2,9,1 2, ,6 -,6 3, ,3 6,7 Test Statistics: 4, ,5-6,5 V2_RECOD 5,00 5 4,8,2 Chi-Square a 5,277 6,00 1 1,0,0 df 5 Total 53 Asymp. Sig.,383
102 a 3 cells (50,0%) have expected frequencies less than 5. The minimum expected cell frequency is 1,0. : V2_RECOD Observed N Expected N Residual 2, ,5 -,5 3, ,3 6,7 Test Statistics 4, ,5-6,5 V2_RECOD,00 6 5,8,2 Chi-Square a 5,257 Total 53 df 3 Asymp. Sig.,154 a 0 cells (,0%) have expected frequencies less than 5. The minimum expected cell frequency is 5,8. Analyze > Descriptive Statistics > Explore : Tests of Normality VAR00002 Kolmogorov-Smirnov a Statistic df Sig.,116 53,072 a. Lilliefors Significance Correction (. 29). 3 Normal Q-Q Plot of VAR00002,6 Detrended Normal Q-Q Plot of VAR ,4 1,2 0 Expected Normal Dev from Normal 0,0 -,2 -, Observed Value Observed Value
103 2. Statistics > Distribution Fitting Statistica ( : p=n.s., not specified ). 18 Variable: NewVar, Distribution: Normal Kolmogorov-Smirnov d = 0,11617, p = n.s., Lilliefors p < 0,10 Chi-Square test = 10,42295, df = 4 (adjusted), p = 0, No. of observations Category (upper limits) (. 31), 2,., (. 32). 102
104 Variable: NewVar, Distribution: Normal Kolmogorov-Smirnov d = 0,11617, p = n.s., Lilliefors p < 0,10 Chi-Square test = 5,62148, df = 3, p = 0, No. of observations , , , , , , ,8333 Category (upper limits)
105 , Chi-Square Test Statistics > Nonparametric Statistics > Observed versus expected X2. ( ), (.. 33).. 33 (Summary) Observed vs. Expected Frequencies (Spreadsheet5r) Chi-Square = 51,37500 df = 5 p <, NOTE: Unequal sums of obs. & exp. frequencies observed expected O - E (O-E)**2 Case E T /E 104 C: 1 C: 2 C: 3 C: 4 C: 5 C: 6 Sum 3, , , , , , , , , , , , , , , , , , , , , , , , , , , , : : 1= ( ()*100+50;0); 2= ( ()*2+1;0); 3= ( ()+1;0).
106 ( , 2 1, 2, 3; 3 1, 2),,,. A2:C26, E2:G26. J1.,, K10:M13. K16:L18 : =$M10*K$13/$M$13 =$M10*L$13/$M$13 =$M11*K$13/$M$13 =$M11*L$13/$M$13 =$M12*K$13/$M$13 =$M12*L$13/$M$ L23 = 2 (K10:L12;K16:L18) p- 2, K23 = 2 (L23;2).,, 105
107 2 ( 3),., (0,14122), 15. SPSS,.. 35 Crosstabs. 25 (Expected) ,,. 106
108 A2 * A3 Crosstabulation 25 A2 Total 1,00 2,00 3,00 Count Expected Count Count Expected Count Count Expected Count Count Expected Count Chi-Square Tests A3 1,00 2,00 Total ,6 3,4 6, ,2 7,8 14, ,2 2,8 5, ,0 14,0 25,0 26 Pearson Chi-Square Likelihood Ratio Linear-by-Linear Association N of Valid Cases Asymp. Sig. Value df (2-sided),997 a 2,607,999 2,607,069 1,793 a. 4 cells (66,7%) have expected count less than 5. The minimum expected count is 2, A3 Count 0 1,00 2,00 3,00 1,00 2,00 A2. 36 Statistica Statistics > Basic Statistics/Tables > Tables and banners. 107
109 Crosstabulation Specify tables (select variables) - ( 2 3),. Crosstabulation Tables Results Options,,, Pearson Chi-square Cramer's V. Advanced: Summary ( 27), Detailed two-way tables ( 28),. A Totals 2-Way Summary Table: Observed F Marked cells have counts > 10 A3 A3 Row 1 2 Totals Statistic Pearson Chi-square M-L Chi-square Phi Contingency coefficient Cramer's V 28 Statistics: A3(2) x A2(3) (Spreadsheet5r) Chi-square df p, df=2 p=,60738, df=2 p=,60671, , , :,, ,, 112.,? ( )? : 108
110 : =. 3 =D3-B3 4.,,, (,,, ) ( 124:1876)... : =$D3*B$5/$D$5 =$D3*C$5/$D$5 = (G3:H3) =$D4*B$5/$D$5 =$D4*C$5/$D$5 = (G4:H4) = (G3:G4) = (H3:H4) = (I3:I4) : , ( ), : B C D f e f t 0, 5 2. f e, f t f t 2, 0,5 (Yate), df = 1 [ ]. 2 =84,
111 V = 0,145., 2, 0. = 2 (B3:C4;G3:H4) 1,82165E-20., 2, 1, ( 0,05 0,01 ). ( 0 ) :.,,. MS Excel,,,, 0., SPSS Statistica 4,, (. 3). Statistics > Nonparametric Statistics > Observed versus expected X2 Statistica,,,
112 Statistica 2 2 ( ). Statistica.> Nonparametric Statistics > 2 x 2 Tables : 29 2 x 2 Table (Spreadsheet2) Column 1 Column 2 Row Totals Frequencies, row 1 Percent of total Frequencies, row 2 Percent of total Column totals Percent of total Chi-square (df=1) V-square (df=1) Yates corrected Chi-square Phi-square Fisher exact p, one-tailed two-tailed McNemar Chi-square (A/D) Chi-square (B/C) ,600% 44,400% 50,000% ,600% 49,400% 50,000% ,200% 93,800% 85,98 p=0, ,93 p=0, ,26 p=0,0000, ,02 p=0, ,69 p=0, ? ( )? 2.,? 3. ( / )? 4. -? 5.?,? 6.?? 7.???
113 6: : : ; ;, ; ; ;. :, ; ; ;, SPSS Statistica.,,. : 1) (, ); 2), ; 3) ; 4).,,. -, 112
114 : ( ) ) 2 U - t- ) 3 S ; H ( ) ) 2 W- t- ; ) 3 G- ; - 2 x ; L-. 3. ) 2 - ; - - ) 2 - ; - - ; 4. ) rs - r xy - ; ) ( ) ) S- ) ; L-. 113
115 .,. (2 ),. G- :. ( ). W-.,.,,, : 0 :. 1 :. 1: G ,, ( ). 4.. G. 5. n G. 6. G >G,,
116 W-,, ( ). ( / ),. ( 5 50 ). : 0 :. 1 :. (, :, ). 2. W , ,. 6. (W). 7. n W. 8. W W, 0 :. ( ) 2 r,,. 115
117 , 3-, r 2 - df = m 1. m>3 n 9, m =4 n 4. : 0 :,. 1 :,, r 1. -,, ( ).. 4. : r T j n n m c m,, n, j r r r,, 0. L-, (n 12, m 6)..,. 116
118 : 0 :. 1 :. 4. L- 1. -,,,.. ( ) ( ).. 4. ( ). 5. L : L= (T j j), j ( ), T j ( ). 6. L. L L,, 0. U- -.,.,. n 1, n 2 > [19].,, 2,. : 0 : : : U
119 2.,, 1, (n 1 +n 2 ) ( ). 5. U : nx ( nx 1) U ( n1 n2 ) Tx, n 1 2 1, n 2 2,, n x. 6. U. U > U 0,05, 0. U,. ( ) H- - U- - :,...,.,. 2 -, 2 - df = m 1, m. 3 ( 2). : 0 :. 1 : ( ). 6: H
120 2.,, 1, (n 1 +n 2 + +n m ) ( ). 5. H : 2 12 T j H 3 ( N 1), N N( N 1) n j, n j j-, j j n j >5, H 2 - df=m 1 (m ),.,. S-., S-,,,. :,, 16. : 0 :. 16 MS Excel, SPSS (Transform > Compute) RND(UNIFORM(N)) Statistica (Function) Rnd(x) Uniform(x)
121 1 :. 7: S- 1., ( ). 4. ( ), (S i ). 5. S i : A= S i, m( m 1) 2 : B n, 2 m, n. 6. : S = 2 A B. 7..,. 120 [8, 19, 21]. ( ) (, ),.. xij M ( M j M ) Pi a ji eij. ( ), j, j-, -, a ji, ij, j- -.
122 : xij M Pj Pg Pi a jg a ji a gi a jgi eij. P j, P g a jg ; a ji, a gi, a jgi. 1),,, ; 2) F- (compound symmetry),, (, ).. -., -,,. F- ( ). (Mauchly)., -. (epsilon ajustment) F-., -, ( ). -, 121
123 . - (Box s M-test). 1: 1. (U- - ) (. 1 2,. 20) SPSS Statistica ( ) ( ) SPSS Statistica : 1. ( - ) 3 ( ) SPSS Statistica ( 2 r ) :. 1 2 (.. 22). MS Excel ( 2 - ), ,, U,. 122
124 SPSS Statistica (, ),. 123
125 SPSS IQ GROUP, Analyze > Nonparametric Tests > 2 Independent Samples,. 39. : Ranks, Test Statistics. Ranks GROUP 1,00 2,00 Total N Mean Rank Sum of Ranks N Mean Rank Sum of Ranks N IQ 14 11,79 165, ,50 186,00 26 Test Statistics b Mann-Whitney U Wilcoxon W Z Asymp. Sig. (2-tailed) Exact Sig. [2*(1-tailed Sig.)] a. Not corrected for ties. b. Grouping Variable: GROUP IQ 60, ,000-1,237,216,231 a, U. - (Asymp. Sig) ( ). Statistica Statistics > Nonparametric Statistics > Comparing two independent samples. Variables (IQ) (group)., SPSS: 124 Mann-Whitney U Test (Spreadsheet1) By variable group Marked tests are significant at p <,05000 Rank Sum Group1 165,0000 Rank Sum Group2 186,0000 U 60,00000 Z -1,23443 p-level 0, Z adjusted -1,23739 p-level 0, Valid N Group1 14 Valid N Group2 12 2*1sided exact p 0,231155
126 20 U, z- -. ( 20) (2*1 sided exact p). 1 U-.,.,. Statistica., :, ( ). 2: ( ). 2 2 (.. 22)
127 G- W- MS Excel.. 40 (X) (Y), D. D19 ( = (D8:D17;"<>0")). D18 G-,,, = (D8:D17;"<0"). E,, ( D). 20 W-,,,,. = (D8:D17;"<0";E8:E17).,.. 41 SPSS Analyze > Nonparametric Tests > 2 Related Samples. Test Pair(s) List; (. 41). G- (Sign Test) ( 31), -, 126
128 (, 0,05),. 31 Y - X Frequencies Negative Differences a Positive Differences b Ties c Total N Test Statistics b Exact Sig. (2-tailed) Y - X,180 a a. Binomial distribution used. b. Sign Test a. Y < X b. Y > X c. X = Y W- (Wilcoxon Signed Ranks Test),.,. z- - 0,028, = 0, Negative Ranks Positive Ranks Ties Total a. Y < X b. Y > X c. X = Y Ranks Y - X Sum Mean of N Rank Ranks 2 a 2,00 4,00 7 b 5,86 41,00 1 c 10 Test Statistics b Z Asymp. Sig. (2-tailed) Y - X -2,203 a,028 a. Based on negative ranks. b. Wilcoxon Signed Ranks Test G-, W- t-,,, P w > P t (0,028 > 0,022),,., 127
129 ,. Statistica Statistics > Nonparametric Statistics > Comparing two dependent samples. (. 42) (Sign test 33), (Wilcoxon matched pairs test 34).,, SPSS Sign Test (Spreadsheet1) Marked tests are significant at p <,05000 No. of Percent Z p-level Pair of Variables Non-ties v < V X & Y 9 77, , ,
130 34 Wilcoxon Matched Pairs Test (Spreadsheet1) Marked tests are significant at p <,05000 Valid T Z p-level Pair of Variables N X & Y 10 4, , , : 1 3 (. 36).,., 1, 2, 3.. SPSS Analyze > Nonparametric Tests > K Independent Samples , (9,809) (0,007),,. 129
131 Ranks Test Statistics a,b 1,00 2,00 3,00 Mean N Rank 10 9, , ,40 Chi-Square df Asymp. Sig. 9,809 2,007 a. Total Kruskal Wallis Test 30 b. Grouping Variable:.,. 35.,, -,,,., ( j <P H ),,. Jonckheere-Terpstra Test a 35 Number of Levels in N Observed J-T Statistic Mean J-T Statistic Std. Deviation of J-T Statistic Std. J-T Statistic Asymp. Sig. (2-tailed) , ,000 26,300 3,384,001 a. Grouping Variable: Statistica Statistics > Nonparametric Statistics > 130
132 Comparing multiple indep. samples (groups). (Summary: Kruskal-Wallis ANOVA & Median Test) ( 36). SPSS. 36 Kruskal-Wallis ANOVA by Ranks; (Spreadsheet1) Independent (grouping) variable: Kruskal-Wallis test: H ( 2, N= 30) =9, p =,0074 Depend.: Code Valid N Sum of Ranks Grp ,0000 Grp ,0000 Grp ,0000 (Box & whisker). 45. SPSS,. -. Statistica. 131
133 Boxplot by Group 7,0 Variable: 6,5 6,0 5,5 5,0 4,5 4,0 3,5 3,0 2,5 2, Mean ±SE ±SD : ( ). 9,,. 5 : 2 3.,???
134 : 0 :,,. 1:,, r MS Excel. H14, ( 3), H15 -, = 2 (H14;4).,.. 46 Analyze > Nonparametric Tests > K Related Samples SPSS Statistics > Nonparametrics > Comparing multiple dep. samples (variables). W ( ), 0 1. W 1 ( W = 0,876). 38, SPSS. 133
135 Friedman Test Kendall's W Test 38 Test Statistics a N 9 Chi-Square 31,545 df 4 Asymp. Sig.,000 a. Friedman Test Test Statistics N Kendall's W a Chi-Square df Asymp. Sig. 9,876 31,545 4,000 a. Kendall's Coefficient of Concordance 39, Statistica, Friedman ANOVA and Kendall Coeff. of Concordance (Spreadsheet1) ANOVA Chi Sqr. (N = 9, df = 4) = 31,54545 p <,00000 Coeff. of Concordance =,87626 Aver. rank r =,86080 Average Sum of Mean Std.Dev. Variable Rank Ranks week1 4, , , , week2 4, , , , week3 2, , , , week4 1, , , , week5 1, , , ,116363
136 4 : 4. : 0( ) : ( ). 1( ) :. 0( ) : ( ) (, ). 1( ) :. MS Excel,, (. 120)., , MS Excel, 135
137 ( B2:F11, ),,.,, ( 40).,,. 48, , , , , , , , ,94444 SS df MS F P- F 2449, ,3 34,1537 2,08E-12 2, , , , ( ), ( 3:F11, ). 41,
138 . 49,., 40 41, ,6 66, ,4 60, ,2 39, ,4 61, ,8 121, , ,8 100, ,8 55, ,2 70,2 SS df MS F P- F 486, , , , , , , , (. 120). Q total = 3166,31, Q A = 2449,2, Q I = 486,71., : Q Z =Q total Q A Q I. : F ( ) Q m 1 Q n 1 Q z (m-1)(n-1) Q total mn 1 S S S 2 A 2 I 2 z QA m 1 QI n 1 ( m Q z 1)( n 1) S FA S S FI S 2 A 2 z 2 I 2 z 137
139 : Q df S 2 F P F 0, , ,30 85,042 1,39E-16 2, , ,84 8,450 7,13E-07 2, , , , ,96 ( ),.. SPSS Analyze > General Linear Model > Repeated Measures., (. 50), (Within-Subject Factor Name) (, week). Add.. 50 (Repeated Measures), Define,. 1 week1 (. 51). (Model). Full Factorial. (Plots) 138
140 (Options),, (Estimates of effect size).. 51 Tests of Within Subjects Effects ) 42). Q A Q z ( Sum of Squares), F A. 42 Tests of Within-Subjects Effects Source WEEK Error (WEEK) Sphericity Assumed Greenhouse-Geisser Huynh-Feldt Lower-bound Sphericity Assumed Greenhouse-Geisser Huynh-Feldt Lower-bound Type III Sum of Measure MEASURE_1 Mean Parti al Eta Squ Squares df Square F Sig. ared 2449, ,300 85,0,000, ,200 2, ,577 85,0,000, ,200 4, ,300 85,0,000, ,200 1, ,200 85,0,000, , , ,400 21,903 10, ,400 32,000 7, ,400 8,000 28, Measure: MEASURE_1 Transformed Variable: Average Source Intercept Error Tests of Between-Subjects Effects Partial Type III Sum Eta of Squares df Mean Square F Sig. Squared 7893, , ,7,000, , ,
141 Tests of Between-Subjects Effects ( 43), (Q I S 2 I). (Univariate approach) -,. (Mauchly s Test of Sphericity).,, (Epsilon Corrected). (Sig. = 0,537), ( 44). 44 Measure: MEASURE_1 Mauchly's Test of Sphericity b Within Subjects Effect Mauchly's W Approx. Chi-Square df Sig. Greenhous e-geisser Epsilon a Huynh- Feldt Lowerbound WEEK,282 8,114 9,537,684 1,000,250 Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent variables is proportional to an identity matrix. a. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are displayed in the Tests of Within-Subjects Effects table. b. Design: Intercept Within Subjects Design: WEEK ( ). (Linear), ( 2 =0,96). 140
142 45 Measure: MEASURE_1 Source WEEK Error (WEEK) WEEK Linear Quadratic Cubic Order 4 Linear Quadratic Cubic Order 4 Tests of Within-Subjects Contrasts Type III Sum of Mean Partial Eta Squares df Square F Sig. Squared 2016, ,4 190,2,000,960 89, ,175 12,011,008, , ,711 53,918,000,871 86, ,914 14,451,005,644 84, ,600 59, ,425 38, ,761 48, ,014, Graph > Line, Multiple Values of Individual Cases.,. 52,, ,. 48, Data > Transpose. Statistica ( ) Statistics > ANOVA > Repeated measures ANOVA. 141
143 Statistics > General Linear/ Nonlinear Models > General Linear Models > Repeated measures ANOVA WEEK1 WEEK2 10 WEEK3 Value WEEK4 WEEK5 PATIENT. 53 ( week1-week5 ) (, SPSS, ). WEEK (Within effects) (. 54).. 54 Summary GLM results (. 55) : 142
144 1. All effects/ Graphs, : 30 WEEK; LS Means Current effect: F(4, 32)=85,042, p=,00000 Effective hypothesis decomposition Vertical bars denote 0,95 confidence intervals DV_ week1 week2 week3 week4 week5 WEEK 2. Sphericity : Mauchley Sphericity Test (Spreadsheet1) Sigma-restricted parameterization Effective hypothesis decomposition W Chi-Sqr. df p Effect WEEK 0, , , All effects : Repeated Measures Analysis of Variance (Spreadsheet1) Sigma-restricted parameterization Effect Intercept Error WEEK Error Effective hypothesis decomposition SS Degr. of MS F p Freedom 7893, , ,7474 0, , , , ,300 85,0417 0, , ,200 1.?? ?
145 5.. ( )? 6.?. 7.? ( )? 10.? 11.? 144
146 7: : : ; ; ; ;, ; ; ;. :, ; ( ); SPSS STATISTICA; ; ;.., ( ),,.,, [8]. 145
147 :,. : 1. ( ).,. 2. ( ). 3. ( ). ( ) : 1. (, ),,. 2., ( ),,. : 1., (,, ). 2., ( ), (, ).,, ( ) (, ),.,.,. 146
148 g, p, n i i, N, : : g 2; : n i 2;,, 2: 0 < p < (N 2); ; ; ; -., p-.,,.,.,,. h- (g 1)-.,.,, ( ).. : f km =u 0 +u 1 X 1km +u 2 X 2km + +u p X pkm, f km m- k; X ikm X i m- k; u i. 147
149 , : 2 D ( X G k ) ( N g ) p p i 1 j 1 a ( X X ij i ik )( X j X jk ), D 2 (X G k ) k., -, 2 -. :,.,.. - g 1, : i k 11, k i,, i -, g., ( ),. 1,. -, 2 2 p g (p k)(g k 1): N 1 ln k
150 (0,05 0,01), ;,,. : Pr(X G k ),,.. - 1,,,. (, ).,, Pr( X Gk ) Pr( Gk X ) g. Pr( X G ) i 1 i 1.,. ( )..., 50%, 25%. =60% ( ),. 149
151 nc - : N g i 1 g i 1 p n i p n i i i, n c, p i g. p i n i i 1, , =0,93, 93%, ( 1 14). -,,.,,.,.,,,.... 0,. 150
152 F- ( ).. F-. F- F-,. [24, 9, 10, 12]. 1: SPSS 1. SPSS, (, cars.sav, world95.sav). 2. SPSS Analyse >Classify >Discriminant. 3., (Grouping Variable).. 4.,, (Independent Variables). 5. (Use stepwise method). 6., Method, Wilk s lambda, F- (Entry) F- (Removal). 7., Statistics,, (Means) (Univariate ANOVAs), Unstandardized Function Coefficients ( ). 8., Classify,, 151
153 (Prior probabilities: Compute of group size); (Display: Casewise results) (Display: Summary table); (Plots: Combined groups)., Territorial map :????. 2: Statistica 1. Statistica : Statistics > Multivariate Exploratory Techniques > Discriminant Function Analysis. (Grouping) (Independent). (Advanced options (stepwise analysis)). 2. (Model Definition) Advanced (.. 60): ) Forward stepwise F- (, F- ). Standart. Backward stepwise F-. 152
154 ),. ) F- F- ( F to enter > F to remove ). ) (,, F / ). ) (At each step) (Summary only). 3. (Discriminant Function Analysis Results)., SPSS.. ( 46)
155 (, )., 14,,,,,,., ( ),,,. 1. SPSS,, ( v1, v2, v3, v4, v5, v6, v7 v8). Discriminant Analysis Classification Statistics Method. 56,. 57,
156
157 : 1. (Tests of Equality of Group Means). 2. (Pooled Within-Groups Matrices). 3. /,. - ( 47). 16. (, F- / ).. 47 Variables Entered/Removed a,b,c,d Step 1 2 Entered Statistic df1 df2 df3 Wilks' Lambda, ,000 17, ,000,001, ,000 10, ,000,000 Statistic df1 Exact F At each step, the variable that minimizes the overall Wilks' Lambda is entered. a. Maximum number of steps is 16. b. Minimum partial F to enter is c. Maximum partial F to remove is d. F level, tolerance, or VIN insufficient for further computation. df2 Sig. 4. Variables in the Analysis ( ), F
158 Step 1 2 Variables in the Analysis Wilks' Tolerance F to Remove Lambda 1,000 17,917,595 29,413,641,595 6,457, Variables Not in the Analysis,. F- :,,., 0,1 F- (2,71). F-. 6. Wilks' Lambda Variables Entered/Removed (. 47),. 7. Eigenvalues ( ), 20 (95,5% 4,5%). Eigenvalues Function 1 2 Canonical Eigenvalue % of Variance Cumulative % Correlation 8,327 a 95,5 95,5,945,396 a 4,5 100,0,533 First 2 canonical discriminant functions were used in the analysis. a. 8. Wilks' Lambda =0,077 (Sig 0) =0,716 (Sig = 0,092).,., 157
159 .,. (Sig < 0,05). Test of Function(s) 1 through 2 2 Wilks' Lambda Wilks' Lambda Chi-square df Sig.,077 21,817 4,000,716 2,837 1, (Standardized Canonical Discriminant Function Coefficients).,. Standardized Canonical Discriminant Function Coefficients Function 1 2 -,959,872 1,283, : Canonical Discriminant Function Coefficients (Constant) Function 1 2 -,393,357,809,117-4,328-3,454 Unstandardized coefficients 11. (Structure Matrix),.,,.,,. 158
160 Structure Matrix a a a a a a Function 1 2,521*,177,506*,175 -,14,990*,672,740* -,10,671* -,26,559*,274,519*,185,291* *. Largest absolute correlation between each variable and any discriminant function a. This variable not used in the analysis. 12. (Functions at Group Centroids),. 1, 2 1, 3. Functions at Group Centroids Function 1 2-3,420,195 2,308,403,713 -,931 Unstandardized canonical discriminant functions evaluated at group means
161 Prior Probabilities for Groups Total Cases Used in Analysis Prior Unweighted Weighted, ,000, ,000, ,000 1, , Territorial Map. *. Territorial Map Canonical Discriminant Function 2-6,0-4,0-2,0,0 2,0 4,0 6, , * *, * , ,0-4,0-2,0,0 2,0 4,0 6,0 Canonical Discriminant Function 1 _Symbols used in territorial map Symbol Group Label * Indicates a group centroid 15. (Casewise Statistics) : (Actual) (Predicted).,,. 160
162 14, Casewise Statistics Original Highest Group Second Highest Group Discriminant Scores Case Number Actual Group Predicted Group P(D>d G=g) P(G=g D=d) p df Squared Mahalanobis Distance to Centroid Group P(G=g D=d) Squared Mahalanobis Distance to Centroid Function 1 Function ,702 2,967,708 2,033 8,493,157-1, ,653 2,840,854 3,160 3,142 1,430, , ,000 1,460 3,000 29,860-4,628,227 ungrouped 1, ,000 1,460 3,000 29,860-4,628, ,540 2,999 1,232 3,001 14,410-2,593, ,260 2,583 2,690 2,417 4,384 2,169-1, ,433 2,972 1,676 3,028 7,764 3,418 -, ,300 2,997 2,410 3,003 12,782 3,072 1, , ,000,239 3,000 15,791-3,009,461 ungrouped 2,100 2,512 4,606 3,484 3,698,228, ,920 2,957,168 3,043 5,352 2,239, ,255 2,760 2,735 2,220 6,238 -,189, , ,000 1,080 3,000 17,327-3,449 -, **,798 2,654,451 2,346 2,748 1,383 -,972 **. Misclassified case 16. Classification Results, 11/12=91,7%. -. (..13) pini = (0,333*4)+(0,417*5)+(0,250*3) = 4, ,17 = 6,833, 12 4,17=7, ,872, 87,2% ( 1 8). 161
163 Original Count % Classification Results a Ungrouped cases Ungrouped cases Predicted Group Membership Total ,0,0,0 100,0,0 80,0 20,0 100,0,0,0 100,0 100,0 50,0 50,0,0 100,0 a. 91,7% of original grouped cases correctly classified. 17. (Canonical Discriminant Functions).. 2,0 Canonical Discriminant Function 1,5 1,0,5 0,0 1 2 Function 2 -,5-1,0-1,5-2, Group Centroid Ungrouped Case Function 1 162
164 1 (Statistica). Statistica (Statistics > Multivariate Exploratory Techniques > Discriminant Function Analysis) Model Definition Descriptives ( Within) (Pooled within-groups covariance & correlation), (Means & number of cases), (Within groups standart deviations) ( ). (Categorized histogram, Box plot of means, Categorized scatterplot, Categorized normal probability plot), All Cases (Total correlation...), (Plot of total correlations) (Box plot of means).. 60 Model Definition,. 60 ( ). (0,01),,. 163
165 Quick Discriminant Function Analysis Results, ( 49 50): 49 Discriminant Function Analysis Summary (liri) Step 2, N of vars in model: 2; Grouping: group (3 grps) Wilks' Lambda:,07679 approx. F (4,16)=10,435 p<,0002 Wilks' Partial F-remove p-level Toler. 1-Toler. N=12 Lambda Lambda (2,8) (R-Sqr.) v8 v3 0, , , , , , , , , , , , N=12 v1 v2 v4 v5 v6 v7 50 Variables currently not in the model (liri) Df for all F-tests: 2,7 Wilks' Partial F to p-level Toler. 1-Toler. Lambda Lambda enter (R-Sqr.) 0, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , SPSS ( ). Advanced (Stepwize Analysis Summary).3 1 (Perform canonical analysis). Advanced Canonical Analysis : 1) (Chi square tests of successive roots) ; 2).9 1 ; 3) (Factor structure Matrix), (.11 1 ); 164
166 4) (Means of canonical variables). Canonical scores Canonical Analysis : 1) (Scatterplot of canonical scores); 3,0 2,5 2,0 1,5 1,0 Root 1 vs. Root 2 Root 2 0,5 0,0-0,5-1,0-1,5-2,0-2, Root 1 G_1:1 G_2:2 G_3: ) (Canonical scores for each case) ( SPSS 48); 3) SPSS, (Save canonical score). Classification Discriminant Function Analysis Results. 62: 1) (Classification functions) ( ); 2) (Classification matrix)., SPSS (.16 1 ), 91,7%. 165
167 3) (Classification of cases). SPSS 48. 4) (Squared Mahalanobis distances) SPSS 48;. 62 Classification Matrix (liri) Rows: Observed classifications Columns: Predicted classifications Group Percent Correct G_1:1 p=,33333 G_2:2 p=,41667 G_3:3 p=,25000 G_1:1 100, G_2:2 80, G_3:3 100, Total 91, ) ( 51) (Posterior probabilities) ( 48 1 ) ( )., 166
168 , Classification Proportional to group sizes (. 62). 51 Posterior Probabilities (liri in Workbook4) Incorrect classifications are marked with * Observed G_1:1 G_2:2 G_3:3 Case Classif. p=,33333 p=,41667 p=, G_3:3 0, , , G_2:2 0, , , G_1:1 0, , , , , , G_1:1 0, , , G_3:3 0, , , G_2:2 0, , , G_2:2 0, , , G_1:1 0, , , , , , G_2:2 0, , , G_3:3 0, , , G 1:1 0, , ,000222,.,,,,., 4 1, 10 2, 14 3, /. 1..? 2.?? 3.?? 4., - 2?? 167
169 5.?? 6.??? ( )? 7. F-? 8.? 9.,? 10.? 168
170 8: : : ; ( ); ; ; ;. :, ;,, ; -,.. (. cluster,,,, ),. : ; ; ;.,, (, ).. :.,
171 . 50 [7]. :. : ; ; ( )., ( ). ( ) (AGglomerative NESting, AGNES)..,. ( ) (Divisive ANAlysis, DIANA),.,. (. 65).. : 1) ; 2), 170
172 ( ), ( ).,., : ;. : ; ( / );.. : r i j n k 1 ( x ik x n n 2 ( xik xi ) k 1 k 1 i )( x jk ( x x jk j ) x., : 1) d m 2 2i j ( xit x jt ) t 1, t. 2 2 d2 i j ( xi x j ) ( yi y j ). d ij i- j-, X Y. 2) ( ): d H i j m t 1 x it x jt, ( ), ; j )
173 3) : d i j max xit x jt, 1 t m, ; 4) : d M (x i, x j ) = (x i x j ) S -1 (x i x j ) t, ; m 1 xit x jt 5) : d Li j, m t 1 xit x jt,.. [24]. a, b, c, d 2 2-, a d S, 1, a b c d a J a b c. 1 a b 0 c d ( ) : 172 s ij p k 1 S ijk / p k 1 W ijk, W ijk, 1, k, 0 ; S ijk i j k.,,. (, ).,, 1) ( ) ;
174 2) ( ) ; 3) ; 4) ( ); 5) ; 6) :,. SPSS Statistica Cluster Method Amalgamation (linkage) rule Between-group linkage Weighted pair-group average Within-group Unweighted pairgroup linkage average Nearest neightbor Single linkage Furthest neighbor Complete linkage Centroid clustering Unweighted pairgroup centroid Median clustering Weighted pair-group centroid (median) Ward s method Ward s method Measure Distance measure Interval Squared Euclidian Squared Euclidian distance distance Euclidian distance Euclidian distance Block City-block (Manhattan) distances 173
175 Chebyshev Chebyshev distance metric Minkowski Power: SUM(ABS(xy)**p)**1/r Percent disagreement Pearson correlation 1-Pearson r 1 Cosine (1 ) Count Chi-square measure - Phi- square measure Binary 17 Jaccard - ( ) :. k-. k ( ). k..,.,, 17, SPSS. 174
176 ,,.,. k-,,, ( ).,,. Fuzzy C-Means - [7]. [6]. - [24]. 1: SPSS: Analyze > Classify > Hierarchical Cluster... (Variable(s)),. 3. : Plots... Dendrogram; Method (Cluster Method) Betweengroup linkage, (Measure) Squared Euclidian distance. 18 (. 3,. 184).,, SPSS,,. 175
177 , Save (Single solution). 4. : (..173). 5.,, ( ). 6. Statistica: Statistics > Multivariate Exploratory Technics > Cluster Analysis > Joining (Tree Clustering). : (Variables), (Amalgamation rule), (Distance measure), 19 (Input file) : (Cases (rows)) 20 (Variables (columns)). 7. Advanced. 7.,. 2: 1. (, 2-4). 2. k- SPSS: Analyze > Classify > K-Means Cluster Analysis. : (Variables) (Number of clusters. Iterate (Maximum Iterations) Use running means. Save Options. 3. Final Cluster Centers. 4. k- Statistica (Kmeans clustering), 19 (Raw data) (Distance Matrix). 20, (Raw data). 176
178 : (Variables), 21 (Cluster), (Number of clusters), (Number of iterations) 22 (Initial cluster centers). 5.,.. 1: ( ). SPSS, 46 (.153).,,. Cluster Variables (. 63) Sort distances and take observations at constant intervals. 177
179 . 64 Statistica,, Plots Dendrogram. Method (. 64). (. 65),,.. 65 Agglomaration Schedule Vertical Icicle. 178
SPSS Guide For MMI 409
SPSS Guide For MMI 409 by John Wong March 2012 Preface Hopefully, this document can provide some guidance to MMI 409 students on how to use SPSS to solve many of the problems covered in the D Agostino
More informationSPSS LAB FILE 1
SPSS LAB FILE www.mcdtu.wordpress.com 1 www.mcdtu.wordpress.com 2 www.mcdtu.wordpress.com 3 OBJECTIVE 1: Transporation of Data Set to SPSS Editor INPUTS: Files: group1.xlsx, group1.txt PROCEDURE FOLLOWED:
More informationTextbook Examples of. SPSS Procedure
Textbook s of IBM SPSS Procedures Each SPSS procedure listed below has its own section in the textbook. These sections include a purpose statement that describes the statistical test, identification of
More informationDescriptive Statistics
*following creates z scores for the ydacl statedp traitdp and rads vars. *specifically adding the /SAVE subcommand to descriptives will create z. *scores for whatever variables are in the command. DESCRIPTIVES
More informationANCOVA. Psy 420 Andrew Ainsworth
ANCOVA Psy 420 Andrew Ainsworth What is ANCOVA? Analysis of covariance an extension of ANOVA in which main effects and interactions are assessed on DV scores after the DV has been adjusted for by the DV
More informationDegrees of freedom df=1. Limitations OR in SPSS LIM: Knowing σ and µ is unlikely in large
Z Test Comparing a group mean to a hypothesis T test (about 1 mean) T test (about 2 means) Comparing mean to sample mean. Similar means = will have same response to treatment Two unknown means are different
More informationDETAILED CONTENTS PART I INTRODUCTION AND DESCRIPTIVE STATISTICS. 1. Introduction to Statistics
DETAILED CONTENTS About the Author Preface to the Instructor To the Student How to Use SPSS With This Book PART I INTRODUCTION AND DESCRIPTIVE STATISTICS 1. Introduction to Statistics 1.1 Descriptive and
More informationunadjusted model for baseline cholesterol 22:31 Monday, April 19,
unadjusted model for baseline cholesterol 22:31 Monday, April 19, 2004 1 Class Level Information Class Levels Values TRETGRP 3 3 4 5 SEX 2 0 1 Number of observations 916 unadjusted model for baseline cholesterol
More informationESP 178 Applied Research Methods. 2/23: Quantitative Analysis
ESP 178 Applied Research Methods 2/23: Quantitative Analysis Data Preparation Data coding create codebook that defines each variable, its response scale, how it was coded Data entry for mail surveys and
More informationFrequency Distribution Cross-Tabulation
Frequency Distribution Cross-Tabulation 1) Overview 2) Frequency Distribution 3) Statistics Associated with Frequency Distribution i. Measures of Location ii. Measures of Variability iii. Measures of Shape
More informationGlossary. The ISI glossary of statistical terms provides definitions in a number of different languages:
Glossary The ISI glossary of statistical terms provides definitions in a number of different languages: http://isi.cbs.nl/glossary/index.htm Adjusted r 2 Adjusted R squared measures the proportion of the
More informationAssoc.Prof.Dr. Wolfgang Feilmayr Multivariate Methods in Regional Science: Regression and Correlation Analysis REGRESSION ANALYSIS
REGRESSION ANALYSIS Regression Analysis can be broadly defined as the analysis of statistical relationships between one dependent and one or more independent variables. Although the terms dependent and
More informationContents. Acknowledgments. xix
Table of Preface Acknowledgments page xv xix 1 Introduction 1 The Role of the Computer in Data Analysis 1 Statistics: Descriptive and Inferential 2 Variables and Constants 3 The Measurement of Variables
More informationNon-parametric tests, part A:
Two types of statistical test: Non-parametric tests, part A: Parametric tests: Based on assumption that the data have certain characteristics or "parameters": Results are only valid if (a) the data are
More informationRepeated-Measures ANOVA in SPSS Correct data formatting for a repeated-measures ANOVA in SPSS involves having a single line of data for each
Repeated-Measures ANOVA in SPSS Correct data formatting for a repeated-measures ANOVA in SPSS involves having a single line of data for each participant, with the repeated measures entered as separate
More informationN Utilization of Nursing Research in Advanced Practice, Summer 2008
University of Michigan Deep Blue deepblue.lib.umich.edu 2008-07 536 - Utilization of ursing Research in Advanced Practice, Summer 2008 Tzeng, Huey-Ming Tzeng, H. (2008, ctober 1). Utilization of ursing
More informationExam details. Final Review Session. Things to Review
Exam details Final Review Session Short answer, similar to book problems Formulae and tables will be given You CAN use a calculator Date and Time: Dec. 7, 006, 1-1:30 pm Location: Osborne Centre, Unit
More informationCHAPTER 17 CHI-SQUARE AND OTHER NONPARAMETRIC TESTS FROM: PAGANO, R. R. (2007)
FROM: PAGANO, R. R. (007) I. INTRODUCTION: DISTINCTION BETWEEN PARAMETRIC AND NON-PARAMETRIC TESTS Statistical inference tests are often classified as to whether they are parametric or nonparametric Parameter
More informationsphericity, 5-29, 5-32 residuals, 7-1 spread and level, 2-17 t test, 1-13 transformations, 2-15 violations, 1-19
additive tree structure, 10-28 ADDTREE, 10-51, 10-53 EXTREE, 10-31 four point condition, 10-29 ADDTREE, 10-28, 10-51, 10-53 adjusted R 2, 8-7 ALSCAL, 10-49 ANCOVA, 9-1 assumptions, 9-5 example, 9-7 MANOVA
More informationPrepared by: Prof. Dr Bahaman Abu Samah Department of Professional Development and Continuing Education Faculty of Educational Studies Universiti
Prepared by: Prof. Dr Bahaman Abu Samah Department of Professional Development and Continuing Education Faculty of Educational Studies Universiti Putra Malaysia Serdang Use in experiment, quasi-experiment
More informationGeneral Linear Model
GLM V1 V2 V3 V4 V5 V11 V12 V13 V14 V15 /WSFACTOR=placeholders 2 Polynomial target 5 Polynomial /METHOD=SSTYPE(3) /EMMEANS=TABLES(OVERALL) /EMMEANS=TABLES(placeholders) COMPARE ADJ(SIDAK) /EMMEANS=TABLES(target)
More informationResearch Methodology Statistics Comprehensive Exam Study Guide
Research Methodology Statistics Comprehensive Exam Study Guide References Glass, G. V., & Hopkins, K. D. (1996). Statistical methods in education and psychology (3rd ed.). Boston: Allyn and Bacon. Gravetter,
More informationIn many situations, there is a non-parametric test that corresponds to the standard test, as described below:
There are many standard tests like the t-tests and analyses of variance that are commonly used. They rest on assumptions like normality, which can be hard to assess: for example, if you have small samples,
More informationT. Mark Beasley One-Way Repeated Measures ANOVA handout
T. Mark Beasley One-Way Repeated Measures ANOVA handout Profile Analysis Example In the One-Way Repeated Measures ANOVA, two factors represent separate sources of variance. Their interaction presents an
More informationBasic Statistical Analysis
indexerrt.qxd 8/21/2002 9:47 AM Page 1 Corrected index pages for Sprinthall Basic Statistical Analysis Seventh Edition indexerrt.qxd 8/21/2002 9:47 AM Page 656 Index Abscissa, 24 AB-STAT, vii ADD-OR rule,
More informationGlossary for the Triola Statistics Series
Glossary for the Triola Statistics Series Absolute deviation The measure of variation equal to the sum of the deviations of each value from the mean, divided by the number of values Acceptance sampling
More informationSolutions exercises of Chapter 7
Solutions exercises of Chapter 7 Exercise 1 a. These are paired samples: each pair of half plates will have about the same level of corrosion, so the result of polishing by the two brands of polish are
More information" M A #M B. Standard deviation of the population (Greek lowercase letter sigma) σ 2
Notation and Equations for Final Exam Symbol Definition X The variable we measure in a scientific study n The size of the sample N The size of the population M The mean of the sample µ The mean of the
More informationTypes of Statistical Tests DR. MIKE MARRAPODI
Types of Statistical Tests DR. MIKE MARRAPODI Tests t tests ANOVA Correlation Regression Multivariate Techniques Non-parametric t tests One sample t test Independent t test Paired sample t test One sample
More informationWeek 7.1--IES 612-STA STA doc
Week 7.1--IES 612-STA 4-573-STA 4-576.doc IES 612/STA 4-576 Winter 2009 ANOVA MODELS model adequacy aka RESIDUAL ANALYSIS Numeric data samples from t populations obtained Assume Y ij ~ independent N(μ
More informationChapter 8 (More on Assumptions for the Simple Linear Regression)
EXST3201 Chapter 8b Geaghan Fall 2005: Page 1 Chapter 8 (More on Assumptions for the Simple Linear Regression) Your textbook considers the following assumptions: Linearity This is not something I usually
More informationChapter Fifteen. Frequency Distribution, Cross-Tabulation, and Hypothesis Testing
Chapter Fifteen Frequency Distribution, Cross-Tabulation, and Hypothesis Testing Copyright 2010 Pearson Education, Inc. publishing as Prentice Hall 15-1 Internet Usage Data Table 15.1 Respondent Sex Familiarity
More informationAppendix A Summary of Tasks. Appendix Table of Contents
Appendix A Summary of Tasks Appendix Table of Contents Reporting Tasks...357 ListData...357 Tables...358 Graphical Tasks...358 BarChart...358 PieChart...359 Histogram...359 BoxPlot...360 Probability Plot...360
More informationNemours Biomedical Research Statistics Course. Li Xie Nemours Biostatistics Core October 14, 2014
Nemours Biomedical Research Statistics Course Li Xie Nemours Biostatistics Core October 14, 2014 Outline Recap Introduction to Logistic Regression Recap Descriptive statistics Variable type Example of
More information1 Introduction to Minitab
1 Introduction to Minitab Minitab is a statistical analysis software package. The software is freely available to all students and is downloadable through the Technology Tab at my.calpoly.edu. When you
More informationStatistics Handbook. All statistical tables were computed by the author.
Statistics Handbook Contents Page Wilcoxon rank-sum test (Mann-Whitney equivalent) Wilcoxon matched-pairs test 3 Normal Distribution 4 Z-test Related samples t-test 5 Unrelated samples t-test 6 Variance
More informationTurning a research question into a statistical question.
Turning a research question into a statistical question. IGINAL QUESTION: Concept Concept Concept ABOUT ONE CONCEPT ABOUT RELATIONSHIPS BETWEEN CONCEPTS TYPE OF QUESTION: DESCRIBE what s going on? DECIDE
More informationReadings Howitt & Cramer (2014) Overview
Readings Howitt & Cramer (4) Ch 7: Relationships between two or more variables: Diagrams and tables Ch 8: Correlation coefficients: Pearson correlation and Spearman s rho Ch : Statistical significance
More informationUnit 14: Nonparametric Statistical Methods
Unit 14: Nonparametric Statistical Methods Statistics 571: Statistical Methods Ramón V. León 8/8/2003 Unit 14 - Stat 571 - Ramón V. León 1 Introductory Remarks Most methods studied so far have been based
More informationGLM Repeated-measures designs: One within-subjects factor
GLM Repeated-measures designs: One within-subjects factor Reading: SPSS dvanced Models 9.0: 2. Repeated Measures Homework: Sums of Squares for Within-Subject Effects Download: glm_withn1.sav (Download
More informationDISCOVERING STATISTICS USING R
DISCOVERING STATISTICS USING R ANDY FIELD I JEREMY MILES I ZOE FIELD Los Angeles London New Delhi Singapore j Washington DC CONTENTS Preface How to use this book Acknowledgements Dedication Symbols used
More informationEntering and recoding variables
Entering and recoding variables To enter: You create a New data file Define the variables on Variable View Enter the values on Data View To create the dichotomies: Transform -> Recode into Different Variable
More information3 Joint Distributions 71
2.2.3 The Normal Distribution 54 2.2.4 The Beta Density 58 2.3 Functions of a Random Variable 58 2.4 Concluding Remarks 64 2.5 Problems 64 3 Joint Distributions 71 3.1 Introduction 71 3.2 Discrete Random
More informationHypothesis testing, part 2. With some material from Howard Seltman, Blase Ur, Bilge Mutlu, Vibha Sazawal
Hypothesis testing, part 2 With some material from Howard Seltman, Blase Ur, Bilge Mutlu, Vibha Sazawal 1 CATEGORICAL IV, NUMERIC DV 2 Independent samples, one IV # Conditions Normal/Parametric Non-parametric
More informationIntroduction and Descriptive Statistics p. 1 Introduction to Statistics p. 3 Statistics, Science, and Observations p. 5 Populations and Samples p.
Preface p. xi Introduction and Descriptive Statistics p. 1 Introduction to Statistics p. 3 Statistics, Science, and Observations p. 5 Populations and Samples p. 6 The Scientific Method and the Design of
More informationUNIVERSITY OF TORONTO. Faculty of Arts and Science APRIL 2010 EXAMINATIONS STA 303 H1S / STA 1002 HS. Duration - 3 hours. Aids Allowed: Calculator
UNIVERSITY OF TORONTO Faculty of Arts and Science APRIL 2010 EXAMINATIONS STA 303 H1S / STA 1002 HS Duration - 3 hours Aids Allowed: Calculator LAST NAME: FIRST NAME: STUDENT NUMBER: There are 27 pages
More informationY (Nominal/Categorical) 1. Metric (interval/ratio) data for 2+ IVs, and categorical (nominal) data for a single DV
1 Neuendorf Discriminant Analysis The Model X1 X2 X3 X4 DF2 DF3 DF1 Y (Nominal/Categorical) Assumptions: 1. Metric (interval/ratio) data for 2+ IVs, and categorical (nominal) data for a single DV 2. Linearity--in
More informationReadings Howitt & Cramer (2014)
Readings Howitt & Cramer (014) Ch 7: Relationships between two or more variables: Diagrams and tables Ch 8: Correlation coefficients: Pearson correlation and Spearman s rho Ch 11: Statistical significance
More informationSection 4.6 Simple Linear Regression
Section 4.6 Simple Linear Regression Objectives ˆ Basic philosophy of SLR and the regression assumptions ˆ Point & interval estimation of the model parameters, and how to make predictions ˆ Point and interval
More informationPsy 420 Final Exam Fall 06 Ainsworth. Key Name
Psy 40 Final Exam Fall 06 Ainsworth Key Name Psy 40 Final A researcher is studying the effect of Yoga, Meditation, Anti-Anxiety Drugs and taking Psy 40 and the anxiety levels of the participants. Twenty
More informationAnalysis of 2x2 Cross-Over Designs using T-Tests
Chapter 234 Analysis of 2x2 Cross-Over Designs using T-Tests Introduction This procedure analyzes data from a two-treatment, two-period (2x2) cross-over design. The response is assumed to be a continuous
More informationSTATISTICS ( CODE NO. 08 ) PAPER I PART - I
STATISTICS ( CODE NO. 08 ) PAPER I PART - I 1. Descriptive Statistics Types of data - Concepts of a Statistical population and sample from a population ; qualitative and quantitative data ; nominal and
More informationMultivariate Tests. Mauchly's Test of Sphericity
General Model Within-Sujects Factors Dependent Variale IDLS IDLF IDHS IDHF IDHCLS IDHCLF Descriptive Statistics IDLS IDLF IDHS IDHF IDHCLS IDHCLF Mean Std. Deviation N.0.70.0.0..8..88.8...97 Multivariate
More informationHandout 1: Predicting GPA from SAT
Handout 1: Predicting GPA from SAT appsrv01.srv.cquest.utoronto.ca> appsrv01.srv.cquest.utoronto.ca> ls Desktop grades.data grades.sas oldstuff sasuser.800 appsrv01.srv.cquest.utoronto.ca> cat grades.data
More informationSPSS Output. ANOVA a b Residual Coefficients a Standardized Coefficients
SPSS Output Homework 1-1e ANOVA a Sum of Squares df Mean Square F Sig. 1 Regression 351.056 1 351.056 11.295.002 b Residual 932.412 30 31.080 Total 1283.469 31 a. Dependent Variable: Sexual Harassment
More informationEDF 7405 Advanced Quantitative Methods in Educational Research MULTR.SAS
EDF 7405 Advanced Quantitative Methods in Educational Research MULTR.SAS The data used in this example describe teacher and student behavior in 8 classrooms. The variables are: Y percentage of interventions
More informationWORKSHOP 3 Measuring Association
WORKSHOP 3 Measuring Association Concepts Analysing Categorical Data o Testing of Proportions o Contingency Tables & Tests o Odds Ratios Linear Association Measures o Correlation o Simple Linear Regression
More informationTopic 23: Diagnostics and Remedies
Topic 23: Diagnostics and Remedies Outline Diagnostics residual checks ANOVA remedial measures Diagnostics Overview We will take the diagnostics and remedial measures that we learned for regression and
More informationParametric versus Nonparametric Statistics-when to use them and which is more powerful? Dr Mahmoud Alhussami
Parametric versus Nonparametric Statistics-when to use them and which is more powerful? Dr Mahmoud Alhussami Parametric Assumptions The observations must be independent. Dependent variable should be continuous
More informationCorrelations. Notes. Output Created Comments 04-OCT :34:52
Correlations Output Created Comments Input Missing Value Handling Syntax Resources Notes Data Active Dataset Filter Weight Split File N of Rows in Working Data File Definition of Missing Cases Used Processor
More informationFormulas and Tables by Mario F. Triola
Copyright 010 Pearson Education, Inc. Ch. 3: Descriptive Statistics x f # x x f Mean 1x - x s - 1 n 1 x - 1 x s 1n - 1 s B variance s Ch. 4: Probability Mean (frequency table) Standard deviation P1A or
More informationANOVA in SPSS. Hugo Quené. opleiding Taalwetenschap Universiteit Utrecht Trans 10, 3512 JK Utrecht.
ANOVA in SPSS Hugo Quené hugo.quene@let.uu.nl opleiding Taalwetenschap Universiteit Utrecht Trans 10, 3512 JK Utrecht 7 Oct 2005 1 introduction In this example I ll use fictitious data, taken from http://www.ruf.rice.edu/~mickey/psyc339/notes/rmanova.html.
More informationTransition Passage to Descriptive Statistics 28
viii Preface xiv chapter 1 Introduction 1 Disciplines That Use Quantitative Data 5 What Do You Mean, Statistics? 6 Statistics: A Dynamic Discipline 8 Some Terminology 9 Problems and Answers 12 Scales of
More informationStatistical. Psychology
SEVENTH у *i km m it* & П SB Й EDITION Statistical M e t h o d s for Psychology D a v i d C. Howell University of Vermont ; \ WADSWORTH f% CENGAGE Learning* Australia Biaall apan Korea Меяко Singapore
More informationNonparametric Statistics. Leah Wright, Tyler Ross, Taylor Brown
Nonparametric Statistics Leah Wright, Tyler Ross, Taylor Brown Before we get to nonparametric statistics, what are parametric statistics? These statistics estimate and test population means, while holding
More informationCan you tell the relationship between students SAT scores and their college grades?
Correlation One Challenge Can you tell the relationship between students SAT scores and their college grades? A: The higher SAT scores are, the better GPA may be. B: The higher SAT scores are, the lower
More informationGLM Repeated Measures
GLM Repeated Measures Notation The GLM (general linear model) procedure provides analysis of variance when the same measurement or measurements are made several times on each subject or case (repeated
More informationThis is a Randomized Block Design (RBD) with a single factor treatment arrangement (2 levels) which are fixed.
EXST3201 Chapter 13c Geaghan Fall 2005: Page 1 Linear Models Y ij = µ + βi + τ j + βτij + εijk This is a Randomized Block Design (RBD) with a single factor treatment arrangement (2 levels) which are fixed.
More informationData are sometimes not compatible with the assumptions of parametric statistical tests (i.e. t-test, regression, ANOVA)
BSTT523 Pagano & Gauvreau Chapter 13 1 Nonparametric Statistics Data are sometimes not compatible with the assumptions of parametric statistical tests (i.e. t-test, regression, ANOVA) In particular, data
More informationSAS Procedures Inference about the Line ffl model statement in proc reg has many options ffl To construct confidence intervals use alpha=, clm, cli, c
Inference About the Slope ffl As with all estimates, ^fi1 subject to sampling var ffl Because Y jx _ Normal, the estimate ^fi1 _ Normal A linear combination of indep Normals is Normal Simple Linear Regression
More informationGROUPED DATA E.G. FOR SAMPLE OF RAW DATA (E.G. 4, 12, 7, 5, MEAN G x / n STANDARD DEVIATION MEDIAN AND QUARTILES STANDARD DEVIATION
FOR SAMPLE OF RAW DATA (E.G. 4, 1, 7, 5, 11, 6, 9, 7, 11, 5, 4, 7) BE ABLE TO COMPUTE MEAN G / STANDARD DEVIATION MEDIAN AND QUARTILES Σ ( Σ) / 1 GROUPED DATA E.G. AGE FREQ. 0-9 53 10-19 4...... 80-89
More informationSubject CS1 Actuarial Statistics 1 Core Principles
Institute of Actuaries of India Subject CS1 Actuarial Statistics 1 Core Principles For 2019 Examinations Aim The aim of the Actuarial Statistics 1 subject is to provide a grounding in mathematical and
More informationANOVA Longitudinal Models for the Practice Effects Data: via GLM
Psyc 943 Lecture 25 page 1 ANOVA Longitudinal Models for the Practice Effects Data: via GLM Model 1. Saturated Means Model for Session, E-only Variances Model (BP) Variances Model: NO correlation, EQUAL
More informationREVIEW 8/2/2017 陈芳华东师大英语系
REVIEW Hypothesis testing starts with a null hypothesis and a null distribution. We compare what we have to the null distribution, if the result is too extreme to belong to the null distribution (p
More informationChapter 13 Correlation
Chapter Correlation Page. Pearson correlation coefficient -. Inferential tests on correlation coefficients -9. Correlational assumptions -. on-parametric measures of correlation -5 5. correlational example
More informationTABLES AND FORMULAS FOR MOORE Basic Practice of Statistics
TABLES AND FORMULAS FOR MOORE Basic Practice of Statistics Exploring Data: Distributions Look for overall pattern (shape, center, spread) and deviations (outliers). Mean (use a calculator): x = x 1 + x
More informationBasics on t-tests Independent Sample t-tests Single-Sample t-tests Summary of t-tests Multiple Tests, Effect Size Proportions. Statistiek I.
Statistiek I t-tests John Nerbonne CLCG, Rijksuniversiteit Groningen http://www.let.rug.nl/nerbonne/teach/statistiek-i/ John Nerbonne 1/46 Overview 1 Basics on t-tests 2 Independent Sample t-tests 3 Single-Sample
More informationSP S SS PSS SS.! " # $ "
SPSS SPSS.!"# $" "# $% &'( " ) ''$%" FE 5,"- II ')" $. "/ #0 -$%.$.54 6 ( #6."0 6 SPSS )#0-7 0"8".)90 6, 7:9".)90 ' "-' '7 7#6# ' "7&9"#-:. (http://www.watpon.com) '6 ')..$.54 7/ " '"'( 6 '6 '). 54 :'.'
More informationThe entire data set consists of n = 32 widgets, 8 of which were made from each of q = 4 different materials.
One-Way ANOVA Summary The One-Way ANOVA procedure is designed to construct a statistical model describing the impact of a single categorical factor X on a dependent variable Y. Tests are run to determine
More informationEXST7015: Estimating tree weights from other morphometric variables Raw data print
Simple Linear Regression SAS example Page 1 1 ********************************************; 2 *** Data from Freund & Wilson (1993) ***; 3 *** TABLE 8.24 : ESTIMATING TREE WEIGHTS ***; 4 ********************************************;
More informationInstitute of Actuaries of India
Institute of Actuaries of India Subject CT3 Probability and Mathematical Statistics For 2018 Examinations Subject CT3 Probability and Mathematical Statistics Core Technical Syllabus 1 June 2017 Aim The
More informationBusiness Statistics. Lecture 10: Course Review
Business Statistics Lecture 10: Course Review 1 Descriptive Statistics for Continuous Data Numerical Summaries Location: mean, median Spread or variability: variance, standard deviation, range, percentiles,
More informationFrom Practical Data Analysis with JMP, Second Edition. Full book available for purchase here. About This Book... xiii About The Author...
From Practical Data Analysis with JMP, Second Edition. Full book available for purchase here. Contents About This Book... xiii About The Author... xxiii Chapter 1 Getting Started: Data Analysis with JMP...
More informationData Analysis. Associate.Prof.Dr.Ratana Sapbamrer Department of Community Medicine, Faculty of Medicine Chiang Mai University
Data Analysis Associate.Prof.Dr.Ratana Sapbamrer Department of Community Medicine, Faculty of Medicine Chiang Mai University Topic Outline Data analysis for descriptive statistics (qualitative data) Data
More informationCorrelation. A statistics method to measure the relationship between two variables. Three characteristics
Correlation Correlation A statistics method to measure the relationship between two variables Three characteristics Direction of the relationship Form of the relationship Strength/Consistency Direction
More informationIntroduction to Linear regression analysis. Part 2. Model comparisons
Introduction to Linear regression analysis Part Model comparisons 1 ANOVA for regression Total variation in Y SS Total = Variation explained by regression with X SS Regression + Residual variation SS Residual
More informationMANOVA is an extension of the univariate ANOVA as it involves more than one Dependent Variable (DV). The following are assumptions for using MANOVA:
MULTIVARIATE ANALYSIS OF VARIANCE MANOVA is an extension of the univariate ANOVA as it involves more than one Dependent Variable (DV). The following are assumptions for using MANOVA: 1. Cell sizes : o
More informationIntroduction to Statistical Analysis
Introduction to Statistical Analysis Changyu Shen Richard A. and Susan F. Smith Center for Outcomes Research in Cardiology Beth Israel Deaconess Medical Center Harvard Medical School Objectives Descriptive
More informationBIOS 6222: Biostatistics II. Outline. Course Presentation. Course Presentation. Review of Basic Concepts. Why Nonparametrics.
BIOS 6222: Biostatistics II Instructors: Qingzhao Yu Don Mercante Cruz Velasco 1 Outline Course Presentation Review of Basic Concepts Why Nonparametrics The sign test 2 Course Presentation Contents Justification
More informationDATA ANALYSIS. Faculty of Civil Engineering
DATA ANALYSIS Faculty of Civil Engineering DATA DATA - Introduction Data is a collection of facts, such as numbers, words, measurements, observations or even just descriptions of things. Qualitative data
More information1 A Review of Correlation and Regression
1 A Review of Correlation and Regression SW, Chapter 12 Suppose we select n = 10 persons from the population of college seniors who plan to take the MCAT exam. Each takes the test, is coached, and then
More informationIntroduction to inferential statistics. Alissa Melinger IGK summer school 2006 Edinburgh
Introduction to inferential statistics Alissa Melinger IGK summer school 2006 Edinburgh Short description Prereqs: I assume no prior knowledge of stats This half day tutorial on statistical analysis will
More informationRank-Based Methods. Lukas Meier
Rank-Based Methods Lukas Meier 20.01.2014 Introduction Up to now we basically always used a parametric family, like the normal distribution N (µ, σ 2 ) for modeling random data. Based on observed data
More informationMcGill University. Faculty of Science MATH 204 PRINCIPLES OF STATISTICS II. Final Examination
McGill University Faculty of Science MATH 204 PRINCIPLES OF STATISTICS II Final Examination Date: 20th April 2009 Time: 9am-2pm Examiner: Dr David A Stephens Associate Examiner: Dr Russell Steele Please
More informationSTATISTICS REVIEW. D. Parameter: a constant for the case or population under consideration.
STATISTICS REVIEW I. Why do we need statistics? A. As human beings, we consciously and unconsciously evaluate whether variables affect phenomena of interest, but sometimes our common sense reasoning is
More informationAnalysis of Covariance (ANCOVA) with Two Groups
Chapter 226 Analysis of Covariance (ANCOVA) with Two Groups Introduction This procedure performs analysis of covariance (ANCOVA) for a grouping variable with 2 groups and one covariate variable. This procedure
More informationTaguchi Method and Robust Design: Tutorial and Guideline
Taguchi Method and Robust Design: Tutorial and Guideline CONTENT 1. Introduction 2. Microsoft Excel: graphing 3. Microsoft Excel: Regression 4. Microsoft Excel: Variance analysis 5. Robust Design: An Example
More informationAnalysis of repeated measurements (KLMED8008)
Analysis of repeated measurements (KLMED8008) Eirik Skogvoll, MD PhD Professor and Consultant Institute of Circulation and Medical Imaging Dept. of Anaesthesiology and Emergency Medicine 1 Day 2 Practical
More informationAnalyzing Small Sample Experimental Data
Analyzing Small Sample Experimental Data Session 2: Non-parametric tests and estimators I Dominik Duell (University of Essex) July 15, 2017 Pick an appropriate (non-parametric) statistic 1. Intro to non-parametric
More informationIn Class Review Exercises Vartanian: SW 540
In Class Review Exercises Vartanian: SW 540 1. Given the following output from an OLS model looking at income, what is the slope and intercept for those who are black and those who are not black? b SE
More information