Correlation Coefficient

For a 2×2 table of probabilities with binary variables X and Y:

                 Y = 0       Y = 1
    X = 0     \pi_{11}    \pi_{12}     \pi_{1+}
    X = 1     \pi_{21}    \pi_{22}     \pi_{2+}
              \pi_{+1}    \pi_{+2}

Product moment correlation coefficient:

$$ \rho = \mathrm{Corr}(X, Y) = \frac{E(XY) - E(X)E(Y)}{\sqrt{V(X)\,V(Y)}}
       = \frac{\pi_{22} - \pi_{2+}\pi_{+2}}{\sqrt{\pi_{1+}\pi_{2+}\pi_{+1}\pi_{+2}}}
       = \frac{\pi_{11}\pi_{22} - \pi_{12}\pi_{21}}{\sqrt{\pi_{1+}\pi_{2+}\pi_{+1}\pi_{+2}}} $$

where

$$ E(X) = \pi_{2+} = \pi_{21} + \pi_{22}, \qquad E(Y) = \pi_{+2} = \pi_{12} + \pi_{22}, \qquad E(XY) = \pi_{22}, $$
$$ V(X) = \pi_{2+}(1 - \pi_{2+}) = \pi_{1+}\pi_{2+}, \qquad V(Y) = \pi_{+2}(1 - \pi_{+2}) = \pi_{+1}\pi_{+2}. $$

Estimation:

$$ r = \frac{p_{11}p_{22} - p_{12}p_{21}}{\sqrt{p_{1+}p_{2+}p_{+1}p_{+2}}}
     = \frac{Y_{11}Y_{22} - Y_{12}Y_{21}}{\sqrt{Y_{1+}Y_{2+}Y_{+1}Y_{+2}}} $$

Properties:

1. $-1 \le \rho \le 1$
2. Independence model $\Rightarrow \rho = 0$
3. $\rho = 1$ when $\pi_{12} = \pi_{21} = 0$; $\rho = -1$ when $\pi_{11} = \pi_{22} = 0$
4. $\rho$ is margin sensitive

Note that $r^2 = X^2/n$, i.e.,

$$ X^2 = \frac{n(Y_{11}Y_{22} - Y_{12}Y_{21})^2}{Y_{1+}Y_{2+}Y_{+1}Y_{+2}} $$

for 2×2 tables.
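The identity $X^2 = n r^2$ is easy to check numerically. A minimal sketch in plain Python (the 2×2 counts are hypothetical, not from these notes):

```python
import math

# Hypothetical 2x2 table of counts Y[i][j]
Y = [[10, 5],
     [5, 20]]

n = sum(sum(row) for row in Y)
rt = [sum(Y[0]), sum(Y[1])]                    # row totals Y1+, Y2+
ct = [Y[0][0] + Y[1][0], Y[0][1] + Y[1][1]]    # column totals Y+1, Y+2

# r = (Y11 Y22 - Y12 Y21) / sqrt(Y1+ Y2+ Y+1 Y+2)
r = (Y[0][0]*Y[1][1] - Y[0][1]*Y[1][0]) / math.sqrt(rt[0]*rt[1]*ct[0]*ct[1])

# Pearson chi-square from expected counts Yi+ Y+j / n
chi2 = sum((Y[i][j] - rt[i]*ct[j]/n)**2 / (rt[i]*ct[j]/n)
           for i in range(2) for j in range(2))
```

For this table r = 175/375 and the Pearson statistic agrees with n·r² up to floating-point error.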
Measures of Association for I×J tables based on Pearson's X²

$$ \Phi^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} \frac{(\pi_{ij} - \pi_{i+}\pi_{+j})^2}{\pi_{i+}\pi_{+j}} $$

Note that

$$ X^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} \frac{(Y_{ij} - Y_{i+}Y_{+j}/n)^2}{Y_{i+}Y_{+j}/n}
       = n \sum_{i=1}^{I}\sum_{j=1}^{J} \frac{(P_{ij} - P_{i+}P_{+j})^2}{P_{i+}P_{+j}} $$

where $n = Y_{++}$, $P_{ij} = Y_{ij}/n$, $P_{i+} = Y_{i+}/n$, $P_{+j} = Y_{+j}/n$.

Then $\hat\Phi^2 = X^2/n$ is a consistent estimator of $\Phi^2$, but $0 \le \Phi^2 \le \min\{I-1,\, J-1\}$.

Pearson's measure of mean square contingency:

$$ P = \sqrt{\frac{\Phi^2}{\Phi^2 + 1}}, \qquad 0 \le P \le \sqrt{\frac{I-1}{I}} \;\text{ for } I \times I \text{ tables} $$

Estimation:

$$ \hat P = \sqrt{\frac{X^2}{X^2 + n}} $$

Cramér's V:

$$ V = \sqrt{\frac{\Phi^2}{\min\{I-1,\, J-1\}}}, \qquad 0 \le V \le 1 $$

Estimation:

$$ \hat V = \sqrt{\frac{X^2/n}{\min\{I-1,\, J-1\}}} $$
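A minimal sketch in plain Python (the two small tables are made up for illustration) showing the two endpoints of Cramér's V: exactly 0 under exact independence and exactly 1 for a diagonal 2×2 table:

```python
import math

def chi2_stat(Y):
    """Pearson X^2 for a table of counts; returns (n, X^2)."""
    n = sum(map(sum, Y))
    rt = [sum(row) for row in Y]
    ct = [sum(col) for col in zip(*Y)]
    chi2 = sum((Y[i][j] - rt[i]*ct[j]/n)**2 / (rt[i]*ct[j]/n)
               for i in range(len(Y)) for j in range(len(Y[0])))
    return n, chi2

def cramers_v(Y):
    """V-hat = sqrt((X^2/n) / min(I-1, J-1))."""
    n, chi2 = chi2_stat(Y)
    k = min(len(Y) - 1, len(Y[0]) - 1)
    return math.sqrt((chi2/n) / k)

# Counts proportional to an outer product: exact independence, V = 0
v_indep = cramers_v([[1, 2, 3], [2, 4, 6], [3, 6, 9]])
# All mass on the diagonal of a 2x2 table: V = 1
v_diag = cramers_v([[5, 0], [0, 5]])
```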
Example: Association between diagnosis and treatment prescribed by psychiatrists in New Haven, Conn. (1950)

                      Psycho-    Organic    Custodial
                      therapy    Therapy    Care
    Affective            30        102        280
    Alcoholic            48         23         20
    Organic              19         80         75
    Schizophrenic       121        344        382
    Senile               18         11        141

/* This program is stored as assoc.sas */
/* This program uses PROC FREQ in SAS to test for
   independence between diagnosis and prescribed
   treatment in the 1950 New Haven study. */
DATA SET1;
INPUT D T X;
LABEL D = DIAGNOSIS  T = TREATMENT;
CARDS;
1 1 30
1 2 102
1 3 280
2 1 48
2 2 23
2 3 20
3 1 19
3 2 80
3 3 75
4 1 121
4 2 344
4 3 382
5 1 18
5 2 11
5 3 141
RUN;
PROC PRINT DATA=SET1;
TITLE 'DATA FOR THE NEW HAVEN STUDY';
PROC FORMAT;
VALUE DFMT 1='AFFECTIVE' 2='ALCOHOLIC' 3='ORGANIC'
           4='SCHIZOPHRENIC' 5='SENILE';
VALUE TFMT 1='PSYCHOTHERAPY' 2='ORGANIC THERAPY'
           3='CUSTODIAL CARE';
RUN;
PROC FREQ DATA=SET1 ORDER=INTERNAL;
TABLES D*T / CHISQ MEASURES SCORES=TABLE ALPHA=.05
             NOPERCENT NOCOL EXPECTED;
WEIGHT X;
FORMAT D DFMT.;
FORMAT T TFMT.;
TITLE 'ANALYSIS OF THE NEW HAVEN DATA';
RUN;
DATA FOR THE NEW HAVEN STUDY

    Obs   D   T     X
      1   1   1    30
      2   1   2   102
      3   1   3   280
      4   2   1    48
      5   2   2    23
      6   2   3    20
      7   3   1    19
      8   3   2    80
      9   3   3    75
     10   4   1   121
     11   4   2   344
     12   4   3   382
     13   5   1    18
     14   5   2    11
     15   5   3   141

ANALYSIS OF THE NEW HAVEN DATA

Table of D by T

D(DIAGNOSIS)   T(TREATMENT)

Frequency
Expected
Row Pct          PSYCHO-     ORGANIC     CUSTODIAL    Total
                 THERAPY     THERAPY     CARE
AFFECTIVE             30         102         280        412
                  57.398       136.2       218.4
                    7.28       24.76       67.96
ALCOHOLIC             48          23          20         91
                  12.678      30.083       48.24
                   52.75       25.27       21.98
ORGANIC               19          80          75        174
                  24.241      57.521      92.238
                   10.92       45.98       43.10
SCHIZOPHRENIC        121         344         382        847
                     118         280         449
                   14.29       40.61       45.10
SENILE                18          11         141        170
                  23.684      56.198      90.118
                   10.59        6.47       82.94
Total                236         560         898       1694

Statistics for Table of D by T

Statistic                      DF       Value      Prob
Chi-Square                      8    259.9367    <.0001
Likelihood Ratio Chi-Square     8    238.6614    <.0001
Mantel-Haenszel Chi-Square      1      3.0511    0.0807
Phi Coefficient                        0.3917
Contingency Coefficient                0.3647
Cramer's V                             0.2770

Statistic                            Value       ASE
Gamma                              -0.0066    0.0345
Kendall's Tau-b                    -0.0042    0.0219
Stuart's Tau-c                     -0.0040    0.0206
Somers' D C|R                      -0.0040    0.0206
Somers' D R|C                      -0.0045    0.0233
Pearson Correlation                -0.0425    0.0235
Spearman Correlation               -0.0067    0.0244
Lambda Asymmetric C|R               0.0415    0.0184
Lambda Asymmetric R|C               0.0000    0.0000
Lambda Symmetric                    0.0201    0.0090
Uncertainty Coefficient C|R         0.0721    0.0091
Uncertainty Coefficient R|C         0.0537    0.0067
Uncertainty Coefficient Sym         0.0616    0.0077

Sample Size = 1694
Proportional Reduction in Error (PRE)

PRE = [ (minimum prob. of erroneous prediction assuming independence)
        − (minimum prob. of erroneous prediction under the alternative) ]
      / (minimum prob. of erroneous prediction assuming independence)

Predicting the column category from the row category:

$$ \lambda_{C|R} = \frac{(1 - \pi_{+,\max}) - \left(1 - \sum_{i=1}^{I} \pi_{i,\max}\right)}{1 - \pi_{+,\max}} $$

with estimate

$$ \hat\lambda_{C|R} = \frac{(1 - P_{+,\max}) - \left(1 - \sum_{i=1}^{I} P_{i,\max}\right)}{1 - P_{+,\max}} $$

The large sample variance is estimated as

$$ \hat\sigma^2_1(\hat\lambda_{C|R}) =
   \frac{\left(1 - \sum_i P_{i,\max}\right)\left(\sum_i P_{i,\max} + P_{+,\max} - 2\sum_i^{*} P_{i,\max}\right)}
        {N(1 - P_{+,\max})^3} $$

where $\sum^{*}$ sums over the rows whose largest proportion falls in the column containing $P_{+,\max}$.

New Haven (1950) study:

P11 = .0177   P12 = .0602   P13 = .1653  ← P_{1,max}
P21 = .0283   P22 = .0136   P23 = .0118
P31 = .0112   P32 = .0472   P33 = .0443
P41 = .0714   P42 = .2031   P43 = .2255
P51 = .0106   P52 = .0065   P53 = .0832
P+1 = .1393   P+2 = .3306   P+3 = .5301  ← P_{+,max}

$$ \hat\lambda_{R|C} = \frac{(1 - .5000) - (1 - [.0714 + .2031 + .2255])}{1 - .5000} = 0
   \quad\text{and}\quad \hat\sigma^2_1(\hat\lambda_{R|C}) = 0 $$

$$ \hat\lambda_{C|R} = \frac{(1 - .5301) - (1 - (.1653 + .0283 + .0472 + .2255 + .0832))}{1 - .5301}
   = \frac{.4699 - .4505}{.4699} = .041 $$

$$ \hat\sigma_{1,\hat\lambda_{C|R}} = .018 $$

In this case the 4th row contains the largest proportion of patients for each column, so knowing the column category never changes the predicted row category and $\hat\lambda_{R|C} = 0$, while it appears that $\lambda_{C|R} > 0$.
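Written with counts, $\hat\lambda_{C|R}$ is (sum of row maxima − largest column total)/(N − largest column total). A minimal sketch in plain Python on the New Haven counts, reproducing both lambdas:

```python
# New Haven counts (rows: diagnosis; columns: treatment)
Y = [[30, 102, 280],
     [48, 23, 20],
     [19, 80, 75],
     [121, 344, 382],
     [18, 11, 141]]

def lambda_c_given_r(Y):
    """Goodman-Kruskal lambda for predicting columns from rows:
    (sum of row maxima - largest column total) / (N - largest column total)."""
    N = sum(map(sum, Y))
    col_tot = [sum(col) for col in zip(*Y)]
    return (sum(max(row) for row in Y) - max(col_tot)) / (N - max(col_tot))

lam_cr = lambda_c_given_r(Y)                           # predict treatment from diagnosis
lam_rc = lambda_c_given_r([list(c) for c in zip(*Y)])  # transpose: predict diagnosis from treatment
```

Both values match the Lambda Asymmetric lines in the PROC FREQ output (.0415 and 0).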
Properties:

1. $0 \le \lambda_{C|R} \le 1$

2. $\lambda_{C|R} = 0$ if knowledge of the row category does not improve the probability of correctly predicting the column category

3. $\lambda_{C|R} = 1$ when each row has all of its probability in a single column, e.g.,

     0    1/12    0
    1/3    0      0
     0    1/2     0
     0     0    1/12

4. $\lambda_{C|R}$ is not always equal to $\lambda_{R|C}$

5. $\lambda_{C|R}$ is not affected by permuting rows and/or columns. This makes $\lambda_{C|R}$ a suitable measure for tables defined by nominal variables.

Other measures of association for nominal categorical variables: Agresti (1990, pp. 22-26), Lloyd (1999, pp. 70-71)

Concentration coefficient:

$$ \tau_{C|R} = \frac{\sum_i \sum_j \pi_{ij}^2/\pi_{i+} - \sum_j \pi_{+j}^2}{1 - \sum_j \pi_{+j}^2} $$

Uncertainty coefficient:

$$ U_{C|R} = \frac{\sum_i \sum_j \pi_{ij}\,\log\!\left(\pi_{ij}/\pi_{i+}\pi_{+j}\right)}{-\sum_j \pi_{+j}\log(\pi_{+j})} $$
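The uncertainty coefficient is mutual information scaled by the column entropy, so it vanishes under exact independence. A rough sketch in plain Python (natural logs; the first table is a made-up exactly-independent one, the second is the New Haven table):

```python
import math

def uncertainty_c_given_r(Y):
    """U(C|R): mutual information divided by -sum_j p+j log p+j."""
    n = sum(map(sum, Y))
    p_row = [sum(row)/n for row in Y]
    p_col = [sum(col)/n for col in zip(*Y)]
    mi = sum((Y[i][j]/n) * math.log((Y[i][j]/n) / (p_row[i]*p_col[j]))
             for i in range(len(Y)) for j in range(len(Y[0])) if Y[i][j] > 0)
    h_col = -sum(p*math.log(p) for p in p_col if p > 0)
    return mi / h_col

# Counts proportional to an outer product: mutual information 0, so U = 0
u_indep = uncertainty_c_given_r([[1, 2, 3], [2, 4, 6], [3, 6, 9]])
# New Haven table from this example
u_nh = uncertainty_c_given_r([[30, 102, 280], [48, 23, 20], [19, 80, 75],
                              [121, 344, 382], [18, 11, 141]])
```

The New Haven value is small but positive, in line with the Uncertainty Coefficient C|R line of the PROC FREQ output.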
Measures of association for I×J tables defined by two ordinal variables.

Example: Each student in a "random" sample of N = 217 sociology students at the Univ. of Michigan was cross-classified with respect to responses to two items.

                        Willingness to join
    Concern with        an organization
    proper behavior    Low   moderate   high
    Low                 22      20        5     47
    moderate            26      60       27    113
    high                 8      31       18     57
                        50     111       56

Consider the responses to these two items for two different students:

(i₁, j₁) gives the row and column categories for the responses of student 1.
(i₂, j₂) gives the row and column categories for the responses of student 2.

Concordant pair of students: either i₁ > i₂ and j₁ > j₂, or i₁ < i₂ and j₁ < j₂.

Discordant pair of students: either i₁ > i₂ and j₁ < j₂, or i₁ < i₂ and j₁ > j₂.

Neither concordant nor discordant: i₁ = i₂ and/or j₁ = j₂.

Concordant Pairs

Each cell is paired with the cells below-right and the cells above-left of it, so every concordant pair of students is counted twice. Working cell by cell:

22(60 + 27 + 31 + 18)
20(27 + 18)
5: no concordant partners
26(31 + 18)
60(22 + 18)
27(22 + 20)
8: no concordant partners
31(22 + 26)
18(22 + 20 + 26 + 60)
Discordant Pairs

22: no discordant partners
20(26 + 8)
5(26 + 60 + 8 + 31)
26(20 + 5)
60(5 + 8)
27(8 + 31)
8(20 + 5 + 60 + 27)
31(5 + 27)
18: no discordant partners

Number of concordant pairs (each pair counted twice):

$$ P = \sum_{i=1}^{I}\sum_{j=1}^{J} Y_{ij}\left[\sum_{k>i}\sum_{\ell>j} Y_{k\ell} + \sum_{k<i}\sum_{\ell<j} Y_{k\ell}\right] = 12{,}492 $$

Number of discordant pairs (each pair counted twice):

$$ Q = \sum_{i=1}^{I}\sum_{j=1}^{J} Y_{ij}\left[\sum_{k>i}\sum_{\ell<j} Y_{k\ell} + \sum_{k<i}\sum_{\ell>j} Y_{k\ell}\right] = 5{,}676 $$

Kendall's Tau b:

$$ \hat\tau_b = \frac{P - Q}{\sqrt{\left[N(N-1) - \sum_i Y_{i+}(Y_{i+}-1)\right]\left[N(N-1) - \sum_j Y_{+j}(Y_{+j}-1)\right]}} $$

Goodman-Kruskal Gamma:

$$ \hat\gamma = \frac{P - Q}{P + Q} = \frac{12{,}492 - 5{,}676}{12{,}492 + 5{,}676} = 0.375 $$
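The double sums for P and Q translate directly into code. A minimal sketch in plain Python on the sociology-items table:

```python
Y = [[22, 20, 5],
     [26, 60, 27],
     [8, 31, 18]]

def pairs(Y, concordant=True):
    """P (or Q): each concordant (discordant) student pair is counted
    twice, matching the double-sum formulas in the notes."""
    I, J = len(Y), len(Y[0])
    total = 0
    for i in range(I):
        for j in range(J):
            s = 0
            for k in range(I):
                for l in range(J):
                    if concordant:
                        ok = (k > i and l > j) or (k < i and l < j)
                    else:
                        ok = (k > i and l < j) or (k < i and l > j)
                    if ok:
                        s += Y[k][l]
            total += Y[i][j] * s
    return total

P = pairs(Y, True)     # concordant
Q = pairs(Y, False)    # discordant
gamma = (P - Q) / (P + Q)
```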
Properties of $\hat\gamma$:

Large sample variance estimate:

$$ \hat\sigma^2_{1,\hat\gamma} = \frac{16}{(P+Q)^4}\sum_i\sum_j Y_{ij}
   \left\{Q\left[\sum_{k>i}\sum_{\ell>j} Y_{k\ell} + \sum_{k<i}\sum_{\ell<j} Y_{k\ell}\right]
        - P\left[\sum_{k>i}\sum_{\ell<j} Y_{k\ell} + \sum_{k<i}\sum_{\ell>j} Y_{k\ell}\right]\right\}^2 $$

1. $\hat\gamma = 1$, e.g.,

         Low → High
    Low    0   0   0
     ↓    10   0   0
     ↓     0  15   0
    High   0   0  30

2. $\hat\gamma = -1$, e.g.,

         Low → High
    Low    0  18   0
     ↓    27   0   0
     ↓     0   0   0
    High   0   0   0

3. $\hat\gamma = 0$ for the case where $P_{ij} = P_{i+}P_{+j}$. Also for other cases, e.g.,

    10   0  10
     0  47   0
    10   0  10

4. $|\hat\gamma| \ge |\hat\tau_b|$

5. For 2×2 tables, $\hat\gamma$ is the estimate of Yule's Q.

Spearman's rho (corrected for ties):

$$ \hat\rho = \frac{12\sum_i\sum_j Y_{ij}
   \left[\sum_{k<i} Y_{k+} + \dfrac{Y_{i+}}{2} - \dfrac{N}{2}\right]
   \left[\sum_{\ell<j} Y_{+\ell} + \dfrac{Y_{+j}}{2} - \dfrac{N}{2}\right]}
   {\sqrt{\left[N^3 - N - \sum_{i=1}^{I}\left(Y_{i+}^3 - Y_{i+}\right)\right]
          \left[N^3 - N - \sum_{j=1}^{J}\left(Y_{+j}^3 - Y_{+j}\right)\right]}} $$
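Kendall's $\hat\tau_b$ only adds the tie corrections from the margins to the same P and Q. A self-contained sketch in plain Python (P and Q recomputed by brute force over ordered pairs of cells, so each student pair again counts twice):

```python
import math

Y = [[22, 20, 5],
     [26, 60, 27],
     [8, 31, 18]]
N = sum(map(sum, Y))

# Ordered pairs of cells, so each student pair is counted twice
cells = [(i, j, Y[i][j]) for i in range(3) for j in range(3)]
P = sum(y1*y2 for (i1, j1, y1) in cells for (i2, j2, y2) in cells
        if (i1 < i2 and j1 < j2) or (i1 > i2 and j1 > j2))
Q = sum(y1*y2 for (i1, j1, y1) in cells for (i2, j2, y2) in cells
        if (i1 < i2 and j1 > j2) or (i1 > i2 and j1 < j2))

row_tot = [sum(r) for r in Y]
col_tot = [sum(c) for c in zip(*Y)]
d_row = N*(N - 1) - sum(t*(t - 1) for t in row_tot)
d_col = N*(N - 1) - sum(t*(t - 1) for t in col_tot)
tau_b = (P - Q) / math.sqrt(d_row * d_col)
```

As property 4 says, the resulting $\hat\tau_b$ is smaller in magnitude than $\hat\gamma = 0.375$.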
Product-moment correlation coefficient:

$$ r = \frac{\sum_{i=1}^{I}\sum_{j=1}^{J} Y_{ij}(r_i - \bar r)(c_j - \bar c)}
            {\sqrt{\sum_{i=1}^{I} Y_{i+}(r_i - \bar r)^2}\,\sqrt{\sum_{j=1}^{J} Y_{+j}(c_j - \bar c)^2}} $$

where $r_1, r_2, \ldots, r_I$ are the row scores and $c_1, c_2, \ldots, c_J$ are the column scores,

$$ \bar r = \frac{1}{N}\sum_{i=1}^{I} Y_{i+} r_i, \qquad \bar c = \frac{1}{N}\sum_{j=1}^{J} Y_{+j} c_j. $$

Consider $r_i = i$, $i = 1, 2, \ldots, I$ and $c_j = j$, $j = 1, 2, \ldots, J$.

Measure of agreement for I×I tables

                              Judge 2
                      A      B      C      D
              A     π11    π12    π13    π14    π1+
    Judge 1   B     π21    π22    π23    π24    π2+
              C     π31    π32    π33    π34    π3+
              D     π41    π42    π43    π44    π4+
                    π+1    π+2    π+3    π+4

Cohen's Kappa:

$$ K = \frac{\theta_1 - \theta_2}{1 - \theta_2} $$

where

$$ \theta_1 = \sum_{i=1}^{I} \pi_{ii} = \text{"actual" probability of agreement} $$
$$ \theta_2 = \sum_{i=1}^{I} \pi_{i+}\pi_{+i} = \text{probability of chance agreement for independent classifications} $$

Estimation:

$$ \hat K = \frac{\sum_i P_{ii} - \sum_i P_{i+}P_{+i}}{1 - \sum_i P_{i+}P_{+i}}
          = \frac{N\sum_i Y_{ii} - \sum_i Y_{i+}Y_{+i}}{N^2 - \sum_i Y_{i+}Y_{+i}} $$
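The count form of $\hat K$ is convenient for computation. A minimal sketch in plain Python, applied to the student-teacher rating table analyzed in the example that follows:

```python
def kappa_hat(Y):
    """Cohen's kappa from a square table of counts:
    (N * sum Yii - sum Yi+ Y+i) / (N^2 - sum Yi+ Y+i)."""
    N = sum(map(sum, Y))
    row_tot = [sum(r) for r in Y]
    col_tot = [sum(c) for c in zip(*Y)]
    agree = sum(Y[i][i] for i in range(len(Y)))
    chance = sum(row_tot[i]*col_tot[i] for i in range(len(Y)))
    return (N*agree - chance) / (N*N - chance)

# Supervisor ratings of student teachers (Gross, 1971)
k = kappa_hat([[17, 4, 8], [5, 12, 0], [10, 3, 13]])
```

The value agrees with the Simple Kappa line (0.3623) in the PROC FREQ output shown later.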
Estimate of the large sample variance:

$$ \hat\sigma^2_{1,\hat\kappa} = \frac{1}{N}\left[
   \frac{\hat\theta_1(1-\hat\theta_1)}{(1-\hat\theta_2)^2}
   + \frac{2(1-\hat\theta_1)(2\hat\theta_1\hat\theta_2 - \hat\theta_3)}{(1-\hat\theta_2)^3}
   + \frac{(1-\hat\theta_1)^2(\hat\theta_4 - 4\hat\theta_2^2)}{(1-\hat\theta_2)^4}\right] $$

where

$$ \hat\theta_1 = \sum_i P_{ii}, \qquad
   \hat\theta_2 = \sum_i P_{i+}P_{+i}, \qquad
   \hat\theta_3 = \sum_i P_{ii}(P_{i+} + P_{+i}), \qquad
   \hat\theta_4 = \sum_i\sum_j P_{ij}(P_{+i} + P_{j+})^2 $$

Rating of student teachers by two supervisors (Gross, 1971; BFH, Chap. 11)

                               Supervisor 2
    Supervisor 1     Authoritarian  Democratic  Permissive
    Authoritarian         17            4           8         29
    Democratic             5           12           0         17
    Permissive            10            3          13         26
                          32           19          21       N = 72

$$ \hat\kappa = 0.361 \qquad \hat\sigma_{1,\hat\kappa} = \sqrt{.0084} = .0915 $$

Properties of Kappa:

1. $\hat\kappa = 0$ if $P_{ii} = P_{i+}P_{+i}$, $i = 1, 2, \ldots$
2. $-1 \le \hat\kappa \le 1$
3. $\hat\kappa = 1$ if $\sum_{i=1}^{I} P_{ii} = 1$, e.g.,

    P11    0     0
     0    P22    0
     0     0    P33

An approximate 95% confidence interval for Kappa:

$$ \hat\kappa \pm (1.96)\hat\sigma_{1,\hat\kappa} $$
$$ .361 \pm .179 \;\Rightarrow\; (.18,\; .54) $$
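The four $\hat\theta$ quantities assemble directly into the variance estimate. A minimal sketch in plain Python for the supervisor table; the result agrees with the ASE reported by PROC FREQ later in these notes:

```python
import math

def kappa_se(Y):
    """Large-sample SE of Cohen's kappa, following the
    theta-hat expressions in the notes."""
    N = sum(map(sum, Y))
    I = len(Y)
    P = [[Y[i][j]/N for j in range(I)] for i in range(I)]
    pr = [sum(row) for row in P]            # P_{i+}
    pc = [sum(col) for col in zip(*P)]      # P_{+j}
    t1 = sum(P[i][i] for i in range(I))
    t2 = sum(pr[i]*pc[i] for i in range(I))
    t3 = sum(P[i][i]*(pr[i] + pc[i]) for i in range(I))
    t4 = sum(P[i][j]*(pc[i] + pr[j])**2 for i in range(I) for j in range(I))
    var = (t1*(1 - t1)/(1 - t2)**2
           + 2*(1 - t1)*(2*t1*t2 - t3)/(1 - t2)**3
           + (1 - t1)**2*(t4 - 4*t2*t2)/(1 - t2)**4)/N
    return math.sqrt(var)

se = kappa_se([[17, 4, 8], [5, 12, 0], [10, 3, 13]])
```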
Kappa is sensitive to marginal distributions:

    40    9 |  49        κ̂ = .70    P11 + P22 = .85
     6   45 |  51
    46   54

Note that

$$ \hat\kappa = \frac{-\sum_i P_{i+}P_{+i}}{1 - \sum_i P_{i+}P_{+i}} $$

when there is "no agreement" ($\sum_i P_{ii} = 0$).

    80   10 |  90        κ̂ = .32    P11 + P22 = .85
     5    5 |  10
    85   15

    45   15 |  60        κ̂ = .13    P11 + P22 = .60
    25   15 |  40
    70   30

    25   35 |  60        κ̂ = .26    P11 + P22 = .60
     5   35 |  40
    30   70

Weighted Kappa

$$ \kappa_w = \frac{\sum_i\sum_j w_{ij}\pi_{ij} - \sum_i\sum_j w_{ij}\pi_{i+}\pi_{+j}}
                   {1 - \sum_i\sum_j w_{ij}\pi_{i+}\pi_{+j}} $$

Choices of weights:

$$ w_{ij} = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases} \;\Rightarrow\; \text{Kappa} $$

$$ w_{ij} = \begin{cases} 1 & i = j \\ 1/2 & j = i + 1 \text{ or } j = i - 1 \\ 0 & \text{otherwise} \end{cases} $$

$$ w_{ij} = 1 - \frac{(i - j)^2}{(I - 1)^2} $$

/* This program is stored as kappa.sas */
/* First use PROC FREQ in SAS to compute kappa for
   the student teacher ratings. There are two options
   for specifying weights */
data set1;
input sup1 sup2 count;
cards;
1 1 17
1 2 4
1 3 8
2 1 5
2 2 12
2 3 0
3 1 10
3 2 3
3 3 13
run;
proc format;
value rating 1=Authoritarian 2=Democratic 3=Permissive;
run;
proc freq data=set1;
tables sup1*sup2 / agree(wt=ca) printkwt alpha=.05 nocol norow;
weight count;
format sup1 rating. sup2 rating.;
run;
proc freq data=set1;
tables sup1*sup2 / agree(wt=fc) printkwt alpha=.05 nocol norow;
weight count;
format sup1 rating. sup2 rating.;
run;

/* This part of the program uses PROC IML in SAS to
   compute either Kappa or a weighted kappa and the
   corresponding standard errors. It is applied to
   the student teacher rating data. */
PROC IML;
START KAPPA;
/* ENTER TABLE OF COUNTS */
X = {17 4 8,
      5 12 0,
     10 3 13};
/* ENTER THE TABLE OF WEIGHTS; USE AN IDENTITY MATRIX
   IF YOU DO NOT WANT WEIGHTED KAPPA */
W = {1.0 0.5 0.0,
     0.5 1.0 0.5,
     0.0 0.5 1.0};
/* BEGINNING OF MODULE TO COMPUTE KAPPA AND
   WEIGHTED KAPPA */
/* COMPUTE NUMBER OF ROWS AND NUMBER OF COLUMNS FOR X */
NR = NROW(X);
NC = NCOL(X);
/* COMPUTE ROW AND COLUMN TOTALS FOR THE MATRIX
   OF COUNTS */
R = X[,+];
C = X[+,];
/* COMPUTE TABLE OF EXPECTED COUNTS FOR INDEPENDENT
   RANDOM AGREEMENT */
E = R*C;
/* COMPUTE OVERALL TOTAL COUNT */
T = SUM(X);
/* COMPUTE KAPPA */
K1 = SUM(DIAG(X))/T;
K2 = SUM(DIAG(E))/(T**2);
K3 = SUM(DIAG(X)*DIAG(R+C`))/(T**2);
J1 = J(1,NR);
J2 = J(NC,1);
TT1 = ((C`*J1+J2*R`)##2)#X;
K4 = SUM(TT1)/(T**3);
KAPPA = (K1 - K2)/(1 - K2);
/* COMPUTE STANDARD ERRORS: S1 DOES NOT ASSUME
   INDEPENDENCE; S2 ASSUMES THE NULL HYPOTHESIS
   OF INDEPENDENCE IS TRUE */
S1 = (K1*(1-K1)/((1-K2)**2)+2*
     (1-K1)*(2*K1*K2-K3)/((1-K2)**3)+
     ((1-K1)**2)*(K4-4*K2*K2)/
     ((1-K2)**4))/T;
S1 = SQRT(S1);
S2 = (K2 + K2*K2 -(SUM(DIAG(E)*DIAG(R+C`))
     /(T**3)))/(T*(1-K2)**2);
S2 = SQRT(S2);
/* COMPUTE WEIGHTED KAPPA */
XW = X#W;
EW = E#W;
WR = (W*C`) / T;
WC = (R`*W) / T;
KW1 = SUM(XW)/T;
KW2 = SUM(EW)/(T**2);
KAPPAW = (KW1 - KW2)/(1 - KW2);
TT2 = (WR*J2`+J1`*WC);
TT3 = (W*(1-KW2)-TT2*(1-KW1))##2;
/* COMPUTE STANDARD ERRORS: SW1 DOES NOT ASSUME
   INDEPENDENCE; SW2 ASSUMES THE NULL HYPOTHESIS
   OF INDEPENDENCE IS TRUE */
SW1 = SUM(X#TT3)/T;
SW1 = (SW1 -(KW1*KW2-2*KW2+KW1)##2)
      /(T*(1-KW2)**4);
SW1 = SQRT(SW1);
SW2 = (W-TT2)##2;
SW2 = ((SUM(E#SW2)/T**2)-(KW2##2))
      /(T*(1-KW2)**2);
SW2 = SQRT(SW2);
/* COMPUTE 95% CONFIDENCE INTERVALS AND TESTS
   OF THE HYPOTHESIS THAT THERE IS ONLY RANDOM
   AGREEMENT */
TK = KAPPA/S2;
TKW = KAPPAW/SW2;
TT4 = TK**2;
PK = 1 - PROBCHI(TT4,1);
TT4 = TKW**2;
PKW = 1 - PROBCHI(TT4,1);
CKL = KAPPA - (1.96)*S1;
CKU = KAPPA + (1.96)*S1;
CKWL = KAPPAW - (1.96)*SW1;
CKWU = KAPPAW + (1.96)*SW1;
/* PRINT RESULTS */
PRINT,,,"Unweighted Kappa statistic " KAPPA;
PRINT," Standard error " S1;
PRINT,"95% confidence interval (" CKL "," CKU ")";
PRINT,,,"Standard error when there ";
PRINT "is only random agreement " S2;
PRINT,,,"P-value for test of ";
PRINT "completely random agreement" PK;
PRINT,,," Weighted Kappa statistic " KAPPAW;
PRINT," Standard error " SW1;
PRINT,"95% confidence interval (" CKWL "," CKWU ")";
PRINT,,,"Standard error when there";
PRINT "is only random agreement " SW2;
PRINT,,,"P-value for test of ";
PRINT "completely random agreement" PKW;
FINISH;
RUN KAPPA;

The FREQ Procedure

Table of sup1 by sup2

Frequency
Percent          Authori-   Democra-   Permiss-   Total
                 tarian     tic        ive
Authoritarian        17          4          8        29
                  23.61       5.56      11.11     40.28
Democratic            5         12          0        17
                   6.94      16.67       0.00     23.61
Permissive           10          3         13        26
                  13.89       4.17      18.06     36.11
Total                32         19         21        72
                  44.44      26.39      29.17    100.00
Statistics for Table of sup1 by sup2

Test of Symmetry
Statistic (S)    3.3333
DF                    3
Pr > S           0.3430

Kappa Coefficient Weights
                 Authori-   Democra-   Permiss-
                 tarian     tic        ive
Authoritarian    1.0000     0.5000     0.0000
Democratic       0.5000     1.0000     0.5000
Permissive       0.0000     0.5000     1.0000

Kappa Statistics
Statistic         Value      ASE     95% Confidence Limits
Simple Kappa     0.3623   0.0907      0.1844    0.5401
Weighted Kappa   0.2842   0.1042      0.0800    0.4883

Kappa Coefficient Weights (Fleiss-Cohen Form)
                 Authori-   Democra-   Permiss-
                 tarian     tic        ive
Authoritarian    1.0000     0.7500     0.0000
Democratic       0.7500     1.0000     0.7500
Permissive       0.0000     0.7500     1.0000

Kappa Statistics
Statistic         Value      ASE     95% Confidence Limits
Simple Kappa     0.3623   0.0907      0.1844    0.5401
Weighted Kappa   0.2156   0.1250     -0.0295    0.4606

Sample Size = 72

PROC IML output:

Unweighted Kappa statistic    KAPPA    0.3622675
Standard error                S1       0.0907466
95% confidence interval       ( 0.1844041 , 0.5401309 )
Standard error when there
is only random agreement      S2       0.0836836
P-value for test of
completely random agreement   PK       0.000015

Weighted Kappa statistic      KAPPAW   0.2841756
Standard error                SW1      0.1041712
95% confidence interval       ( 0.0800002 , 0.4883511 )
Standard error when there
is only random agreement      SW2      0.0962934
P-value for test of
completely random agreement   PKW      0.003166
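The weighted-kappa values above can be checked with a few lines of plain Python implementing the $\kappa_w$ formula from these notes (identity weights recover ordinary kappa):

```python
def weighted_kappa(Y, w):
    """kappa_w = (sum w_ij P_ij - sum w_ij P_i+ P_+j)
                 / (1 - sum w_ij P_i+ P_+j)."""
    N = sum(map(sum, Y))
    rt = [sum(r) for r in Y]
    ct = [sum(c) for c in zip(*Y)]
    I = len(Y)
    obs = sum(w[i][j]*Y[i][j] for i in range(I) for j in range(I))/N
    exp = sum(w[i][j]*rt[i]*ct[j] for i in range(I) for j in range(I))/N**2
    return (obs - exp)/(1 - exp)

Y = [[17, 4, 8], [5, 12, 0], [10, 3, 13]]
w_half = [[1.0, 0.5, 0.0], [0.5, 1.0, 0.5], [0.0, 0.5, 1.0]]
w_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

kw = weighted_kappa(Y, w_half)   # weighted kappa, 1/2 adjacent weights
k = weighted_kappa(Y, w_id)      # identity weights: ordinary kappa
```

Both values match the output (Weighted Kappa 0.2842, Simple Kappa 0.3623).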
# This file contains Splus code
# for computing a Kappa statistic,
# or weighted Kappa statistic,
# standard errors and confidence
# intervals. It is applied to the
# student teacher data.
# The file is stored as kappa.ssc

# Enter the observed counts
x <- matrix(c(17, 4, 8,
              5, 12, 0,
              10, 3, 13), 3, 3, byrow=T)

# Enter the weights
w <- matrix(c(1.0, 0.5, 0.0,
              0.5, 1.0, 0.5,
              0.0, 0.5, 1.0), 3, 3, byrow=T)

# Compute expected counts for random
# agreement
n <- sum(x)
xr <- apply(x, 1, sum)
xc <- apply(x, 2, sum)
one <- rep(1, length(xr))
e <- outer(xr, xc)/n

# Compute Kappa
k1 <- sum(diag(x))/n
k2 <- sum(diag(e))/n
k3 <- sum(diag(x)*diag(xr+xc))/(n*n)
k4 <- sum(((outer(xc, one)+
      outer(one, xr))**2)*x)/(n**3)
kappa <- (k1-k2)/(1-k2)

# Compute standard errors:
# s1 does not assume random agreement
# s2 assumes only random agreement
s11 <- (k1*(1-k1)/((1-k2)**2)+2*(1-k1)*
       (2*k1*k2-k3)/((1-k2)**3)+
       ((1-k1)**2)*(k4-4*k2*k2)/((1-k2)**4))/n
s1 <- s11**.5
s22 <- (k2+k2*k2-(sum(diag(e)*diag(xr+xc))
       /(n**2)))/(n*(1-k2)**2)
s2 <- s22**.5

# Compute weighted Kappa
xw <- x*w
ew <- e*w
wr <- apply(w*xc, 2, sum)/n
wc <- apply(w*xr, 2, sum)/n
kw1 <- sum(xw)/n
kw2 <- sum(ew)/n
tt2 <- outer(wr, one)+outer(one, wc)
tt3 <- ((w*(1-kw2))-(tt2*(1-kw1)))**2
kappaw <- (kw1-kw2)/(1-kw2)

# Compute standard errors:
# sw1 does not assume random agreement
# sw2 assumes only random agreement
sw11 <- sum(x*tt3)/n
sw11 <- (sw11-(kw1*kw2-2*kw2+kw1)**2)/
        (n*(1-kw2)**4)
sw1 <- sw11**.5
sw22 <- (w-tt2)**2
sw22 <- ((sum(e*sw22)/n)-(kw2**2))/
        (n*(1-kw2)**2)
sw2 <- sw22**.5
# Construct 95% confidence intervals
# and tests for random agreement
tk <- kappa/s2
tkw <- kappaw/sw2
tt4 <- tk**2
pk <- (1-pchisq(tt4, 1))
tt4 <- tkw**2
pkw <- (1-pchisq(tt4, 1))
ckl <- kappa-(1.96)*s1
cku <- kappa+(1.96)*s1
ckwl <- kappaw-(1.96)*sw1
ckwu <- kappaw+(1.96)*sw1

# print results
cat("\n", " Unweighted Kappa = ", signif(kappa,5))
cat("\n", " Standard error = ", signif(s1,5))
cat("\n", "95% confidence interval: ",
    signif(ckl,5), signif(cku,5))
cat("\n", "p-value for test of random ",
    "agreement = ", signif(pk,5))
cat("\n", " Weighted Kappa = ", signif(kappaw,5))
cat("\n", " Standard error = ", signif(sw1,5))
cat("\n", "95% confidence interval: ",
    signif(ckwl,5), signif(ckwu,5))
cat("\n", "p-value for test of random ",
    " agreement = ", signif(pkw,5))

You can source this code into S-PLUS and obtain the results by issuing the following command at the prompt in the S-PLUS command window:

source("yourdirectory/kappa.ssc")

The results are shown below.

Unweighted Kappa = 0.36227
Standard error = 0.090747
95% confidence interval: 0.1844 0.54013
p-value for test of random agreement = 1.4978e-005
Weighted Kappa = 0.28418
Standard error = 0.10417
95% confidence interval: 0.08 0.48835
p-value for test of random agreement = 0.00316