Fleiss' kappa
Fleiss' kappa is a variant of Cohen's kappa, a statistical measure of inter-rater reliability. Whereas Cohen's kappa works for only two raters, Fleiss' kappa works for any constant number of raters giving categorical ratings (see nominal data) to a fixed number of items. It is a measure of the degree of agreement that can be expected above chance. Agreement can be thought of as follows: if a fixed number of people assign categorical ratings to a number of items, then the kappa will give a measure of how consistent the ratings are. The kappa, \kappa, can be defined as

\kappa = \frac{\bar{P} - \bar{P_e}}{1 - \bar{P_e}}
The factor 1 - \bar{P_e} gives the degree of agreement that is attainable above chance, and \bar{P} - \bar{P_e} gives the degree of agreement actually achieved above chance. If the raters are in complete agreement then \kappa = 1; if there is no agreement among the raters beyond what would be expected by chance, then \kappa \le 0.
An example of the use of Fleiss' kappa may be the following: consider fourteen psychiatrists who are asked to look at ten patients. Each psychiatrist gives one of possibly five diagnoses to each patient. Fleiss' kappa can be computed from this matrix of counts (see the worked example below) to show the degree of agreement between the psychiatrists above the level of agreement expected by chance. Fleiss' kappa has benefits over the standard Cohen's kappa in that it works for multiple raters, and it is an improvement over a simple percentage-agreement calculation because it takes into account the amount of agreement that can be expected by chance.
Equations
Let N be the total number of subjects, let n be the number of ratings per subject, and let k be the number of categories into which assignments are made. The subjects are indexed by i = 1, ..., N and the categories are indexed by j = 1, ..., k. Let n_{ij} represent the number of raters who assigned the i-th subject to the j-th category.
First calculate p_j, the proportion of all assignments which were to the j-th category:

p_j = \frac{1}{N n} \sum_{i=1}^{N} n_{ij}
Now calculate P_i, the extent to which raters agree for the i-th subject (that is, the proportion of agreeing rater pairs out of all n(n - 1) possible ordered pairs of raters):

P_i = \frac{1}{n(n-1)} \left[ \left( \sum_{j=1}^{k} n_{ij}^2 \right) - n \right]
Now compute \bar{P}, the mean of the P_i's, and \bar{P_e}, both of which go into the formula for \kappa:

\bar{P} = \frac{1}{N} \sum_{i=1}^{N} P_i

\bar{P_e} = \sum_{j=1}^{k} p_j^2
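To make the definitions concrete, the following Python sketch carries out the same calculation; the function name fleiss_kappa and the plain-list representation of the count table are illustrative choices rather than anything specified in the article.

    def fleiss_kappa(counts):
        """Fleiss' kappa for an N x k table, where counts[i][j] is the number
        of raters (n_ij) who assigned subject i to category j. Every row is
        assumed to sum to the same number of ratings n."""
        N = len(counts)              # number of subjects
        k = len(counts[0])           # number of categories
        n = sum(counts[0])           # ratings per subject

        # p_j: proportion of all assignments made to category j
        p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]

        # P_i: extent to which raters agree on subject i
        P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]

        P_bar = sum(P) / N                    # mean of the P_i
        P_e_bar = sum(pj * pj for pj in p)    # agreement expected by chance

        return (P_bar - P_e_bar) / (1 - P_e_bar)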
Worked example
Subject   Cat. 1   Cat. 2   Cat. 3   Cat. 4   Cat. 5   P_i
1            0        0        0        0       14     1.000
2            0        2        6        4        2     0.253
3            0        0        3        5        6     0.308
4            0        3        9        2        0     0.440
5            2        2        8        1        1     0.330
6            7        7        0        0        0     0.462
7            3        2        6        3        0     0.242
8            2        5        3        2        2     0.176
9            6        5        2        1        0     0.286
10           0        2        2        3        7     0.286
Total       20       28       39       21       32
p_j      0.143    0.200    0.279    0.150    0.229
In this example, fourteen raters (n = 14) assign ten "subjects" (N = 10) to a total of five categories (k = 5). The categories are presented in the columns, while the subjects are presented in the rows.
Data
See the table above.
N = 10, n = 14, k = 5
Sum of all cells = 140
Sum of P_i = 3.780
Equations
For example, taking the first column,

p_1 = \frac{0+0+0+0+2+7+3+2+6+0}{140} = \frac{20}{140} = 0.143
And taking the second row,

P_2 = \frac{1}{14(14-1)} \left[ (0^2 + 2^2 + 6^2 + 4^2 + 2^2) - 14 \right] = \frac{46}{182} = 0.253
In order to calculate \bar{P_e}, we need to know the sum of the p_j^2,

\sum_{j=1}^{k} p_j^2 = 0.143^2 + 0.200^2 + 0.279^2 + 0.150^2 + 0.229^2 = 0.213
Over the whole sheet,

\bar{P} = \frac{1}{N} \sum_{i=1}^{N} P_i = \frac{3.780}{10} = 0.378

\bar{P_e} = 0.213

\kappa = \frac{0.378 - 0.213}{1 - 0.213} = 0.210
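Assuming the fleiss_kappa sketch from the Equations section above, the worked example can be checked directly; the matrix below is the table of counts given earlier.

    ratings = [
        [0, 0, 0, 0, 14],
        [0, 2, 6, 4, 2],
        [0, 0, 3, 5, 6],
        [0, 3, 9, 2, 0],
        [2, 2, 8, 1, 1],
        [7, 7, 0, 0, 0],
        [3, 2, 6, 3, 0],
        [2, 5, 3, 2, 2],
        [6, 5, 2, 1, 0],
        [0, 2, 2, 3, 7],
    ]

    print(round(fleiss_kappa(ratings), 3))  # prints 0.21 (kappa = 0.210 to three decimals)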
Significance
Landis and Koch[1] give the following table for interpreting the significance of the κ value. This table is, however, by no means universally accepted as a guide for interpreting κ. It has been noted that these benchmarks may be more harmful than helpful,[2] as the number of categories and subjects will affect the magnitude of the value. The kappa will be higher when there are fewer categories.[3]
κ              Interpretation
< 0            No agreement
0.0 – 0.19     Poor agreement
0.20 – 0.39    Fair agreement
0.40 – 0.59    Moderate agreement
0.60 – 0.79    Substantial agreement
0.80 – 1.00    Almost perfect agreement
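The benchmarks above can also be written as a small lookup; interpret_kappa below is a hypothetical helper that simply mirrors the table and inherits all of its caveats.

    def interpret_kappa(kappa):
        """Map a kappa value to the Landis and Koch label from the table above."""
        if kappa < 0:
            return "No agreement"
        if kappa < 0.20:
            return "Poor agreement"
        if kappa < 0.40:
            return "Fair agreement"
        if kappa < 0.60:
            return "Moderate agreement"
        if kappa < 0.80:
            return "Substantial agreement"
        return "Almost perfect agreement"

    print(interpret_kappa(0.210))  # "Fair agreement"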
Notes
1. Landis, J. R. and Koch, G. G. (1977) "The measurement of observer agreement for categorical data" in Biometrics, Vol. 33, pp. 159–174
2. Gwet, K. (2001) Statistical Tables for Inter-Rater Agreement (Gaithersburg: StatAxis Publishing)
3. Sim, J. and Wright, C. C. (2005) "The Kappa Statistic in Reliability Studies: Use, Interpretation, and Sample Size Requirements" in Physical Therapy, Vol. 85, pp. 257–268
References
Fleiss, J. L. (1971) "Measuring nominal scale agreement among many raters" in Psychological Bulletin, Vol. 76, No. 5, pp. 378–382
Further reading
Fleiss, J. L. and Cohen, J. (1973) "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability" in Educational and Psychological Measurement, Vol. 33, pp. 613–619
Fleiss, J. L. (1981) Statistical Methods for Rates and Proportions, 2nd ed. (New York: John Wiley), pp. 38–46
External links
 Kappa: Pros and Cons contains a good bibliography of articles about the coefficient.