Search results
Page title matches
- '''Inter-rater reliability''' or '''Inter-rater agreement''' is the measurement of agreement between raters. There are a number of statistics which can be used in order to determine the inter-rater reliability. Different ...
  4 KB (615 words) - 14:27, 18 February 2008
- Auto-populated based on [[Special:WhatLinksHere/Inter-rater reliability]]. Needs checking by a human.
  442 bytes (56 words) - 17:29, 11 January 2010
Page text matches
- Statistical measure of inter-rater reliability between two observers.
  106 bytes (11 words) - 20:01, 7 September 2009
- {{r|Inter-rater reliability}}
  471 bytes (59 words) - 16:34, 11 January 2010
- ...bility]] between radiologists in detecting massive pulmonary emboli, the [[inter-rater reliability]] is only moderate for segmental or smaller emboli.<ref name="pmid19931759"/>
  7 KB (995 words) - 06:16, 12 December 2014
- ...kappa''' is a variant of [[Cohen's kappa]], a [[statistical]] measure of [[inter-rater reliability]]. Where Cohen's kappa works for only two raters, Fleiss' kappa works for a...
  7 KB (939 words) - 12:41, 26 September 2007
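The entry above notes that Cohen's kappa applies to exactly two raters. As an illustration only (not taken from any of the indexed articles), a minimal Python sketch of Cohen's kappa for two raters, computed from observed agreement and chance-expected agreement, might look like:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from each rater's
    marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement under independence of the two raters.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example ratings (yes/no judgements on six items):
a = ["yes", "yes", "no", "yes", "no", "no"]
b = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 3))  # observed 4/6, chance 1/2 -> kappa = 1/3
```

Fleiss' kappa generalises the same observed-versus-chance comparison to any fixed number of raters per item, which is why it, rather than Cohen's kappa, is used when more than two observers are involved.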