# Sensitivity and specificity

The sensitivity and specificity of diagnostic tests are related to each other through Bayes' theorem and are defined as "measures for assessing the results of diagnostic and screening tests. Sensitivity represents the proportion of truly diseased persons in a screened population who are identified as being diseased by the test. It is a measure of the probability of correctly diagnosing a condition. Specificity is the proportion of truly nondiseased persons who are so identified by the screening test. It is a measure of the probability of correctly identifying a nondiseased person. (From Last, Dictionary of Epidemiology, 2d ed)."

Successful application of sensitivity and specificity is an important part of practicing evidence-based medicine.

## Calculations

Two-by-two table for a diagnostic test:

| Test result | Disease present | Disease absent | Total |
|---|---|---|---|
| Positive | Cell A | Cell B | Total with a positive test |
| Negative | Cell C | Cell D | Total with a negative test |
| Total | Total with disease | Total without disease | |

Many of these calculations can be done at http://statpages.org/ctab2x2.html.

### Sensitivity and specificity

${\text{Sensitivity of a test}}=\left({\frac {\text{Total with disease and a positive test}}{\text{Total with disease}}}\right)=\left({\frac {\text{Cell A}}{{\text{Cell A}}+{\text{Cell C}}}}\right)$

${\text{Specificity of a test}}=\left({\frac {\text{Total without disease and a negative test}}{\text{Total without disease}}}\right)=\left({\frac {\text{Cell D}}{{\text{Cell B}}+{\text{Cell D}}}}\right)$

### Predictive value of tests

The predictive values of diagnostic tests are defined as "in screening and diagnostic tests, the probability that a person with a positive test is a true positive (i.e., has the disease), is referred to as the predictive value of a positive test; whereas, the predictive value of a negative test is the probability that the person with a negative test does not have the disease. Predictive value is related to the sensitivity and specificity of the test."

${\text{Positive predictive value}}=\left({\frac {\text{Total with disease and a positive test}}{\text{Total with a positive test}}}\right)=\left({\frac {\text{Cell A}}{{\text{Cell A}}+{\text{Cell B}}}}\right)$

${\text{Negative predictive value}}=\left({\frac {\text{Total without disease and a negative test}}{\text{Total with a negative test}}}\right)=\left({\frac {\text{Cell D}}{{\text{Cell C}}+{\text{Cell D}}}}\right)$

## Summary statistics for diagnostic ability

While simply reporting the accuracy of a test seems intuitive, the accuracy is heavily influenced by the prevalence of disease. For example, if the disease occurred with a frequency of one in one thousand, then simply guessing that all patients do not have the disease would yield an accuracy of over 99%, whereas if the disease frequency were 999 in one thousand, the same guess would yield an accuracy of only 0.1%.
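The two-by-two calculations above can be sketched in Python. The function name and the screening counts below are hypothetical illustrations, not from the article; the counts correspond to a disease prevalence of 1 in 1,000 and a test with 90% sensitivity and 90% specificity applied to 100,000 people.

```python
def two_by_two_metrics(a, b, c, d):
    """Compute test metrics from a 2x2 table.

    a: with disease, positive test      (Cell A, true positives)
    b: without disease, positive test   (Cell B, false positives)
    c: with disease, negative test      (Cell C, false negatives)
    d: without disease, negative test   (Cell D, true negatives)
    """
    return {
        "sensitivity": a / (a + c),
        "specificity": d / (b + d),
        "ppv": a / (a + b),
        "npv": d / (c + d),
        "accuracy": (a + d) / (a + b + c + d),
    }

# Hypothetical screening counts: 100 diseased and 99,900 nondiseased
# subjects, test with 90% sensitivity and 90% specificity.
m = two_by_two_metrics(a=90, b=9990, c=10, d=89910)
# Sensitivity and specificity are unaffected by prevalence, but the
# positive predictive value is low (under 1%) because the disease is rare.
```

Note how the accuracy (90%) looks respectable while the positive predictive value collapses, illustrating the prevalence effect described above.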

With the arrival of many biomarkers that may serve as expensive diagnostic tests, much research has addressed how to summarize the incremental value that a new, expensive test adds to existing diagnostic methods. The best method for comparing diagnostic tests depends on whether the new test is intended to replace or to add to the existing diagnostic test.

### Area under the ROC curve

The area under the receiver operating characteristic curve (ROC curve), also called the AROC or c-index, has been proposed. The c-index varies from 0 to 1, and a value of 0.5 indicates that the diagnostic test performs no better than chance. Variations have been proposed.
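The c-index can also be read as the probability that a randomly chosen diseased subject scores higher on the test than a randomly chosen nondiseased subject. As a sketch (the function name and biomarker values below are illustrative, not from the article), it can be computed directly from that pairwise interpretation:

```python
def c_index(diseased_scores, nondiseased_scores):
    """c-index (area under the ROC curve) computed as the fraction of
    diseased/nondiseased pairs in which the diseased subject scores
    higher; ties count one half."""
    wins = 0.0
    for x in diseased_scores:
        for y in nondiseased_scores:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(diseased_scores) * len(nondiseased_scores))

# Hypothetical biomarker values for four diseased and four
# nondiseased subjects.
diseased = [4.1, 3.6, 5.2, 2.9]
nondiseased = [2.5, 3.0, 1.8, 3.6]
auc = c_index(diseased, nondiseased)
# A test that does no better than chance would give a c-index near 0.5;
# perfect separation of the two groups gives 1.0.
```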

### Bayes Information Criterion

The Bayes Information Criterion was proposed by Schwarz in 1978.
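The article does not state the formula; Schwarz's criterion for a model with $k$ estimated parameters fit to $n$ observations with maximized likelihood ${\hat {L}}$ is commonly written as:

${\text{BIC}}=k\ln(n)-2\ln({\hat {L}})$

When comparing models (for example, a baseline diagnostic model with and without a new test), the model with the lower BIC is preferred; the $k\ln(n)$ term penalizes added parameters.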

### Diagnostic odds ratio

The diagnostic odds ratio (DOR) is based on the likelihood ratios.

The likelihood ratio is:

${\text{Likelihood ratio}}={\frac {\text{probability of test result with disease}}{\text{probability of same result without disease}}}$

The diagnostic odds ratio is:

${\text{Diagnostic odds ratio}}={\frac {\text{odds of test result with disease}}{\text{odds of same result without disease}}}$

Equivalently, in terms of the likelihood ratios:

${\text{Diagnostic odds ratio}}={\frac {\text{Likelihood ratio +}}{\text{Likelihood ratio -}}}$

For example:

• If the sensitivity and specificity are 95% and 80%, respectively (or vice versa), then the DOR = 76.
• If the sensitivity and specificity are both 95%, then the DOR = 361.

"The DOR ranges from 0 to infinity, with higher values indicating better discriminatory test performance. A value of 1 means that a test does not discriminate between patients with the disorder and those without it... The DOR does not depend on the prevalence of the disease."
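The relationships above can be checked numerically; this is a sketch, and the function name is illustrative rather than from the article:

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR as the ratio of the positive to the negative likelihood ratio."""
    lr_positive = sensitivity / (1 - specificity)   # LR+
    lr_negative = (1 - sensitivity) / specificity   # LR-
    return lr_positive / lr_negative

# Sensitivity 95% with specificity 80% (or vice versa -- the DOR is
# symmetric in the two) gives a DOR of 76; 95% for both gives 361.
dor_mixed = diagnostic_odds_ratio(0.95, 0.80)
dor_both = diagnostic_odds_ratio(0.95, 0.95)
```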

### Sum of sensitivity and specificity

This simple metric is called the Gain in Certainty:

${\text{Gain in Certainty}}=\left({\text{sensitivity}}+{\text{specificity}}\right)$

It varies from 0 to 2, and a value of 1 indicates that the diagnostic test performs no better than chance.

Similarly, Youden's J index (J*) is:

${\text{Youden's index}}=\left({\text{sensitivity}}+{\text{specificity}}\right)-1$

The index is derived from:

${\text{Youden's index}}=1-\left({\text{false positive rate}}+{\text{false negative rate}}\right)$

### Number needed to diagnose

The number needed to diagnose is:

${\text{Number Needed to Diagnose}}={\frac {1}{{\text{Sensitivity}}-(1-{\text{Specificity}})}}$

${\text{Number Needed to Diagnose}}={\frac {1}{\text{Youden's index}}}$

### Predictiveness curve

A graph of the predictiveness curve has been proposed.

### Proportionate reduction in uncertainty score

The proportionate reduction in uncertainty score (PRU) has been proposed.

### Integrated sensitivity and specificity

This measure has been proposed as an alternative to the area under the receiver operating characteristic curve.

### Reclassification tables

This measure has been proposed as an alternative to the area under the receiver operating characteristic curve. This method allows calculating a 'reclassification index' or 'reclassification rate', or 'net reclassification improvement' (NRI).

${\text{NRI}}={\frac {{\text{events reclassified higher}}-{\text{events reclassified lower}}}{\text{events}}}+{\frac {{\text{nonevents reclassified lower}}-{\text{nonevents reclassified higher}}}{\text{nonevents}}}$

The NRI is analogous to Youden's J index and the Gain in Certainty, which are both functions of the sum of the sensitivity and specificity. In the special case of two diagnostic tests that have binary results (e.g. normal and abnormal), the NRI is the same as the Gain in Certainty of the second test minus the Gain in Certainty of the first test, or alternatively stated, the change in the sum of the sensitivity and specificity:

${\text{NRI}}_{\text{for tests with binary outcomes}}=\left({\text{Sensitivity}}+{\text{Specificity}}\right)_{\text{Second test}}-\left({\text{Sensitivity}}+{\text{Specificity}}\right)_{\text{First test}}$

The NRI, Youden's J index, and the Gain in Certainty are all measures that:

• Assume that correctly classifying an abnormal patient is as important as correctly classifying a normal patient.
• Sum two rates (sensitivity and specificity) rather than taking a weighted average of the two rates based on the ratio of abnormal to normal patients. Summing helps compare two tests that were studied in settings with different prevalences of disease.
• However, the NRI may be seen as misleading because it is an index of reclassification and not a rate of reclassification. In the special case of a disease prevalence of 50%, the index of reclassification is exactly double the rate of reclassification.
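For the special case of binary tests, the identity between the NRI and the change in the sum of sensitivity and specificity can be sketched as follows; the function names and test characteristics are illustrative, not from the article:

```python
def gain_in_certainty(sensitivity, specificity):
    """Sum of sensitivity and specificity; ranges from 0 to 2."""
    return sensitivity + specificity

def nri_binary(old_sens, old_spec, new_sens, new_spec):
    """NRI for two tests with binary results: the change in the sum of
    sensitivity and specificity (second test minus first test)."""
    return (gain_in_certainty(new_sens, new_spec)
            - gain_in_certainty(old_sens, old_spec))

# Hypothetical comparison: a new test with 90% sensitivity and 75%
# specificity replacing an old test with 80% sensitivity and 70%
# specificity.
nri = nri_binary(0.80, 0.70, 0.90, 0.75)
# The NRI here equals (0.90 + 0.75) - (0.80 + 0.70) = 0.15.
```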

The clinical net reclassification improvement (CNRI) is a variation that computes the NRI only for the subjects at intermediate risk of disease.

### Sequential scoring

Sequential scoring has been proposed in order to isolate the effect of a new, expensive diagnostic test.

## Threats to validity of calculations

Various biases incurred during the study and analysis of diagnostic tests can affect the validity of the calculations. An example is spectrum bias.

Poorly designed studies may overestimate the accuracy of a diagnostic test.