Other types of diagnostic tests, such as serum levels of enzymes,
produce results along a continuous scale of measurement. In such
a situation, there are many possible choices of where to set the cutoff
point between a positive and a negative test result. Along the continuous
scale of measurement, different cutoff points will yield differing
levels of sensitivity and specificity. As a general rule, the two trade
off against each other: any shift of the cutoff point that increases the
sensitivity will produce a corresponding decrease in the specificity,
and vice versa. A convenient summary of this relationship
can be shown in a graph referred to as a receiver
operating characteristic (ROC) curve. This curve derives its
name from its first application—measuring the ability of
radar operators to distinguish radar signals from noise. For the
purposes of diagnostic testing, a graph is constructed with sensitivity
(sometimes labeled as the true-positive rate) on the vertical axis,
and 1 – specificity (sometimes labeled as the false-positive
rate) on the horizontal axis. At each cutoff point, the sensitivity
and 1 – specificity are calculated. These results can then
be graphed along the full range of cutoff points, producing
the ROC curve.
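To make this construction concrete, here is a minimal Python sketch
(the data are hypothetical, and numpy is assumed to be available) that
sweeps a cutoff point across simulated serum enzyme levels, assuming the
enzyme is elevated in disease, and computes the pairs of sensitivity and
1 – specificity that trace the ROC curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical serum enzyme levels (arbitrary units); the enzyme is
# assumed to be elevated in diseased persons.
diseased = rng.normal(loc=60, scale=10, size=1000)
healthy = rng.normal(loc=45, scale=10, size=1000)

# Sweep the cutoff point across the continuous scale of measurement.
for cutoff in np.linspace(20, 90, 15):
    sensitivity = np.mean(diseased >= cutoff)  # true-positive rate
    fpr = np.mean(healthy >= cutoff)           # 1 - specificity
    print(f"cutoff {cutoff:5.1f}: sensitivity={sensitivity:.2f}, "
          f"1-specificity={fpr:.2f}")

# Plotting sensitivity (vertical axis) against 1 - specificity
# (horizontal axis) for every cutoff traces the ROC curve.
```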
A hypothetical example of an ROC curve is shown in
Figure 6–6. In this graph, the performance of the diagnostic
test is shown by the solid line. The dashed diagonal line serves as
a reference, representing a test with no diagnostic value. At every point along
this dashed line, the sensitivity is equal to 1 – specificity.
Note that the LR+ is the ratio of sensitivity to 1 – specificity,
so when the sensitivity is equal to 1 – specificity,
the numerator of the LR+ is
equal to its denominator. That is to say, at every point along this
dashed diagonal line the LR+ is
equal to one, and a positive test result is equally likely for persons
with and without the disease of interest. A diagnostic test that
is clinically useful, therefore, will have an ROC curve that lies
well above this dashed diagonal line, toward the upper left corner
of the graph.
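The relationship between the diagonal line and the LR+ can be checked
numerically. Continuing the sketch above (reusing the hypothetical
diseased and healthy samples), the snippet below computes the LR+ at
each cutoff as sensitivity divided by 1 – specificity; a test performing
at chance would hover near an LR+ of one, whereas a useful test stays
well above it.

```python
# Continuing the sketch above: the LR+ at a given cutoff is the
# ratio of sensitivity to 1 - specificity.  On the dashed diagonal,
# sensitivity equals 1 - specificity, so the LR+ is exactly one.
for cutoff in np.linspace(30, 75, 10):
    sensitivity = np.mean(diseased >= cutoff)
    fpr = np.mean(healthy >= cutoff)
    if fpr > 0:  # LR+ is undefined when 1 - specificity is zero
        print(f"cutoff {cutoff:5.1f}: LR+ = {sensitivity / fpr:.2f}")
```

In this simulation the LR+ rises steadily as the cutoff rises,
reflecting the trade-off described earlier: a higher cutoff sacrifices
sensitivity but makes a positive result more strongly indicative of
disease.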