Sensitivity and specificity are one approach to quantifying the diagnostic accuracy of a test. In clinical practice, however, the test result is all that is known, and the clinician wants to know how well the test predicts an abnormality: how many patients with abnormal test results are truly abnormal? Sensitivity and specificity do not answer this question. It is answered by the predictive values: the positive predictive value is the probability that a patient with an abnormal (positive) result truly has the disease, and the negative predictive value is the probability that a patient with a normal (negative) result is truly free of it.

Values of test sensitivity and specificity derived in one clinical population cannot necessarily be used to make predictions about a different population. Test sensitivity increases with increasing severity of disease, so there is in fact a distribution of sensitivities and specificities across the spectrum of patients; the reported values are averages across the population. In addition to knowing the test's average sensitivity and specificity, the clinician must be aware of how the test performs in different segments of the population.

The predictive values, unlike sensitivity and specificity, depend on the prevalence of disease in the population tested: as the prevalence falls, the positive predictive value falls and the negative predictive value rises. Clinicians will, on average, learn the most from a clinical sign, symptom, or laboratory test when the likelihood of disease is 40%–60%. If the prevalence of disease is very low, the positive predictive value will not approach one even when the sensitivity and specificity are high. Thus, in screening the general population, it is inevitable that many people with positive test results will not have the disease.
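
The dependence of the predictive values on prevalence follows directly from Bayes' theorem. As a rough illustration (the sensitivity, specificity, and prevalence figures below are hypothetical, not drawn from the text), the following Python sketch computes the positive and negative predictive values for a test applied at several prevalences:

    def predictive_values(sensitivity, specificity, prevalence):
        """Positive and negative predictive values via Bayes' theorem."""
        # Joint probabilities of each test outcome with and without disease
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        true_neg = specificity * (1 - prevalence)
        false_neg = (1 - sensitivity) * prevalence

        ppv = true_pos / (true_pos + false_pos)   # P(disease | positive test)
        npv = true_neg / (true_neg + false_neg)   # P(no disease | negative test)
        return ppv, npv

    # Hypothetical test with 95% sensitivity and 95% specificity
    for prevalence in (0.50, 0.10, 0.001):
        ppv, npv = predictive_values(0.95, 0.95, prevalence)
        print(f"prevalence {prevalence:>6.3f}: PPV = {ppv:.3f}, NPV = {npv:.4f}")

With these hypothetical figures the positive predictive value is about 0.95 at a prevalence of 50%, about 0.68 at 10%, and under 0.02 at 0.1%, which is the point of the screening example: at very low prevalence, most positive results are false positives even for an accurate test.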