++
Uh, oh … Here comes the math. This section is important, especially the concepts of positive and negative predictive values (PPV and NPV) and the concepts of sensitivity and specificity. You need not do the math if you do not want to (although it is simple). Here is a summary:
++
Sensitivity: How often the test will pick up the disease if it is there. Sensitivity = true positives/(true positives + false negatives). Note that the sum of true positives + false negatives represents all of the people with disease.
++
Specificity: The proportion of patients who do not have the disease and who will test negative for it. Specificity = true negatives/(true negatives + false positives). Note that the sum of true negatives + false positives represents all of the people who do not have disease.
++
Positive predictive value: The probability that someone with a positive test actually has the disease. This takes the prevalence of a disease into account. For example, an individual with a positive HIV test who is an IV drug user is more likely to really have the disease than a clean-living nun with a positive HIV test. In the nun, the test is more likely to be a false positive.
++
++
Negative predictive value: The probability that someone with a negative test actually does not have the disease. Again, this takes the prevalence of the disease into account. So, for example, a negative HIV test in an IV drug user from Sub-Saharan Africa with a CD4 count of 150 cells/mm³ and PCP is likely to be a false negative. Conversely, a negative HIV test in a nun, for example, is likely to be a true negative.
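For readers who want to see the arithmetic made concrete, here is a minimal Python sketch (ours, not from the text) that computes all four quantities directly from the cells of a 2 × 2 table; the variable names tp, fp, fn, and tn are simply our labels for true positives, false positives, false negatives, and true negatives, and the example counts at the end are hypothetical.

```python
# Minimal sketch: test characteristics computed from 2 x 2 table cells.
# tp = true positives, fp = false positives, fn = false negatives, tn = true negatives.

def sensitivity(tp, fn):
    """Proportion of people WITH the disease whom the test calls positive."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of people WITHOUT the disease whom the test calls negative."""
    return tn / (tn + fp)

def ppv(tp, fp):
    """Probability that a positive test means the disease is really present."""
    return tp / (tp + fp)

def npv(tn, fn):
    """Probability that a negative test means the disease is really absent."""
    return tn / (tn + fn)

# Hypothetical counts for illustration only.
print(sensitivity(tp=90, fn=10))   # 0.90
print(specificity(tn=95, fp=5))    # 0.95
print(ppv(tp=90, fp=5))            # ~0.95
print(npv(tn=95, fn=10))           # ~0.90
```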
++
++
A new test, the "reception-o-meter," has been developed that can tell whether a cell phone will have reception in a given area (allegedly better than a guy walking around asking, "Can you hear me now?"). When compared with the gold standard of turning on your cell phone and checking whether you have reception or not, the new test has a sensitivity of 90% (will pick up a signal 90% of the time when there is one) and a specificity of 95% (there are only 5% false positives; thus, 95% of the time when the reception-o-meter says there is a signal, there will actually be one).
++
So how can you tell if the phone company is pulling a fast one or if this is a good test? You need to know the PPV of the test. In order to calculate the PPV, you need three pieces of data: the sensitivity of the test (how often the test will pick up the "disease" if it is there), the specificity of the test (how often the test will be negative when the "disease" is absent; 1 − specificity is the false positive rate), and the prevalence of the condition, which in this case is the prevalence of having cell phone reception (in other words, the true amount of cell phone reception in a given area).
++
You are currently in Los Angeles, attending a CME course where the reception for carrier X is 99%. You check your "reception-o-meter" and it says you have coverage. But how likely is it that you actually have coverage?
++
In order to answer this question, you can use Bayes theorem or set up 2 × 2 tables. Here's the 2 × 2 table method. Begin by drawing a 2 × 2 table and filling in what you know. See Table 28-4 and Table 28-5.
++
++
++
++
++
If we have 100 phones, the data will look like the table above.
++
So let's add actual numbers to the table (above). Let's use a population of 10,000. We multiply by the prevalence of reception to get the subpopulation totals. Ninety-nine percent of the population has reception (99% prevalence). So, 99% prevalence × 10,000 = 9,900 with reception; 1% × 10,000 = 100 without reception. Once we have these numbers, we simply multiply by the sensitivity and the specificity to get the exact cell numbers to plug into the table above: 9,900 × 90% sensitivity = 8,910 for cell "a" (true positives); 9,900 – 8,910 = 990 for cell "c" (false negatives); 100 × 95% specificity = 95 for cell "d" (true negatives); 100 – 95 = 5 for cell "b" (false positives). See Table 28-6.
++
++
Once the table is filled in, these numbers can then be used to calculate the PPV, using the equation above. In this case, a/(a + b) = 8,910/8,915 = 99.9%.
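As a check on the arithmetic, here is a short Python sketch (ours, not the book's) that rebuilds Table 28-6 from the prevalence, sensitivity, and specificity and then computes the PPV.

```python
# Rebuild the 2 x 2 table for 10,000 phones: 99% prevalence of reception,
# 90% sensitivity, 95% specificity. Then compute the PPV.
population = 10_000
prevalence, sens, spec = 0.99, 0.90, 0.95

with_reception = population * prevalence          # 9,900
without_reception = population - with_reception   # 100

a = with_reception * sens          # true positives  = 8,910
c = with_reception - a             # false negatives = 990
d = without_reception * spec       # true negatives  = 95
b = without_reception - d          # false positives = 5

print(f"PPV = {a / (a + b):.1%}")  # 8,910/8,915 = 99.9%
```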
++
For those who prefer the Bayes theorem method, here's how this approach is done. Bayes theorem shows the relationships between sensitivity, specificity, prevalence, PPV, and NPV. The equation for PPV, derived from Bayes theorem, is shown below, along with the calculation based on the numbers from the question:
PPV = (sensitivity × prevalence)/[(sensitivity × prevalence) + (1 − specificity) × (1 − prevalence)]

PPV = (0.90 × 0.99)/[(0.90 × 0.99) + (0.05 × 0.01)] = 0.891/0.8915 = 99.9%
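The same number falls out of a few lines of Python; this sketch (ours, with a hypothetical function name) simply plugs the question's sensitivity, specificity, and prevalence into the formula above.

```python
# PPV from Bayes theorem: P(disease | positive test).
def bayes_ppv(sens, spec, prev):
    # Numerator: the true-positive "mass"; denominator adds the false-positive "mass".
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

print(f"{bayes_ppv(sens=0.90, spec=0.95, prev=0.99):.1%}")  # 99.9%, matching the table
```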
++
Question 28.6.1 What would the likelihood of not having coverage be if the "reception-o-meter" had said you did not have coverage (what is the NPV)?
++
++
++
++
++
++
++
++
++
++
++
+
++
Answer 28.6.1 The correct answer is "D." The question asks for the NPV, the likelihood of not having coverage if the reception-o-meter is negative. This also can be derived from Bayes theorem or calculated using a 2 × 2 table. For those of you who prefer the Bayes theorem method, the equation for NPV, derived from Bayes theorem, is shown below, along with the calculation based on the numbers from the question.
NPV = [specificity × (1 − prevalence)]/{[specificity × (1 − prevalence)] + [(1 − sensitivity) × prevalence]}

NPV = (0.95 × 0.01)/[(0.95 × 0.01) + (0.10 × 0.99)] = 0.0095/0.1085 = 8.8%
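A matching Python sketch (ours, for illustration) gives the same answer whether you use the Bayes formula or the cell values from Table 28-6.

```python
# NPV from Bayes theorem: P(no disease | negative test).
def bayes_npv(sens, spec, prev):
    return (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"{bayes_npv(sens=0.90, spec=0.95, prev=0.99):.1%}")  # ~8.8%

# Same thing from Table 28-6: d/(c + d) = 95/(990 + 95).
print(f"{95 / (990 + 95):.1%}")                             # ~8.8%
```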
++
You are now in rural Russia, where you have been invited to help with community efforts to fight multidrug-resistant tuberculosis. Here, cell phone reception for carrier Y is 10%. You check your "reception-o-meter" and it says you have reception.
++
++
++
Question 28.6.2 What is the likelihood that your cell phone actually will have reception if you try to make a call?
++
++
++
++
++
++
++
++
++
++
++
+
++
Answer 28.6.2 The correct answer is "C." You can use the 2 × 2 table method or the Bayes theorem method.
++
Here's what our 2 × 2 table looks like. See Table 28-7.
++
++
To convert to 10% prevalence, we start with a large baseline population and multiply by the prevalence to get the subpopulation totals (10% prevalence × 10,000 = 1,000 with reception; 90% × 10,000 = 9,000 without reception). Once we have the subpopulation totals, we multiply by the sensitivity and the specificity to get the exact cell numbers (1,000 × 90% sensitivity = 900 for cell "a"; 1,000 – 900 = 100 for cell "c" (or, alternatively, 1,000 × 10% gives the same result for cell "c"); 9,000 × 95% specificity = 8,550 for cell "d"; 9,000 – 8,550 = 450 for cell "b").
++
These numbers can then be used to calculate the PPV, using the equation above. In this case, a/(a + b) = 900/(900 + 450) = 66.7% (rounds to 67%). Using Bayes theorem, the equation is as follows.
PPV = (0.90 × 0.10)/[(0.90 × 0.10) + (0.05 × 0.90)] = 0.09/0.135 = 66.7%
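Here is the same kind of Python sketch (ours) for the low-prevalence setting; it rebuilds Table 28-7 and shows the PPV dropping to about 67% even though the test itself has not changed.

```python
# Same test (90% sensitivity, 95% specificity), but only 10% prevalence of reception.
population = 10_000
prevalence, sens, spec = 0.10, 0.90, 0.95

with_reception = population * prevalence           # 1,000
without_reception = population - with_reception    # 9,000

a = with_reception * sens          # true positives  = 900
c = with_reception - a             # false negatives = 100
d = without_reception * spec       # true negatives  = 8,550
b = without_reception - d          # false positives = 450

print(f"PPV = {a / (a + b):.1%}")  # 900/1,350 = 66.7%
```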
++
HELPFUL TIP:
A test that has a negative predictive value of 99% may sound good. But if only 1% of the population has the disease, doing no test at all (simply calling everyone negative) will also have a 99% negative predictive value.
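To see why, here is a tiny Python illustration (ours, with made-up round numbers): declare all 10,000 people in a 1%-prevalence population negative and the NPV is already 99%.

```python
# "No test": call all 10,000 people negative when prevalence is 1%.
population, prevalence = 10_000, 0.01
fn = population * prevalence          # 100 diseased people, all labeled negative
tn = population - fn                  # 9,900 healthy people, all labeled negative
print(f"NPV = {tn / (tn + fn):.0%}")  # 99%
```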
++
++
++
Question 28.6.3 Still in that remote tuberculosis-infested region of Russia, we ask: What would the likelihood of having coverage be if the "reception-o-meter" said you did not have coverage?
++
++
++
++
++
++
++
++
++
++
++
+
++
Answer 28.6.3 The correct answer is "D." Again, you can use the 2 × 2 method or Bayes theorem. The 2 × 2 table for this question is the same as it was for the previous question (see Table 28-7). However, unlike previously, you are asked for the likelihood of reception if the "reception-o-meter" said there was no reception. In other words, you have been asked to calculate the false negative rate (FNR) for this scenario. The equation for the FNR is below.
++
++
You were not asked to calculate it, but there is also a false positive rate (FPR), which is shown below.
++
++
Cervical cancer is a disease in which early detection can make a great difference in halting disease progression. One screening procedure for this disease is the Papanicolaou ("Pap") smear. In a (fictional) study to assess the competency of technicians who read Pap smear slides, a local lab checked its technicians' work against patient records.
++
A total of 1,000 Pap smears were read. Of these, 100 patients had cervical abnormalities based on biopsy (gold standard). Of this group, 75 had abnormal (positive) Pap smears and 25 had negative Pap smears. There were 900 women without disease. Of these 900 women, 200 had positive Pap smears and 700 had negative Pap smears. Note that these are example numbers only, have no basis in reality, and do not reflect the actual sensitivities and specificities of Pap smears.
++
++
++
Question 28.6.4 Using the data above, which of the following is true about this survey of Pap smear technicians?
++
++
++
++
++
++
++
C) The sensitivity of the Pap test is 75%.
++
++
D) The specificity of the Pap test is 98%.
++
++
E) The prevalence of cervical cancer in this sample is 7.5%.
+
++
Answer 28.6.4 The correct answer is "C." The sensitivity of the test is 75%. Setting up the data in a 2 × 2 table, we are able to answer the question. See Table 28-8.
++
++
Sensitivity: Probability that a patient with the disease will have a positive result.
Sensitivity = (TP/(TP + FN)) = 75/100 = 0.75 or 75% sensitive.
++
Specificity: Probability that a patient without the disease will have a negative test.
Specificity = (TN/(FP + TN)) = 700/900 = 0.777 or about 78% specific.
++
FNR: Patient has the disease but the test is negative.
FNR = (FN/(TP + FN)) = 25/100 = 25% FNR. Also calculated as 1 – sensitivity.
++
FPR: The patient has a positive test but does not have the disease.
FPR = (FP/(FP + TN)) = 200/900 = 0.22 or 22% false positive. Also calculated as 1 – specificity.
++
We are going to make another assumption here: that the prevalence of disease in the population matches that seen in this sample. The prevalence is the proportion of individuals who have the disease at any point in time. One way to describe it is as follows: prevalence = ((TP + FN)/total population) = 100/1,000 = 10%, or a prevalence of 100 per 1,000 people.
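The worked example above condenses into a few lines of Python (ours, using the cell counts from Table 28-8).

```python
# Pap smear example: cell counts from the (fictional) lab survey.
tp, fn = 75, 25      # 100 women with cervical abnormalities on biopsy
fp, tn = 200, 700    # 900 women without disease

print(f"Sensitivity = {tp / (tp + fn):.0%}")   # 75%
print(f"Specificity = {tn / (tn + fp):.0%}")   # ~78%
print(f"FNR = {fn / (tp + fn):.0%}")           # 25% (1 - sensitivity)
print(f"FPR = {fp / (fp + tn):.0%}")           # ~22% (1 - specificity)
print(f"Prevalence = {(tp + fn) / (tp + fn + fp + tn):.0%}")  # 10%
```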
++
++
++
Question 28.6.5 Given the above results of the Pap smear screening tests, and assuming the prevalence of cervical abnormalities among women is 10%, applying Bayes theorem we find:
++
++
++
++
++
++
++
++
++
D) Unable to solve the problem with data provided.
++
++
+
++
Answer 28.6.5 The correct answer is "E." The prevalence of a disease is the proportion of individuals who have the disease at a given point in time ((TP + FN)/(Total population) = 0.1 or 10%).
++
The PPV of a test is the probability that a disease exists given a positive test result = TP/(TP + FP) or 75/275 = 27%. So, a patient with a positive test result only has a 27% chance of actually having the disease because there are so many false positives.
++
The NPV of a test is the probability of no disease given a negative test result (TN/(FN + TN)) = 700/725 = 96%. So, a patient with a negative test has a 96% chance of not having the disease. This is because there are few false negatives compared with the size of the overall population. If, for example, there were 200 false negatives in the same population, the negative predictive value would be only 700/900 = 78%.
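Continuing the same Python sketch (ours), the predictive values and the 200-false-negative counterfactual come out as follows; the text rounds 27.3% to 27% and 96.6% to 96%.

```python
# Predictive values for the Pap smear example (counts from Table 28-8).
tp, fn, fp, tn = 75, 25, 200, 700

print(f"PPV = {tp / (tp + fp):.1%}")   # 75/275  = 27.3%
print(f"NPV = {tn / (tn + fn):.1%}")   # 700/725 = 96.6%

# Counterfactual: 200 false negatives instead of 25 in the same population.
print(f"NPV with 200 FN = {700 / (700 + 200):.1%}")  # 77.8%, about 78%
```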
++
Recall that 100 out of 1,000 women had positive biopsies and thus had the disease regardless of what the Pap test said.
++
++
++
Question 28.6.6 How does the pretest probability of cervical abnormalities among women compare with the posttest probability?
++
++
++
A) Posttest probability is about three times greater than the pretest probability.
++
++
B) Pretest probability is about three times greater than the posttest probability.
++
++
C) Posttest probability is 10 times greater.
++
++
D) Pretest probability is 10 times greater.
++
++
E) The pretest and posttest probabilities are equal.
+
++
Answer 28.6.6 The correct answer is "A." The pretest probability is given above as 100/1,000 or 10%. We know that 10% of the population has the disease. The posttest probability after a positive test is the PPV. Remember from above, the PPV of a test is the probability that a disease exists given a positive test result = TP/(TP + FP) or 75/275 = 27%. Comparing the two results, a pretest probability of 10% and a posttest probability of 27%, we find that the posttest probability is about three times greater than the pretest probability. If answer "E" were correct and the pretest and posttest probabilities were equal, there would be no point in doing the test.
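As a final one-line check (ours):

```python
pretest = 100 / 1_000        # 10% prevalence
posttest = 75 / (75 + 200)   # PPV after a positive Pap smear, ~27%
print(f"Posttest probability is {posttest / pretest:.1f}x the pretest probability")  # ~2.7x
```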
+++
Objectives: Did you learn to…
++
Define and calculate sensitivity and then apply it to data interpretation?
Define and calculate specificity and then apply it to data interpretation?
Calculate positive and negative predictive values?
Apply Bayes theorem to determine the utility of a test?
++
Clinical Pearls
A highly sensitive test helps to rule OUT disease; a highly specific test helps to rule IN disease.
A treatment that is statistically significantly superior to placebo may not offer a clinically significant benefit. Use clinical judgment when interpreting study results.
Compare number needed to treat (NNT) with number needed to harm (NNH) when considering therapies, rather than relying on relative risk reduction. The same calculation can be done for screening tests (e.g., number of women needed to screen to avoid one breast cancer death).
Do not draw conclusions from subgroup analyses. The only conclusion that can be drawn is, "This must be studied."
Recognize that the utility of a test is contingent upon the sensitivity and specificity of the test and the prevalence of disease in the population being tested. Therefore, a sensitive and specific test may have a low predictive value in a population with very low disease prevalence.
When evaluating a non-inferiority study, look for the "margin" used by the investigators. This is the maximum extent of clinical difference that will be considered non-inferior (e.g., a margin of 2 means twice as many events can occur in the experimental group and the treatment will still be considered non-inferior).