Predictive values of tests in the "real world"
Today, Ben Goldacre of badscience discusses what the sensitivity and specificity of diagnostic tests mean when applied to real-world situations. He includes two examples to illustrate how much (or how little) these figures tell you in practice: predicting true HIV positives in a population, and interpreting data on the rate of homicides committed by people diagnosed with a psychiatric illness.
The background rate of disease (e.g., HIV) in a population strongly influences the meaning of a positive or negative test result. Incorporating data on the prevalence of disease in a given population is called a probability revision: converting a pre-test probability of disease (the disease prevalence) into a post-test probability (the likelihood that you truly have the disease when you test positive).
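The probability revision described above is just Bayes' theorem applied to test results. Here is a minimal sketch; the sensitivity, specificity, and prevalence values are hypothetical round numbers chosen for illustration, not figures from the badscience post:

```python
# Probability revision: convert a pre-test probability (prevalence)
# into a post-test probability (chance of truly having the disease
# given a positive test result).
# All numeric values below are hypothetical, for illustration only.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' theorem: P(disease | positive test)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# The same test looks very different in low- vs high-prevalence populations.
for prevalence in (0.0001, 0.01, 0.1):
    ppv = positive_predictive_value(0.99, 0.99, prevalence)
    print(f"prevalence {prevalence:.2%}: post-test probability {ppv:.1%}")
```

With a test that is 99% sensitive and 99% specific, a positive result in a population where 1 in 10,000 people has the disease still leaves the post-test probability under 1%, because false positives from the huge disease-free majority swamp the true positives.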
The badscience post provides a number of links to additional reading on this issue.
Other related links:
- this previous post discussing the problems clinicians have interpreting diagnostic test results, which includes a link to a CDC case exercise that works through a probability revision in detail, for those who really want to get at the "math" behind this issue
- this BMJ article by Elstein and Schwarz, "Clinical problem solving and diagnostic decision making: selective review of the cognitive literature," which looks at the cognitive literature describing specific issues that cause clinicians to inappropriately interpret diagnostic test data (also these other articles in the BMJ series, "Evidence base of clinical diagnosis").