Background: The COVID-19 pandemic has led to many studies of seroprevalence. A number of methods exist in the statistical literature to correctly estimate disease prevalence in the presence of diagnostic test misclassification, but these methods appear to be less well known and not routinely used in the public health literature. We aimed to show how widespread the problem is in recent publications, and to quantify the magnitude of bias introduced when correct methods are not used.
Methods: We examined a sample of recent literature to determine how often public health researchers did not account for test performance in estimates of seroprevalence. Using straightforward calculations, we estimated the amount of bias introduced when reporting the proportion of positive test results instead of using sensitivity and specificity to estimate disease prevalence.
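The "straightforward calculations" above presumably resemble the standard Rogan–Gladen correction, which recovers true prevalence from the observed proportion of positive tests given the test's sensitivity and specificity. A minimal sketch, with illustrative numbers not taken from the study:

```python
def rogan_gladen(p_obs, sensitivity, specificity):
    """Corrected prevalence estimate from the observed proportion of
    positive test results (Rogan & Gladen, 1978)."""
    return (p_obs + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Illustrative example: 6% of tests positive, with Se = 0.90 and Sp = 0.95.
p_obs = 0.06
p_true = rogan_gladen(p_obs, 0.90, 0.95)
bias = p_obs - p_true  # bias incurred by reporting the raw positive proportion
print(f"corrected prevalence = {p_true:.4f}, bias = {bias:.4f}")
```

With a perfect test (sensitivity = specificity = 1), the correction reduces to the observed proportion; as specificity falls toward 1 minus the observed proportion, the raw proportion can overstate a low true prevalence severalfold, which is the kind of bias the Results below quantify.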
Results: Of the seroprevalence studies sampled, 80% failed to account for sensitivity and specificity. The expected bias is often larger than would be tolerated in practice, ranging from 1% to 10%.
Conclusions: Researchers conducting studies of prevalence should correctly account for test sensitivity and specificity in their statistical analysis.