In this study, we examined how reports of clinical trials use selection criteria for reporting non-systematic AEs. We found that public sources and CSR-synopses applied selection criteria while the CSRs reported all AEs; consequently, most AEs and serious AEs were not reported in public sources. In public sources, every combined selection criterion we identified included a numerical threshold; however, numerical thresholds were not consistent across trials or across sources for the same trial. Some selection criteria also included requirements related to the participant group and to the difference between groups.
We found no evidence in public sources (e.g., journal articles) or non-public sources (i.e., CSRs) that any trial used pre-specified selection criteria. We assume that public sources reporting a small number of AEs used selection criteria; however, many sources did not describe the selection criteria they used. Among public sources that did not describe selection criteria, we could not determine why the authors reported some non-systematic AEs and not others. Even when selection criteria were described, public sources did not explain why those criteria were applied (e.g., whether they followed a pre-specified statistical analysis plan).
When we applied various selection criteria to eligible trials and compared what would be reported, we observed meaningful differences in the number of AEs that would be reported. Trialists might have used selection criteria to identify the most important AEs; however, we found no evidence that patients or clinicians were involved in deciding which AEs would be reported in public sources. Instead, it appears that public sources included the most common AEs. Trialists could have performed analyses similar to ours in which they applied different selection criteria and then decided post hoc which selection criteria would allow them to report, or not report, particular AEs. In a previous study, we found that trialists did not “group” AEs for reporting, which would be another method of consolidating AEs; however, grouping can also disguise important AEs by combining them with less important ones.
Evidence syntheses (e.g., systematic reviews, clinical practice guidelines) could help identify rare AEs if all observed AEs were available for all trials; however, rare AEs cannot be identified when clinical trials report only those AEs occurring above numerical thresholds. Just as selectively reporting potential benefits based on quantitative results leads to biased meta-analyses [40-43], selective reporting of AEs based on trial results will necessarily lead to biased overall estimates. Selection criteria make it impossible for systematic reviewers and meta-analysts to accurately assess all AEs caused by medical interventions.
We found that no public source reported all AEs for any trial. Only serious AEs and AEs occurring in more than 5% of participants are required to be reported on www.ClinicalTrials.gov under the Food and Drug Administration Amendments Act of 2007 (FDAAA) [1,44]. Doctors and patients often rely on post-marketing surveillance studies to identify rare AEs, including serious AEs, yet these studies lack the comparison groups present in clinical trials. It might be possible to detect rare AEs in clinical trials, and to calculate between-group differences, if investigators did not use numerical thresholds to determine which AEs to report. For common AEs, such as headache, investigators must ensure that data are collected consistently across intervention groups so that proportions in each group can be compared; this is difficult when AEs are collected non-systematically. Although systematically collecting AEs would result in more trustworthy and usable information about between-group effects [19,20], it is not always possible, or even desirable, to anticipate which AEs patients will experience.
Authors have previously discouraged the use of selection criteria [33,34]; however, trials would have to report hundreds or thousands of AEs if selection criteria were not used. Making CSRs and equivalent reports for non-industry trials public, and grouping AEs for analysis and reporting, would partially address the problem of reporting many AEs in journal articles and other public sources. A complete solution to the problems we have identified is not obvious.
Multiple sources of public AE information from trials (e.g., journal articles, FDA reviews), which may be written by different authors for different purposes, lead to inconsistent information for patients and doctors. For example, FDA reviewers have access to CSRs that report all AEs that occurred in each trial. FDA reviewers apply clinical and statistical judgment when deciding what to report in medical and statistical reviews of new drugs and biologics. FDA and manufacturers also decide what to include in prescribing information (drug “labels”) written for patients and doctors. By comparison, other stakeholders get their information about interventions from a variety of sources, and those sources may use different selection criteria for reporting. U.S. law requires that investigators report on www.ClinicalTrials.gov both serious AEs and non-serious AEs that occur in more than 5% of participants in applicable clinical trials. Individual authors and journal editors can decide what to report in journal articles, which vary tremendously. Different selection criteria across reports of clinical trials, including multiple reports of the same trial, lead to inconsistent and confusing information for stakeholders; consistent standards for reporting AEs and open access to trial information (e.g., CSRs) could help.
Because current methods for selecting AEs lead to biased reporting that will necessarily produce incorrect effect estimates in systematic reviews and clinical practice guidelines, journals could require that trialists report all AEs on www.ClinicalTrials.gov or other registries. When it is not feasible to report all AEs in a given source (e.g., a conference abstract), the source could direct readers to additional information in a registry such as www.ClinicalTrials.gov. FDA and other regulators could make all AEs publicly available. If all AEs are not made available by regulators or by trialists, systematic reviewers and meta-analysts should interpret results with extreme caution and explain the limitations of using only publicly available data.
There is a pressing need to make clinical trial data available to the public, especially CSRs and individual participant data (IPD); however, sharing CSRs and IPD will not solve all of the problems identified in our research. First, reports describing hundreds of AEs might overwhelm doctors and patients by including “too much” information. Many AEs reported in CSRs and IPD are not intervention-related; selection criteria might have been used to help decision-makers identify AEs that are caused by medical products. Sharing lengthy and confusing reports and datasets could increase the appearance of transparency while actually disguising important information. Second, reanalyzing clinical trial data is time-consuming and therefore expensive, and reanalysis should not be necessary to identify AEs. Most decision-makers want trustworthy summaries of clinical trials.
To avoid reporting bias, and to avoid overwhelming patients and doctors with information, trials could pre-specify which AEs will be collected and reported. Core outcome sets for AEs could improve the comparability of clinical trials and could identify the AEs that are most important to patients, not just those that occurred most often [19,20,46].
It is a limitation that we included only two drug-indication pairs and that we had non-public information for a minority of trials in each case study. The use of selection criteria, and the extent to which they are pre-specified and consistent, might differ across companies and investigators. Nonetheless, the results of this methodological investigation highlight fundamental problems with the methods currently used to report AEs that occur in clinical trials.