Primary outcome: Feasibility of the study
A summary of the results for the feasibility of the study is reported in Table 2.
1. Implementation at participating hospitals
The intervention was implemented at the Hamilton sites, but not at the Montfort Hospital in Ottawa. The PE diagnostic order set was initiated on October 28, 2019. The topic of PE diagnosis was discussed at the HHS ED physician rounds, and three educational podcasts were recorded and remain freely available for streaming and download.18–20 All emergency physicians received an email explaining the rationale for the intervention and its objective, accompanied by a Frequently Asked Questions section and educational material (email text available in Appendix B). As a reminder to the ED physician group, we attached laminated stickers with a logo and the invitation to “rule out PE without CT” to each computer in their offices (Figure 2). A team of three nurse educators engaged in individual meetings with the ED nurses, explaining the aim of the intervention and the new workflow. A multidisciplinary team including managers, radiologists, emergency physicians, educators, radiation technologists, and nurses met monthly to review progress and problems arising with the new order set. Several logistical changes were made, including automatic population of the CTPA request form with the D-dimer result and estimated glomerular filtration rate for the radiology technicians, and streamlining of the process.
We held regular project meetings with the Montfort Hospital ED. In May 2019, our colleagues informed us that they were unable to participate in the study because of a lack of data access and resources.
2. Identification of the population
We found that the two selected CEDIS presenting complaints (chest pain and shortness of breath) captured only 70% of all CTPAs ordered by emergency physicians. Therefore, the inclusion criteria were expanded to 13 ED presenting complaints (Table 3), allowing us to capture 90% of the ED-ordered CTPAs. The remaining 10% were dispersed among 40 presenting complaints, each accounting for 0.1–0.9% of all CTPAs (Table S1).
3. Obtaining electronic data
We requested data from three sources: the hospital decision support services, an internal hospital research database, and the eHealth Information Technology Services (eHITS) office. The first two sources were unable to provide timely data (at the end of each month). The eHITS department was able to provide the required data within the required turnaround time. After working together to define the database queries and validate the data, we received the first finalized dataset in January 2019. The system is now automated, with monthly updates.
4. Manual data extraction
We found that the “ordering doctor” variable for CTPAs in the electronic medical record (EMR) was inaccurate, because some scans ordered by an admitting service were logged under the emergency physician’s name. Accurate ordering-physician data were crucial; otherwise, the physician audit and feedback would lose credibility. We therefore manually verified the ordering physician data. Despite this increase in workload, the manual extraction was always completed before the end of the following month.
5. Implementation of individual physician feedback
Audit and feedback interventions are effective for improving healthcare professionals' compliance with desired practice,21 and individual feedback may confer additional benefit compared with group feedback alone.22 The path toward the implementation of physician feedback proved challenging, and many adaptations of the original plan were required. Initially, we aimed to provide individual feedback to physicians on a monthly basis. On review, we realized that the number of CTPAs ordered per physician per month was too low (range 0–6). The proportion of inappropriate CTPAs would have been subject to enormous variability from very small changes in the actual number of inappropriate CTPAs ordered. We therefore reduced the frequency of feedback from monthly to quarterly, and the feedback was issued at the end of the first three months post-implementation. An example of the physician feedback is reported in Figure 3.
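The instability of small-denominator proportions described above can be illustrated numerically; the counts below are hypothetical and chosen only to mirror the reported range of 0–6 CTPAs per physician per month, not taken from the study data:

```python
# Illustrative only: with very few CTPAs per physician per month,
# a single additional inappropriate scan swings the proportion enormously.
def inappropriate_rate(inappropriate: int, total: int) -> float:
    """Proportion of inappropriate CTPAs, expressed as a percentage."""
    return 100.0 * inappropriate / total

# A physician ordering 3 CTPAs in a month (hypothetical counts):
# 0, 1, or 2 inappropriate scans give rates of 0%, ~33%, and ~67%.
monthly = [inappropriate_rate(k, 3) for k in (0, 1, 2)]

# The same physician over a quarter (say, 12 CTPAs) is far more stable:
# the same one-scan change now moves the rate by only ~8 points.
quarterly = [inappropriate_rate(k, 12) for k in (0, 1, 2)]

print(monthly, quarterly)
```

This is the arithmetic behind the switch from monthly to quarterly feedback: aggregating over a longer window increases the denominator, so each individual scan moves the reported proportion much less.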
6. Estimate of research assistant time
Based on the first three months, we calculated that a research assistant is required for twelve hours per week on the project. In addition, a physician should spend two hours per month resolving queries. Preparing and checking the individual feedback requires an additional six hours per month, on average. Therefore, the total time required to complete these tasks was approximately 14 hours per week (less than two days), thereby meeting the feasibility criterion.
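The weekly total above can be reproduced with simple arithmetic. The conversion of the monthly components to weekly hours (52 weeks / 12 months ≈ 4.33 weeks per month) is our assumption; the text itself only states the rounded total:

```python
# Sketch of the staffing arithmetic; the weeks-per-month factor is assumed.
WEEKS_PER_MONTH = 52 / 12  # ~4.33

ra_hours_per_week = 12           # research assistant time on the project
physician_hours_per_month = 2    # physician time resolving queries
feedback_hours_per_month = 6     # preparing and checking individual feedback

monthly_extras = physician_hours_per_month + feedback_hours_per_month
total_per_week = ra_hours_per_week + monthly_extras / WEEKS_PER_MONTH

print(round(total_per_week, 1))  # → 13.8, i.e. approximately 14 hours/week
```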
Secondary outcomes: preliminary estimates of effect
The patients’ characteristics and outcome distribution are reported in Table 4. In total, 81,103 patients accessed the HHS EDs with one of the selected presenting complaints between January 1, 2018 and February 28, 2020 (70,932 in the pre-intervention period and 10,171 after the intervention). Of these, 5,993 patients were tested for PE and 2,267 underwent CTPA or VQ scanning. A total of 285 patients (0.4% of the study population) were diagnosed with acute PE.
Before and after the intervention, 7.2% and 8.8% of the study population, respectively, were tested for PE. There was an 8.1% (95% CI 5.0 to 11.2) increase in adherence to the proposed protocol. The imaging positive yield showed a trend toward reduction (−2.6%, 95% CI −13.1 to 8.0). The time trend for PE testing is reported in Figure 4, and the time trends for the remaining secondary outcomes are reported in the appendix (Figures S1–S6).
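As an illustration of how a difference in proportions and its 95% confidence interval can be computed, the sketch below uses a standard Wald interval with hypothetical counts back-derived from the reported testing rates (7.2% of 70,932 before vs. 8.8% of 10,171 after); the study's actual statistical method and denominators may well differ:

```python
import math

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    """Difference in proportions (p2 - p1) with a Wald 95% CI.

    Illustrative only; the study's own analysis may use another method.
    """
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Hypothetical counts consistent with the reported before/after testing rates:
diff, lo, hi = risk_difference_ci(5107, 70932, 895, 10171)
print(f"{100 * diff:.1f}% (95% CI {100 * lo:.1f} to {100 * hi:.1f})")
```

Note that a simple two-proportion comparison like this ignores any underlying time trend; interrupted time-series displays such as those in Figure 4 and Figures S1–S6 are typically used to address that.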
Secondary outcomes: balancing measures
In our study population, a D-dimer was ordered in 6.6% and 8.5% of the patients before and after the intervention, respectively.