We conducted a nationwide, three-stage, modified Delphi study from May to December 2019 in Canada. We developed a detailed survey eliciting the opinions of PICU healthcare providers on 1) the underlying principles that guide dosing and influence the consideration of a dosing error, and 2) the thresholds that define dosing errors, in pediatric critical care. Study reporting followed the STROBE guidelines (STROBE checklist, Supplement Table 1) (15).
The Delphi methodology was used to achieve consensus amongst a group of PICU experts, with multiple iterative rounds and anonymous feedback after each round (16). This methodology was favoured over other structured consensus methods because it did not require face-to-face group interaction, was questionnaire-based, preserved the anonymity of respondents, and has been frequently used and accepted for scientific consensus of this nature (17, 18). Furthermore, electronic distribution of the survey allowed timely completion of a nationwide survey with multiple rounds for participants. Given that this Delphi was conducted across Canada, we opted for a modified Delphi approach. A modified Delphi differs from the classical Delphi method in that the expert panel is not involved in the initial process of generating principles; in our study, this initial step was replaced by local face-to-face focus groups (19).
We conducted deliberate purposive sampling of expert clinicians across Canada. Eligible clinicians were physicians, nurse managers and pharmacists who were currently working the majority of their time in a Canadian PICU and had at least 5 years of PICU experience. Participants were contacted through the electronic mailing list of the Canadian Critical Care Trials Group (CCCTG) and asked whether they were interested in participating. From this mailing list, we then attempted to solicit at least one eligible physician, nurse and pharmacist from each center by asking CCCTG members to refer colleagues who might be interested. Participation on the expert panel was voluntary. No incentives were offered.
An a priori sample size of 40 expert healthcare providers was established (8, 10, 17, 20), slightly higher than in previous pharmacological studies using Delphi consensus methodology. Assuming an 80% response rate and some loss of adherence across three rounds, recruitment remained open until at least 50 providers agreed to participate.
Two outcomes were assessed: dosing principles and error thresholds. Dosing principles were elicited through literature review and small local focus groups, then rated by the expert panel for strength of agreement or disagreement on a Likert scale. Error thresholds were defined as the proportion above or below the reference range that expert clinicians agreed constituted a dosing error, given the specific drug class and clinical context.
Survey Creation - Item generation & reduction
The survey was generated using a guide for self-administered surveys of clinicians (21). Survey principles were generated through literature review and expert opinion in small local focus group sessions. Two small multidisciplinary focus groups were held with target clinicians (n=9, including physicians, pharmacists and nurses) to brainstorm factors that may affect dosing thresholds. Groups were continued until no new items were generated and dosing threshold cut-offs were established. The medication classes tested in the survey were those most frequently involved in pediatric dosing errors: anti-infectives, analgesic and sedative agents, and electrolytes (1, 6, 7). Item reduction was achieved by eliminating superfluous questions to minimize respondent burden, while retaining items that addressed a variety of medication classes, adjustment for organ dysfunction, and pharmacokinetic properties. Further items were removed and/or altered after pre-testing the survey amongst clinicians. Error thresholds were established by focus group consensus and translated to a Likert scale (See Supplement Figure 1). Three of the nine clinicians from the focus groups went on to participate in the Delphi.
The survey was divided into three sections: demographic description of respondents, dosing principles, and error thresholds. Dosing principles included drug, patient and clinical factors and were scored on a 5-point Likert scale from “Yes, definitely an error” to “No, definitely not an error” (See Supplement Table 2). The dosing error thresholds were on a 4- or 5-point Likert scale: there were 4 threshold categories for under-dosing (1-10%, 11-20%, 21-50%, 51-100%) and 5 for overdosing (1-10%, 11-20%, 21-50%, 51-100%, >100%). The option “I don’t know” was always available to acknowledge uncertainty, as was the option “Depends” to identify new issues or principles that may suggest an error in one type of clinical scenario but not another. An example of a Round 2 error threshold question is included in Supplement Figure 1.
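For concreteness, the threshold response structure described above can be sketched as plain data. The bin edges, anchor labels and extra options come from the text; the encoding itself (tuples, the `bin_label` helper) is purely illustrative, not part of the study instrument.

```python
# Error-threshold response categories, as (low, high) percent deviations
# from the reference range; None marks an open upper bound (">100%").
UNDERDOSE_BINS = [(1, 10), (11, 20), (21, 50), (51, 100)]              # 4 categories
OVERDOSE_BINS = [(1, 10), (11, 20), (21, 50), (51, 100), (101, None)]  # 5 categories

# Options always offered alongside the bins, per the survey design.
EXTRA_OPTIONS = ["I don't know", "Depends"]

def bin_label(threshold_bin):
    """Render a bin as the label shown to respondents (illustrative)."""
    lo, hi = threshold_bin
    return f"{lo}-{hi}%" if hi is not None else f">{lo - 1}%"
```

Rendering `OVERDOSE_BINS` with `bin_label` reproduces the five overdosing options listed above, ending with ">100%".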
Pre-testing of the survey was conducted with the help of clinicians in 2 different PICUs to refine questionnaire content and answer format. Pilot testing, conducted in 2 centers, evaluated clinical sensibility, including face validity; content validity (whether the questionnaire measured what it was intended to measure); and clarity and comprehension.
Expert clinician participants were recruited via e-mail. Upon agreement to participate, respondents were sent an electronic or paper survey, according to their preference and the desire to preserve anonymity. All surveys were anonymous, and consent was implied by survey completion. Clinicians who responded but were ineligible were not included in the analysis or in further rounds. Participants received 3 reminders to complete each round. In Rounds 2 and 3 of the survey, questions for which consensus had been achieved were removed, and aggregated responses of participants were presented for questions where consensus had not been achieved.
Data were summarized as numbers and proportions and reported according to proposed methodological criteria for Delphi studies (22). An a priori definition of consensus was established: consensus was achieved when ≥70% of respondents agreed on an answer in Round 1, and ≥60% of respondents agreed on an answer in Rounds 2 and 3 (22). Disagreement was defined as ≥35% of responses falling in each of the two extreme options on the Likert scale. All other combinations of panel answers were considered ‘partial agreement’. Blank responses were recorded as blank, and proportions were adjusted accordingly. Survey data were imported into and analyzed with Excel version 16.2. Research Ethics Board approval was obtained from the Hospital for Sick Children (#1000062863) and the University of Toronto (#00037969).
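As a sketch, the a priori classification rules above can be expressed as a small function. The 70%/60% consensus thresholds, the 35% check on both extremes, and the exclusion of blanks before computing proportions follow the text; the Likert coding (1-5), the function name, and other implementation details are illustrative assumptions, not the study's actual analysis code (which was performed in Excel).

```python
from collections import Counter

def classify_round(responses, round_number):
    """Classify one question's panel responses per the a priori rules:
    consensus    = >=70% choose the same answer in Round 1, >=60% in Rounds 2-3;
    disagreement = >=35% of responses in EACH of the two extreme Likert options;
    anything else = partial agreement.
    responses: Likert codes 1..5 (1 = "Yes, definitely an error",
    5 = "No, definitely not an error"); None marks a blank response,
    which is excluded before proportions are computed."""
    answered = [r for r in responses if r is not None]  # adjust denominator for blanks
    n = len(answered)
    counts = Counter(answered)
    threshold = 0.70 if round_number == 1 else 0.60
    if max(counts.values()) / n >= threshold:
        return "consensus"
    if counts.get(1, 0) / n >= 0.35 and counts.get(5, 0) / n >= 0.35:
        return "disagreement"
    return "partial agreement"
```

For example, a Round 1 question where 7 of 10 panelists choose the same option meets the 70% bar, while a split of 4 panelists at each extreme (with 2 in the middle) is classified as disagreement.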