Reporting and design of the randomised trial are based on the Consolidated Standards of Reporting Trials (CONSORT) guidelines.[16]
Participants
The research was conducted within the context of the Medicine in Australia: Balancing Employment and Life (MABEL) survey, a longitudinal panel survey of all medical practitioners in Australia that collected 11 annual waves of data from around 9,000 to 10,000 physicians per wave.[17] The original respondents in Wave 1 (2008) were followed up annually, with a cohort of new doctors entering the sample frame added at Wave 2 and each subsequent wave.[18] Each wave therefore contained a mixture of doctors from different cohorts. Responses for each wave were gathered using a sequential mixed-mode design based on an earlier RCT.[10] The MABEL survey was sent to all types of medical practitioners in Australia.
The sample frame for MABEL is the Medical Directory of Australia, a national database of doctors held by the Australasian Medical Publishing Company (AMPCo). We use participants from Wave 11, administered between August 2018 and April 2019. Doctors were excluded if they had previously requested to withdraw from the MABEL survey or were known to be deceased. Junior doctors were excluded because a small experiment conducted in 2016 supported the use of an email approach for junior doctors, which was adopted for this group only in subsequent waves.[17]
Intervention
The mailout for the intervention group included an email approach for the second reminder. Both the intervention and control groups were approached four times: the initial invitation plus three reminders. In the control group, all four approaches used a mailed paper letter sent to each participant’s work address. All survey materials are available at www.mabel.org.au. The initial invitation was a mailed letter containing login details for online completion, together with a paper copy of the survey; respondents could choose their mode of completion. The first reminder was a mailed paper letter with instructions for online completion but no paper survey. The second reminder was a mailed letter with instructions for online completion and a paper copy of the survey. The third reminder was a mailed paper letter with instructions for online completion only.
The intervention group differed only at the second reminder, where participants were approached by email and could only complete online, receiving no paper letter or paper copy of the survey. The email contained the same text as the paper letter plus a link and instructions for online completion. Emails were sent to addresses held in the AMPCo database or provided by participants in earlier waves of the survey.
Outcomes
The primary outcome for this study is the response rate. A medical practitioner was considered to have responded if they returned their survey (by mail or online) with Section A completed, which included questions on whether they were currently participating in clinical practice, and with at least one question answered in Section B. Current working status was a key outcome in some research questions addressed by the MABEL survey. Surveys returned blank were counted as refusals to participate. The response rate was calculated as the total number of responses divided by the total number of surveys distributed (excluding surveys that could not be sent by the mailing house because the doctor was deceased or had no valid mailing address).
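As a sketch, this definition of the response rate can be written out as follows; the counts are purely hypothetical, for illustration only.

```python
def response_rate(responses, distributed, undeliverable):
    """Responses meeting the completion criteria (Section A plus at least
    one Section B answer), divided by the surveys that actually went out
    (distributed minus those the mailing house could not send)."""
    return responses / (distributed - undeliverable)

# Hypothetical counts for illustration only
rate = response_rate(4000, 9200, 80)  # 4000 / 9120
```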
Sample size
The sample for the trial included 18,247 GPs and non-GP specialists eligible to be invited to complete a survey in Wave 11. This included: i) 13,382 doctors who had previously completed at least one MABEL survey since 2008 (defined as continuing doctors); ii) a cohort of 1,862 doctors new to the sample frame in 2018; and iii) 3,003 doctors from a ‘boost’ sample, comprising a 10% random sample of those who had never responded to an invitation to participate in MABEL. The total sample size of 18,247 doctors is sufficient to detect a two-sided difference of at least two percentage points in the response rate (alpha 0.05, power 0.8). This assumes a response rate of 42.4% in the control group (an estimate from Wave 10 of MABEL).
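This detectable difference can be checked with a standard normal-approximation calculation for two proportions. The sketch below (in Python, although any statistical package would do the same) plugs in the trial's numbers:

```python
from statistics import NormalDist

def min_detectable_diff(p0, n_per_group, alpha=0.05, power=0.8):
    """Minimum detectable difference in proportions for a two-sided test
    with equal group sizes, using the normal approximation and assuming
    a control-group rate of p0."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # 0.84 for power = 0.8
    se = (2 * p0 * (1 - p0) / n_per_group) ** 0.5
    return (z_alpha + z_beta) * se

# 18,247 doctors split 1:1, control response rate 42.4% (Wave 10 estimate)
d = min_detectable_diff(0.424, 18247 // 2)  # about 0.0205, i.e. roughly two percentage points
```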
Randomisation
A parallel-arm design with 1:1 allocation was used, with 18,247 doctors randomly allocated to either the control or the intervention group. Allocation was stratified by doctor type (GP, specialist), continuing or new cohort, and boost sample to ensure the proportions of these groups of doctors were the same in the intervention and control groups. This is important because new doctors and boost-sample doctors are likely to have lower response rates, and specialists had higher response rates than GPs in previous waves. We tested a two-sided hypothesis as it was unclear, a priori, whether the intervention group would have a higher or lower response rate. Randomisation was performed using the sample command in Stata 15.1 statistical software.[15]
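Although the trial used Stata's sample command, the logic of a stratified 1:1 allocation can be sketched as follows; the field names and seed are hypothetical, for illustration only.

```python
import random

def stratified_allocate(doctors, strata_keys, seed=2018):
    """Illustrative stratified 1:1 randomisation: shuffle doctors within
    each stratum, then assign the first half to the intervention group.
    (Field names and the seed are hypothetical.)"""
    rng = random.Random(seed)
    by_stratum = {}
    for doc in doctors:
        key = tuple(doc[k] for k in strata_keys)
        by_stratum.setdefault(key, []).append(doc)
    allocation = {}
    for members in by_stratum.values():
        rng.shuffle(members)
        half = len(members) // 2  # odd strata put the extra doctor in control
        for doc in members[:half]:
            allocation[doc["id"]] = "intervention"
        for doc in members[half:]:
            allocation[doc["id"]] = "control"
    return allocation

# Toy example: four GPs and two specialists from the continuing cohort
doctors = [
    {"id": i, "type": t, "cohort": "continuing"}
    for i, t in enumerate(["GP", "GP", "GP", "GP", "specialist", "specialist"])
]
alloc = stratified_allocate(doctors, ("type", "cohort"))
```

Within each stratum the two arms end up the same size, so the proportions of GPs, specialists, and cohorts are balanced across arms by construction.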
Randomisation took place (by TT) before the first invitation for Wave 11 was mailed out in August 2018. Group allocation was kept in a separate electronic data file from the main mailout and the first reminder, so researchers handling the responses and reminders were blinded to group allocation during this process. The second reminder was prepared in late November 2018 by TT. The list of those eligible for a second reminder was merged with the file containing the intervention and control group identifiers to indicate who should receive an email, and with a separate file indicating whether doctors had an email address. AMPCo was sent a list of doctor identifiers indicating whether each doctor should be approached by mailed letter or by email.
Statistical methods
The analysis was conducted by BH, who was blinded to group allocation until after the analysis was complete and checked, and who was not involved in the randomisation or any data collection. Baseline characteristics of the intervention and control groups were compared with each other and with the population of medical practitioners in Australia. The main analysis was based on intention to treat, which estimates the average treatment effect (ATE).
The proportions responding in each group were compared using a 2x2 table and Pearson chi-squared test. An adjusted analysis was also conducted using multivariable logistic regression to estimate the probability of response in the two groups after adjusting for covariates, and sub-group analyses were conducted for GPs and other specialists. Covariates included age, gender, whether qualified overseas, quartiles of the socio-economic status of patients in each respondent’s postcode (measured using the Socio-Economic Indexes For Areas (SEIFA) Index of Relative Socio-Economic Disadvantage[19]), and the proportions of the population in the postcode over 65 years old and under 5 years old. Finally, the rurality of each GP’s work location was measured using the Modified Monash Model (MMM) classification:[20] major cities (MMM1); areas within 20km of a town with a population over 50,000 (MMM2); areas within 15km of a town with a population of 15,000 to 50,000 (MMM3); areas within 10km of a town with a population of 5,000 to 15,000 (MMM4); and all other rural and remote areas (MMM5-7), which were grouped with MMM4 for the analysis.
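The unadjusted comparison is a standard chi-squared test on a 2x2 table. A minimal sketch (in Python rather than a statistical package, with purely hypothetical counts rather than the trial's results) is:

```python
def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table, using the shortcut
    formula n*(ad - bc)^2 / (row and column totals). Rows are the trial
    arms; columns are responded / did not respond."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts for illustration only:
# intervention: 4000 responded, 5123 did not; control: 3800 and 5324
stat = pearson_chi2_2x2(4000, 5123, 3800, 5324)
significant = stat > 3.841  # chi-squared critical value, 1 df, alpha = 0.05
```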
A proportion of doctors allocated to the intervention and control groups will not have supplied a valid email address. Those in the intervention group who did not supply an email address could not adhere to their allocated group and were instead approached by mailed paper letter rather than email. The intention to treat analysis includes these doctors in the intervention group. This is appropriate because, in practice, not all doctors are willing to provide an email address, so the results are applicable to the population of doctors whether or not they are willing to supply one.
However, this will lead to an underestimate of the effect of the intervention among those who actually received an email. In addition to the intention to treat analysis, we therefore calculate the average treatment effect on the treated (ATET), which compares those who had a valid email address in the intervention group with those who had a valid email address in the control group.
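The distinction between the two estimands can be sketched as follows; the records and field names below are toy data for illustration, not the trial dataset.

```python
def arm_difference(records, email_only=False):
    """Difference in response rates, intervention minus control.
    With email_only=False this is the ITT (ATE-style) comparison over
    everyone; with email_only=True it restricts both arms to doctors
    with a valid email address, as in the ATET comparison."""
    rates = {}
    for arm in ("intervention", "control"):
        group = [r for r in records
                 if r["arm"] == arm and (r["has_email"] or not email_only)]
        rates[arm] = sum(r["responded"] for r in group) / len(group)
    return rates["intervention"] - rates["control"]

# Toy records with hypothetical field names
records = [
    {"arm": "intervention", "has_email": True,  "responded": False},
    {"arm": "intervention", "has_email": True,  "responded": True},
    {"arm": "intervention", "has_email": False, "responded": True},
    {"arm": "control",      "has_email": True,  "responded": True},
    {"arm": "control",      "has_email": True,  "responded": False},
    {"arm": "control",      "has_email": False, "responded": False},
]
ate = arm_difference(records)                    # 2/3 - 1/3
atet = arm_difference(records, email_only=True)  # 1/2 - 1/2
```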