The evidence available to inform many routine process decisions in randomised trials is thin or weak. This includes the evidence on how best to recruit participants1, retain them2, collect their data3 or include them in decisions about the trial4. While evidence gaps in, say, the clinical management of diabetes might be expected to lead to a sustained and substantial research effort to fill them, similar effort has not materialised for trial methods research. Recruitment remains a major concern5,6 despite more than 25,000 new trials opening every year and needing to recruit participants7. Once participants are recruited, there is also little evidence available to inform decisions about how to encourage them to remain in the trial and, for example, to attend face-to-face measurement visits, which are a vital part of most trials2. Further, there is almost no evidence base to inform trial management decisions, including how to select sites, whether visiting them in person is worth it, or how to train staff8.
The lack of trial process evidence contributes to research waste – for example through poor recruitment, retention and data quality – and has been a feature of medical research for decades9, with some suggesting that up to 85% of medical research spending is wasted10. However, much of the waste is avoidable11 and research funders recognise the need to avoid it12.
Trial Forge (http://www.trialforge.org) is an initiative that aims to improve the efficiency of trials, particularly by filling gaps in trial process evidence13. One way of improving the evidence base for trial process decisions is to do a Study Within A Trial (SWAT)14, which is a ‘...self-contained research study that has been embedded within a host trial with the aim of evaluating or exploring alternative ways of delivering or organising a particular trial process’15. For example, a SWAT could evaluate a new way of presenting information to potential participants as a way of improving trial retention, perhaps by being clearer about what taking part in the trial entails. Half of potential participants could be randomised to receive the new information while the other half receive the standard information. The effect of the new information on trial retention could be measured at the end of the trial, or possibly part-way through if the trial has a long duration. Other interventions that could be evaluated in a SWAT include remote site training compared to face-to-face training, sending participants thank-you letters after attending trial visits, and sending birthday cards to children in paediatric trials to improve retention. Any improvement arising from using an alternative approach for a particular process is likely to be modest, but the combined effect of small improvements across many processes may well be substantial.
There is a growing repository of protocols for SWATs (http://bit.ly/20ZqazA) and Madurasinghe and colleagues have developed a reporting standard for recruitment SWATs, which are a priority for trial methodology research16-18. Moreover, major funders are taking the need for SWATs seriously as a vehicle for more efficient use of public resources. For example, the UK’s National Institute for Health Research Health Technology Assessment programme (NIHR HTA) now highlights SWAT funding in all its trial funding calls, and SWATs were the topic of a recent ‘HTA Director’s Message’ (https://www.youtube.com/watch?v=PoIE6xxK-pA). The Health Research Board Trial Methodology Research Network (HRB-TMRN) in Ireland also funds SWATs19 and the Health Research Board encourages investigators to include a SWAT when applying for both feasibility and definitive trial funding20.
An important question to ask when thinking about undertaking SWATs is how to prioritise interventions for their first evaluation in a SWAT. A good example of a prioritisation process for unanswered questions for trial recruitment is the PRioRiTY project18 (https://priorityresearch.ie). PRioRiTY 2 does the same for trial retention21.
The scope of the work described here is what happens after the first evaluation. When evidence is available for an intervention or some aspect of the trial process, how should one decide if further evaluation is needed in another SWAT? Deciding whether a particular intervention needs further evaluation will always be a judgement. The objective of this Trial Forge guidance is to provide a framework for making this an informed judgement based on explicit criteria that most trialists and methodologists can agree with. We take a pragmatic stance about evidence generation: trial teams need enough evidence to know whether something is worth doing, no more and no less. The aim is to avoid wasting research effort evaluating interventions for which there is already good enough evidence for decision-making, allowing attention to re-focus on those interventions where important uncertainty still exists. This paper presents criteria for how to do this for SWATs that use randomised allocation to compare different interventions.
The guidance is written from the perspective of whether a single research team should do a further evaluation of an intervention in a single host trial, as this is currently the most likely approach to doing a SWAT. Although we take this single-SWAT perspective, we expect the guidance to apply equally well to SWATs done as part of a coordinated package of evaluations.
Proposed criteria for making informed judgements about further SWAT evaluation
The main users of SWAT results will be members of trial teams. Funders of SWATs and trials are also likely to be interested. To make informed judgements, these users need to know what the accumulating evidence is for the effect of the SWAT intervention on one or more relevant trial process outcomes (e.g. recruitment, retention), as well as the certainty of that evidence. They will want to know whether the evidence comes from evaluations done in contexts similar to their own. Finally, they will want to know how finely balanced the advantages and disadvantages of using the intervention are, both for trial participants and the host trial.
Given the above, the five criteria we propose for deciding whether a further SWAT evaluation is needed are listed in Box 1. The aim of applying these criteria is to ensure that the need for a new evaluation is considered explicitly in light of what is already known about the intervention. Generally speaking, the more criteria that are met, the more likely we are to conclude that a new evaluation in a SWAT is appropriate. Conversely, if none of the criteria are met it is unlikely that a new evaluation would be appropriate.
Box 1: Should we do a further evaluation of the intervention in a SWAT?
The five proposed criteria for deciding whether the intervention needs another evaluation in a SWAT. The more criteria that are met, the more likely we are to conclude that further evaluation in a SWAT is appropriate.
- GRADE: the GRADE22 certainty in the evidence for all key outcomes is lower than ‘high’.i
- Cumulative evidence: the cumulative meta-analysis shows that the effect estimate for each outcome essential to make an informed decision has not converged.ii, iii
- Context: the range of host trial contexts evaluated to date does not translate easily to the context of the proposed SWATiv. For the proposed SWAT, consider PICOT23:
- P – is the population in the host trial so different from those already included that the current evidence does not provide sufficient certainty?
- I – are the health interventions in the host trial so different from those already included that the current evidence does not provide sufficient certainty?
- C – is the comparator in the host trial so different from those already included that the current evidence does not provide sufficient certainty?
- O – is the SWAT outcome(s) so different from those used in the existing evaluations that the current evidence does not provide sufficient certainty?
- T – in the time since the existing evaluations were done, have regulatory, technological or societal changes made those evaluations less relevant?
- Balance – participants: the balance of benefit and disadvantage to participants in the host trial and/or the SWAT is not clearv.
- Balance – host trial: the balance of benefit and disadvantage to the new host trial is not clearvi.
- i. A GRADE assessment of ‘high’ means that we are confident that the true effect lies close to the estimate of effect coming from the cumulative meta-analysis24. In Cochrane’s deliberations as to when to close a Cochrane Review (https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.ED000107/full), the collaboration chose not to require ‘high’ GRADE certainty in the evidence because it was felt that this may not always be achievable. Although we recognise the pragmatic nature of this, we recommend ‘high’ in our criteria because SWATs are usually simple studies for which it should be possible to generate high certainty evidence. We will, however, keep this criterion under review to consider whether it needs relaxing.
- ii. This is a judgement that depends on the behaviour of the effect estimates and on whether the confidence intervals include the threshold for an important benefit (or disadvantage). For example, if there is drift in the effect estimates of a cumulative meta-analysis but the confidence intervals around the estimates are consistently above what you think is an important benefit (or below a relevant disadvantage), then the cumulative meta-analysis can be judged to have converged despite movement in the effect estimates. For more on GRADE see http://www.gradeworkinggroup.org.
- iii. A cumulative meta-analysis requires the same outcomes to have been measured in the same way in the studies to be combined. Most SWAT protocols specify just one or perhaps two outcomes, which reduces the scope for different outcomes between evaluations. Tighter specification of outcomes in SWAT protocols would help even more (e.g. retention sounds simple but could mean the proportion of participants who remain in the trial, the proportion who return a form, or the proportion who fully complete all forms). Core outcome sets for trial processes may help and this is being done in ELICIT for interventions to improve informed consent24.
- iv. This is to provide reassurance about the applicability of the result to different types of trials. Care is needed to avoid a default position of insisting on an evaluation in every conceivable context. In other words, is there any reason to believe that the intervention would not work in your context given the contexts already studied? It is possible that evidence from SWATs will eventually splinter off to focus specifically on certain contexts but, for now, we suggest pooling evaluations of the same intervention because there are so few SWAT evaluations of any intervention and this pooling will provide a basic foundation on which to build.
- v. Where there may be no conceivable benefit or disadvantage for participants, they should be considered as balanced.
- vi. A benefit might be that the host trial recruits faster, or that its data quality is improved. Examples of disadvantages might be added costs to the host trial, or a new task introduced into the workload of trial managers.
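To make the cumulative evidence criterion concrete, the sketch below runs a fixed-effect (inverse-variance) cumulative meta-analysis of risk differences, re-pooling after each SWAT evaluation and flagging when the 95% confidence interval lies wholly above a chosen benefit threshold. It is a minimal illustration only: the event counts and the 5% absolute-difference threshold are hypothetical, and a real synthesis might prefer a random-effects model.

```python
import math

def risk_diff(events_i, n_i, events_c, n_c):
    """Risk difference and its variance for one two-arm evaluation."""
    p_i, p_c = events_i / n_i, events_c / n_c
    var = p_i * (1 - p_i) / n_i + p_c * (1 - p_c) / n_c
    return p_i - p_c, var

def cumulative_meta(evaluations, benefit_threshold):
    """Fixed-effect (inverse-variance) cumulative meta-analysis.

    After each evaluation is added, record the pooled risk difference,
    its 95% CI, and whether the CI sits wholly above the threshold
    judged to be an important benefit (one signal of convergence).
    """
    results, w_sum, wx_sum = [], 0.0, 0.0
    for events_i, n_i, events_c, n_c in evaluations:
        rd, var = risk_diff(events_i, n_i, events_c, n_c)
        w = 1.0 / var          # inverse-variance weight
        w_sum += w
        wx_sum += w * rd
        pooled = wx_sum / w_sum
        se = math.sqrt(1.0 / w_sum)
        lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
        results.append((pooled, lo, hi, lo > benefit_threshold))
    return results

# Hypothetical (events, n) counts for intervention and control arms:
evaluations = [(60, 200, 40, 200), (150, 500, 110, 500)]
for pooled, lo, hi, converged in cumulative_meta(evaluations, 0.05):
    print(f"RD={pooled:.3f} (95% CI {lo:.3f} to {hi:.3f}); "
          f"CI wholly above 5% threshold: {converged}")
```

With these made-up counts the confidence intervals still straddle the 5% threshold after both evaluations, so the cumulative evidence criterion in Box 1 would be met and a further evaluation could be justified.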
To illustrate the use of these criteria, we have applied them to examples from the Cochrane Review on strategies to improve trial recruitment1 and the Cochrane Review on strategies to improve trial retention2.
Example 1: Telephoning non-responders to trial invitations
Only two interventions in the 2018 version of the Cochrane Review for trial recruitment1 have both high certainty for the evidence and a potential for widespread applicability. One of these is telephoning people who do not respond to postal invitations to take part in a trial, which is used in this example. (The other relates to optimising the patient information leaflet.) The Cochrane Review notes that the rating of high certainty is only for trials with low underlying recruitment of less than 10% of eligible participants. If the evidence is to be applied to trials with higher underlying recruitment, the review authors suggested that the GRADE rating be reduced from high to moderate because of indirectness.
A trial team that includes people with lived experience of the illness or condition targeted is likely to consider the following information essential when deciding whether a further evaluation of telephone reminders should form part of their recruitment strategy:
i) effect on recruitment
ii) participant irritation at receiving the telephone call
Applying the five criteria
Table 1 summarises the results of the two telephone reminder trials and the overall estimate of effect.
Applying the criteria in Box 1:
- GRADE. Data are available for recruitment only (2 trials, n=1450). The GRADE certainty in the evidence for the two trials in the review is high but is considered moderate for trials that do not have low (<10%) underlying recruitment. Criterion partially met (the GRADE certainty in the evidence for all essential outcomes is lower than ‘high’).
- Cumulative evidence. Data are available for recruitment only. There are only two trials and it seems too early to claim the cumulative meta-analysis has converged. Criterion met (the effect estimate for each essential outcome has not converged).
- Context. The PICOT for the available evidence is:
- P – One study was done in Norway in 2002/3 and involved people aged 16 to 66 who were sick-listed for more than seven weeks due to non-severe psychological problems or musculoskeletal pain. The second study was done in Canada in 2010 and involved people aged 50 to 70 years from family practice lists who were eligible for colorectal cancer screening.
- I – The host trial intervention in the Norwegian study was solution-focused sessions led by psychologists that were one-on-one or in groups and aimed to help people get back to work. The host trial interventions in the Canadian study were one of virtual colonoscopy, optical colonoscopy or fecal occult blood testing.
- C - The host trial comparator in the Norwegian study was usual care: written information from the social security office. The Canadian host trial was doing a head-to-head evaluation of three screening methods so the three interventions mentioned above were also the comparators.
- O – Both studies measured recruitment to the host trial. Both host trials had low underlying recruitment.
- T – Mobile telephones have replaced home-based phones for many people and neither study explicitly includes mobile telephones.
Considering the above leads to Criterion partially met (a new evaluation is likely to contain several elements in the PICOT that are importantly different from those in the two existing evaluations).
- Balance – participants. There is little or no direct benefit to participants, although some may like being reminded about the trial. One potential disadvantage is that some participants may be irritated by the reminder call, but what proportion would be irritated is unclear. Criterion met (the balance of benefit and disadvantage to participants in the new host trial and/or SWAT is not clear).
- Balance – host trial. The benefit to the host trial is a small increase in recruitment if underlying recruitment is low, but it is unclear what the benefit would be if underlying recruitment were higher. There is a potential disadvantage to the host trial of over-burdening trial staff with making the reminder telephone calls, but the size of this disadvantage is unclear. Criterion met (the balance of benefit and disadvantage to those running the host trial is not clear).
Considering the responses across all five criteria leads us to conclude that further evaluation of telephone reminders is needed, especially where underlying recruitment is anticipated to be higher than 10%. The views of people with lived experience of the conditions targeted by host trials on receiving telephone reminder calls should be sought in future evaluations. More information on cost and the potential disadvantages for the host trial would also be welcome, as would evaluations that used mobile telephones.
Figure 1 shows how the evidence with regard to telephone reminders for recruitment might be presented on the Trial Forge website. The cumulative meta-analysis in this summary shows four decision thresholds (absolute differences of 0%, 5%, 10% and 15%) that trialists can use when deciding whether they want to use the intervention in their own trial based on the current evidence. A trialist looking for a 10% or better increase in recruitment would probably decide that telephone reminders are not worth the effort, especially if underlying recruitment is not expected to be low, while a trialist expecting very low underlying recruitment might decide that any increase, even a small one, is worth having and plan their resource use accordingly. In both circumstances, the trialists would need to speculate on the balance of benefit to disadvantage.
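This kind of threshold-based reading can be sketched as a small decision aid. The pooled estimate used below (+6% recruitment, 95% CI 2% to 10%) is hypothetical and simply stands in for whatever the current cumulative meta-analysis shows; the function reports where the confidence interval sits relative to the smallest gain a trialist considers worth the effort.

```python
def decision(ci_lower, ci_upper, required_gain):
    """Compare the 95% CI of a pooled absolute risk difference with
    the smallest gain a trialist considers worth the effort."""
    if ci_lower >= required_gain:
        return "use: the evidence supports at least the required gain"
    if ci_upper < required_gain:
        return "skip: the required gain looks unlikely"
    return "uncertain: the CI straddles the required gain"

# Hypothetical pooled estimate: +6% recruitment (95% CI 2% to 10%),
# read against the four thresholds in the summary.
for threshold in (0.00, 0.05, 0.10, 0.15):
    print(f"{threshold:.0%} threshold -> {decision(0.02, 0.10, threshold)}")
```

Under these assumed figures, only a trialist content with any increase at all (the 0% threshold) gets a clear answer; the 5% and 10% thresholds fall inside the confidence interval, mirroring the uncertainty discussed above.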
Example 2: monetary incentives to increase response rates to trial questionnaires
The 2013 Cochrane Review of interventions to improve trial retention2 found that monetary incentives seem to improve response rates to trial questionnaires. A trial team that includes people with lived experience of the illness or condition targeted is likely to consider the following information essential when deciding whether a further evaluation of financial incentives should form part of their retention strategy:
i) effect on questionnaire response rate (retention)
ii) participant irritation at receiving a small, unsolicited gift
Applying the five criteria
Table 2 summarises the results of the three monetary incentives trials and the overall estimate of effect.
Applying the criteria in Box 1:
- GRADE. Data are available for questionnaire response rates only (3 trials, n=3166). The overall GRADE certainty in the evidence is moderate. Criterion met (the GRADE certainty in the evidence for all essential outcomes is lower than ‘high’).
- Cumulative evidence. Data are available for questionnaire response rates only. There are only three trials and it seems too early to claim that the cumulative meta-analysis has converged. Criterion met (the effect estimate for each essential outcome has not converged).
- Context. The PICOT for the available evidence is:
- P – Two trials were done in the UK, one in 2002/3 and the other in 2007/8. The first involved women who had had a baby. The second UK study involved people over the age of 18 who attended emergency departments with a whiplash injury of less than six weeks duration. A third trial was done in the US in 2001 and involved smokers who wanted to stop.
- I – The host trial intervention in the 2002/3 UK study was an antibiotic, while in the 2007/8 UK study the host trial intervention was a book of advice about whiplash, with that advice being reinforced depending on the persistence of symptoms. The host trial intervention in the US study was a community-based program of public education, advice from health care providers, work-site initiatives and smoking cessation resources.
- C - The host trial comparator in the 2002/3 UK study was placebo, while in the 2007/8 UK study it was usual whiplash advice. The host trial comparator in the 2001 US study was no community-based smoking cessation program.
- O – All studies measured retention in the host trial. All three host trials had underlying retention below 50%.
- T – The most recent of these studies was done in 2007/8 so inflation and other societal changes may affect the attractiveness of the amounts paid.
Considering the above leads to Criterion partially met (a new evaluation is likely to contain several elements in the PICOT that are importantly different from those in the three existing evaluations).
- Balance – participants. There is modest financial benefit to participants who receive the incentive. The potential disadvantage of a participant feeling coerced to provide questionnaire data seems low given the size of financial incentive being offered in these trials (US$10 or less), although whether these small amounts are perceived as insulting or irritating is unclear. Criterion partially met (the balance of benefit and disadvantage to participants in the new host trial and/or SWAT is not clear).
- Balance – host trial. The benefit to the host trial is a modest increase in response rates. The potential disadvantage to the host trial of the costs of providing the incentives is quantifiable. Workload may be increased (e.g. someone has to manage vouchers or other incentives) but this is unlikely to be much larger than the work needed anyway to send out questionnaires. Criterion not met (the balance of benefit and disadvantage to those running the host trial is clear and can be estimated for each trial depending on the size of the incentive).
Considering the responses across all five criteria leads us to conclude that further evaluation of financial incentives is needed with priority given to evaluation in trials expected to have underlying retention above 50%. The views of people with lived experience of the conditions targeted by host trials on receiving small, unsolicited payments should be sought in future evaluations. Future randomised evaluations should ensure that they are assessed as at low risk of bias on the Cochrane Risk of Bias tool25 to move the GRADE assessment from moderate to high.
Figure 2 shows how Trial Forge might summarise the evidence with regard to monetary incentives for retention.