Underpinning theory and considerations for the process evaluation
Complex interventions are now presented as interactions of theory, context and implementation, rather than as a set of mechanisms of change across multiple domains [32]. Our process evaluation, in line with MRC guidance [33], is broadly informed by the principles of realist evaluation [34,35]. Realist evaluation is an evaluation framework that attends to what it is about an intervention that works, for whom it works, and in what circumstances. It examines phenomena in relation to how context, mechanism and outcome are configured as patterns [34]. All interventions are underpinned by theoretical assumptions about change, and this may involve more than one theory, especially in complex interventions. We held a workshop to discuss and agree conceptual models, creating a multi-dimensional framework that aims to make the different elements visible, maximising the field of vision in terms of breadth and depth and in turn enabling fuller engagement with the complex systems in which complex interventions are undertaken [36]. The conceptual framework for the TANDEM process evaluation can be seen in Figure 2.
[Figure 2 about here]
Theory informs the TANDEM process evaluation in three main ways:
- Realist evaluation principles of context, mechanism, outcome underpin the overall approach to our process evaluation design (understanding how the intervention works)
- Programme theory using psychological theory is set out as a logic model linking causal mechanisms (theory of change)
- Normalization process theory (NPT) will be used to consider implementation
In addition, the process evaluation, like the TANDEM intervention and the wider study, is underpinned by a strong patient-centred ethos [37].
An intervention can be considered as an event in a dynamic complex system [38], and viewing the intervention dynamically shapes how its reach and effectiveness can be improved. There has been an emphasis on engaging from the early stages of study development with those who would receive the intervention (patients and carers) and those who would deliver it (health care professionals), in order to optimise participant engagement with the study tools and the relevance of the intervention to real-world implementation. Patient and public involvement in TANDEM, including the process evaluation, can be seen in Figure 3. The trial participants (patients, carers, health care professionals) are treated as active agents who interact with the intervention mechanisms within a specified set of system conditions and shape the systems in which they are nested.
[Figure 3 about here]
When considering outcomes we aim to understand how participants interact with the intervention mechanisms so that change is generated. For example, people living with COPD often experience multimorbidity and have complex lives that affect their experience of care [39,40], meaning that it can be difficult to isolate the impact of the COPD. Nor is it possible to separate the facilitators’ experiences of delivering the intervention from the contexts in which they work, including their professional identity. We therefore need to describe contexts, how contexts influence system change, and how the intervention modifies the context. Potentially salient contextual influences on effectiveness and implementation that we have identified for the TANDEM trial are: facilitators’ professional practice; facilitator training; the patient–facilitator relationship; facilitator supervision; patients’ (and, where relevant, carers’) complex lives; policy-level priorities; and organisational stakeholders’ perspectives on the delivery context. These will be explored using the qualitative methods described below.
Explication of conceptual frameworks is key to guiding the post-trial implementation process [41]; however, although formal and informal theory in implementation studies is viewed as important, it is often poorly described and under-recognised [42]. NPT challenges researchers to consider how their intervention might become embedded or ‘normalized’ (or not) into routine practice [41,43]. In NPT, the work involved in implementation is investigated by focusing on: how people make sense of the intervention (coherence); how people participate in the intervention (cognitive participation); how they act collectively (collective action); and how they reflect on and monitor what is happening (reflexive monitoring) [44].
Aims and objectives
The overall aim of our process evaluation is to describe and understand the processes by which the trial is conducted (specifically including fidelity and acceptability to recipients and professionals) and to consider the effect of these on the outcomes of the study. The process evaluation will also inform the implementation of the TANDEM intervention if the trial outcome is positive, or assist in the interpretation of findings if it is negative. The objectives address acceptability, fidelity and implementation.
Specific objectives are:
1. To assess the acceptability of the intervention to patients and carers, including consideration of: content (in session, home practice); therapeutic alliance; and practicalities (location, timing).
2. To assess the acceptability of the intervention to facilitators and supervisors delivering the intervention.
a. Patient-facing CBA sessions including content, structure, logistics, telephone support and integration of components.
b. Facilitator training including content, logistics, supervision, perceived confidence to deliver the CBA sessions.
c. Management of workload.
d. Supervisors’ training and workload.
3. To monitor the delivery of the intervention through assessment of fidelity.
a. Was the facilitator training delivered as intended with respect to professional competence?
b. Were the CBA sessions delivered as intended with respect to adherence and competency?
4. To consider the feasibility of implementing the intervention with respect to:
a. The recruitment, training and retention of facilitators and organisation of clinical supervision.
b. Rates of completed delivery of at least two CBA sessions (pre-determined estimate of minimal clinically important “dose” of intervention) per patient.
c. Numbers of patients seen by facilitators and numbers of sessions delivered to patients and reasons for intervention non-attendance/ no delivery of sessions.
d. Intervention drop-out or disruption to delivery and reason for drop out or disruption.
5. To explore the experiences and perspectives of patients, carers, facilitators, and supervisors regarding the intervention and post-trial implementation.
a. What are patients’, carers’, facilitators’ and supervisors’ experiences of the intervention, and what are their views about its potential impact on health and quality of care?
b. What are the barriers and facilitators to implementation and how do these vary according to context and/or other factors?
c. Were there any unexpected consequences?
6. To explore the views of organisational stakeholders regarding post-trial implementation of the intervention.
a. What are the barriers and facilitators to implementation and how do these vary according to context and/or other factors?
b. What resources and partnerships are necessary for implementation?
c. Are adaptations to the intervention necessary depending on the clinical context in which it is delivered, e.g. through primary care, secondary care or solely via PR services?
Research design and methods
This is a mixed-methods study using quantitative and qualitative methods. We are collecting quantitative and qualitative data from the intervention arm of the trial (and quantitative data from the control arm). The process evaluation lead (MK) is not involved in the design or delivery of the trial itself. RS, AB, KM and VR are involved in trial recruitment and data collection. The different types of data collected are described below and are separated for practical purposes but will be integrated and drawn upon in complementary ways in order to address the process evaluation aims and objectives. For example, qualitative data from patient and facilitator interviews may be used to triangulate with findings from the fidelity analysis.
Quantitative process data
Quantitative data are being collected throughout the multi-level intervention in order to understand whether delivering the intervention was feasible, to assess the workload required for delivery, and to identify how delivery may have varied from the trial protocol. Structured data will be presented descriptively and will be collected via:
- Facilitator recruitment, training attendance and training completion rates.
- Facilitator retention and caseloads of facilitators.
- Data logs on uptake, attendance, cancelled/re-scheduled appointments and completion of CBA sessions by patients.
- CBA content logs at the end of each face-to-face patient session and analysis of audio-recordings of sessions.
- Subsequent attendance at PR (including number of PR sessions attended).
- Telephone support sessions by facilitator.
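As an illustration, structured data of this kind lend themselves to simple descriptive summaries. The following is a minimal sketch; the record layout, field names and figures are hypothetical, not drawn from the trial data logs:

```python
from collections import Counter, defaultdict

# Hypothetical session log: one record per scheduled CBA session.
# All field names and values are illustrative only.
session_log = [
    {"patient": "P01", "status": "completed"},
    {"patient": "P01", "status": "completed"},
    {"patient": "P02", "status": "completed"},
    {"patient": "P02", "status": "cancelled"},
    {"patient": "P03", "status": "no_show"},
]

# Counts and proportions per session status.
status_counts = Counter(rec["status"] for rec in session_log)
total_sessions = len(session_log)
for status, n in sorted(status_counts.items()):
    print(f"{status}: {n}/{total_sessions} ({n / total_sessions:.0%})")

# Proportion of patients receiving the minimal "dose"
# (at least two completed CBA sessions, cf. objective 4b).
completed_per_patient = defaultdict(int)
for rec in session_log:
    if rec["status"] == "completed":
        completed_per_patient[rec["patient"]] += 1

patients = {rec["patient"] for rec in session_log}
dosed = sum(1 for p in patients if completed_per_patient[p] >= 2)
print(f"minimal dose reached: {dosed}/{len(patients)} patients")
```

Summaries of this shape (counts and proportions per patient and per status) are what the descriptive presentation of the data logs amounts to in practice.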
Qualitative data collection
Qualitative research can explore complex phenomena and areas less amenable to quantitative research [45], and aid understanding of the processes involved in both intervention and evaluation. Central to our process evaluation is collecting qualitative interview data that will allow us to describe, review and interpret the trial processes, including mechanisms and contexts, by gaining in-depth understanding of the perspectives of research participants. The qualitative data are being collected via three sub-studies: experiences and views of patients and carers on receiving the intervention (RS, AB, KM); experiences and views of facilitators and supervisors on training and delivering the CBA sessions (SN, AM); and views of organisational stakeholders (clinical commissioners, GPs, PR specialists, nurses, psychologists) on the implementation context (VW, VR). To reduce the risk of bias, interviewers are not interviewing participants they have recruited to the study. Details of the qualitative data collection and outline topic guides can be seen in Table 1.
[Table 1 about here]
Interviews are being undertaken by telephone or in person depending upon the preference of the research participant and the COVID-19 pandemic restrictions. Topic guides for the interviews with patients, carers, facilitators and supervisors are based on those used in the developmental and pilot phases. The topic guide for the organisational stakeholders draws on issues arising in discussions within the trial team. With a view to gaining an understanding of issues for implementation, topic guides have also been informed by the key NPT constructs of coherence, cognitive participation, collective action, and reflexive monitoring [44]. Through this user-centred approach we expect to gain insight into contextual influences and the facilitators and barriers to post-trial implementation. The NPT concepts will support our analysis and help to identify implementation challenges and areas where further evaluation is needed. The topic guides provide structure and aim to elicit participant experiences and perspectives in the context of their lives and work roles. Open questions are used to explore issues in the terms of participants, and to allow for unexpected issues to emerge.
Sampling frames have been drawn up based on a purposive sampling approach aiming for maximum variation [46] to gain a full range of views. Sampling is being reviewed during data collection and data analysis so that, if unexpected themes emerge, additional perspectives can be sought. If saturation is reached, data collection will stop [47].
Data analysis
Quantitative process data will be analysed and presented using simple descriptive statistics (e.g. counts and proportions). Qualitative data will initially be analysed thematically using an inductive approach and constant comparison [48]. Group discussion in research teams improves the rigour and quality of qualitative research [49], so our analysis will be a reflexive, iterative process involving review and multidisciplinary discussion. NVivo 12 software will be used to assist in the organisation and analysis of the data. A thematic narrative will be constructed for each sub-study.
Our approach is to undertake theoretically informative research in addition to theoretically informed research [50]. Further analysis will therefore be undertaken, with second level themes identified and explored in the light of relevant theory and research literature [50,51]. Data will also be interpreted drawing upon NPT resources such as the NPT toolkit [52] to assist interpretation of the data regarding post-trial implementation.
Fidelity assessment
Assessment of intervention fidelity follows the American Behaviour Change Consortium framework [53] and aims to find out whether the intervention was delivered as intended and the quantity (dose) of intervention implemented. Should the trial outcome be negative, fidelity assessment makes it possible to understand whether it is the intervention that is ineffective or whether delivery failed [30]. Complex interventions usually undergo some tailoring/adaptation when delivered in different contexts. Capturing what is delivered in practice, with reference to the theory of the intervention, can enable evaluators to understand the core elements and where more flexibility may be allowed. Analysis focuses on what was delivered and how it was delivered, including training and support, communication and logistics, and how these structures interact with facilitators’ attitudes and circumstances to shape the intervention.
Fidelity will be assessed at two levels:
- The delivery of the three-day facilitator training was recorded. Preliminary assessment of facilitators’ skills was made on days two and three of the training, where possible, using the IAPT low intensity practitioner assessment manual (based on Blackburn’s Cognitive Therapy Scale [54]) by at least two trainers.
- Audio-recordings of all intervention sessions will be made. A random 25% sample of recorded sessions across all CBA interventions, and a smaller sample of 10% of entire CBA interventions, will be coded by LS and VR with respect to facilitator adherence to the manual and competence in trained skills. The fidelity assessment framework is reported in detail by Steed et al [55].
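The session-level sampling step could be drawn along these lines. This is a sketch under assumptions: the recording identifiers and total are hypothetical, and the published fidelity framework [55] governs the actual procedure:

```python
import random

# Hypothetical list of audio-recording identifiers, one per CBA session.
recordings = [f"session_{i:03d}" for i in range(1, 201)]  # e.g. 200 recordings

# A fixed seed keeps the sample reproducible for the audit trail.
rng = random.Random(2021)

# Draw the 25% random sample of recorded sessions for fidelity coding.
sample_size = round(len(recordings) * 0.25)
fidelity_sample = rng.sample(recordings, sample_size)

print(f"{len(fidelity_sample)} of {len(recordings)} sessions selected for coding")
```

Drawing the sample programmatically with a recorded seed makes the selection transparent and repeatable, which supports the audit-trail approach described under data analysis.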
Integrating results of analysis
The different evaluation elements will be reported separately and then integrated before the trial results are known. The main trial findings will be analysed independently of the process evaluation findings; once both analyses are complete, they will be combined.
We aim to produce a high-quality, integrated evaluation of the trial processes informed by a clear conceptual framework. We will ensure rigour across our analyses by being transparent and maintaining a clear audit trail of the procedures used. We are addressing validity by providing evidence to support our interpretations and providing context, and by undertaking a comprehensive analysis of each data set. Workshops with patients and carers will be set up, depending on their preference and convenience, to consider interpretation of the trial findings in the light of the process evaluation analysis. Workshops will also be held with facilitators to discuss findings from the process evaluation and explore issues around implementation, including CBA training and engaging patients.