The overall design of the ORBIT process evaluation is a mixed-methods study combining purposively sampled qualitative data with quantitative data from the trial. It will involve semi-structured interviews with children, parents, therapists, supervisors, and referring clinicians, as well as analysis of online feedback from participants and data from the online platform, such as total therapist time, number of chapters viewed, and number of log-ins.
The schedule of the ORBIT process evaluation procedures is displayed in Figure 2. A populated Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist is provided in Additional File 1 [29], and a CONSORT-EHEALTH (Improving and Standardizing Evaluation Reports of Web-based and Mobile Health Interventions) checklist is provided in Additional File 2 [30].
Ethical approval for the process evaluation was obtained from the North West - Greater Manchester Central Research Ethics Committee as part of the ORBIT trial (REC: 18/NW/0079).
Qualitative data collection
Qualitative data will be collected by interviewing participants in the BIP TIC intervention (both CYP and parents, either separately or as a dyad), therapists, supervisors, and referring clinicians. Interviews with therapists and supervisors involved in the ORBIT trial will be conducted early in the trial and near the end of recruitment in order to capture their experiences at different time points. All interviews will be conducted either by telephone or via the WebEx videoconferencing application. In addition, at the end of treatment participants are asked questions within the BIP TIC platform, including what the most important thing they have learnt from treatment is, how the treatment has helped, whether the treatment caused them any difficulties, and any other comments they may wish to add. Once these data are entered into BIP TIC, they will be exported to an Excel spreadsheet and subjected to content analysis.
Sampling and recruitment for interviews
Children and parents
In line with previous literature [31,32], four semi-structured interview schedules were developed (see Additional File 3). The child and parent interview schedules were drafted and then revised by the main researcher and three academics. Questions include: (a) how they found out about the ORBIT trial; (b) why they took part; (c) their initial expectations; (d) their views of the content, structure, and the different chapters of the online program; (e) what impact the therapy had, if any, on their tics; (f) what they found most and least helpful; (g) barriers to participation; (h) how they felt about communicating with their therapist; (i) whether they would alter anything about the program; and (j) their recommendations for improvement of the interventions and their overall experience of participating in the trial.
The revised drafts were sent to two dyads of the Patient and Public Involvement (PPI) group—including two children with tics—for feedback and were revised accordingly.
All interviews will be carried out with CYP and parents of CYP following completion of the intervention at the three-month (primary end-point) follow-up assessment in the main trial. Recruitment for the interviews began in August 2018 through the following methods:
- Following completion of the primary end-point, the researcher conducting the follow-up assessment asks participants if they are willing to be contacted about taking part in an interview. If the participant agrees, the researcher informs the process evaluation researcher who makes contact with the family.
- Researchers at both QMC and GOSH arrange a convenient date, time, and method of interview with participants who agree following their primary end-point follow-up assessment.
- A proportion of the participants are to be contacted by telephone following their primary end-point assessment by the main researcher of the process evaluation.
Participants will only be contacted if they gave explicit written consent to participate in an interview for the ORBIT trial and, for children under 16, assent was obtained alongside parental consent (see Additional File 4). Participants will be purposively sampled with the intention of collecting data from a diverse cohort to obtain varying views on the intervention. This will include ensuring that perspectives from a range of ages, genders, ethnicities, and levels of interaction with the intervention are voiced. We anticipate that this sampling strategy will result in sufficient heterogeneity to provide examples of both relatively poor and relatively good adoption, delivery and maintenance, and will allow us to identify barriers and facilitators to implementation and to generate hypotheses about factors that may be associated with differing outcomes for CYP in the intervention arm.
The target for participant interviews is more than 20 CYP (n > 20) and more than 20 parents of CYP (n > 20). This should ensure that the data reach saturation [33] and capture a diversity of views.
Therapists
The therapist interview schedules were drafted and then revised by the main researcher and three academics, with additional input from a therapist and a clinical researcher with specific expertise in the field. Therapist questions include: (a) their role on the ORBIT trial; (b) how they found out about ORBIT and why they got involved; (c) what specific skills they felt a therapist needed for the program; (d) any training needs identified; (e) how they managed ORBIT around other commitments; (f) their experiences of receiving/giving supervision sessions; (g) whether the therapy is being delivered as planned; (h) their experiences of interacting with participants; (i) their views on the two trial arms; and (j) their recommendations for future use.
Therapists will initially be interviewed individually early in the trial (around halfway through the study) and then interviewed again near the end of the trial. This will capture a range of experiences at different time points and allow trial progression to be investigated. The target for therapist interviews is n > 5, of whom two are supervisors.
Clinicians
"Clinicians" refers to any healthcare professionals (usually doctors) who were responsible for referring participants to the ORBIT trial. Whilst they were not explicitly involved in the ORBIT trial, the main purpose of interviewing them is to gain their views on potential implementation in routine care. The clinician interview schedules were drafted and revised by the same team and were guided by normalization process theory (NPT) [34,35]. As the purpose of the clinician interviews was to explore their views about the feasibility of integrating the intervention into everyday practice, including any potential barriers to or facilitators of this, the NPT framework seemed the most appropriate. The clinician interview schedule aims to elicit information on how clinicians got involved in the ORBIT trial and why, their experience of recruiting for the trial, including factors that affected recruitment, and how the NHS could incorporate the intervention into everyday practice. Clinicians will be purposively selected from the PIC sites involved in recruiting for ORBIT, and the target for clinician interviews is n > 5.
Quantitative data collection
Online data will be collected and recorded from participants throughout the trial. This includes the following measures: total therapist time; therapist time specific to each therapist; therapist time specific to each child and parent; total number of characters submitted by child and parent (as part of communication messages via the online system); total number of logins for child and parent; average time between each login (in days) for child and parent; average pages visited per login for child and parent; and the five most frequently visited pages per child and parent. This data will be amalgamated and entered into a centralised online database whereby the main researcher will then extract this data for analysis as part of the process evaluation.
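The per-participant usage metrics described above can be derived from raw login records by grouping and aggregating. The following is a minimal sketch; the column names and data are illustrative and do not reflect the actual BIP TIC export schema.

```python
import pandas as pd

# Hypothetical export: one row per login event (participant id, timestamp,
# pages viewed during that session). Values are invented for demonstration.
logins = pd.DataFrame({
    "participant": ["P01", "P01", "P01", "P02", "P02"],
    "login_time": pd.to_datetime([
        "2019-01-02", "2019-01-05", "2019-01-09", "2019-01-03", "2019-01-10",
    ]),
    "pages_visited": [12, 7, 9, 15, 4],
})

def usage_summary(group: pd.DataFrame) -> pd.Series:
    """Summarise one participant's login history."""
    times = group["login_time"].sort_values()
    gaps = times.diff().dropna().dt.days  # days between consecutive logins
    return pd.Series({
        "total_logins": len(group),
        "mean_gap_days": gaps.mean(),
        "mean_pages_per_login": group["pages_visited"].mean(),
    })

summary = logins.groupby("participant")[["login_time", "pages_visited"]].apply(usage_summary)
print(summary)
```

The same grouping approach extends to the other listed measures (e.g. summing message character counts per child or parent, or ranking pages by visit frequency).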
Trial data
As part of the quantitative measures for the process evaluation, we will also be extracting and analyzing change in YGTSS TTSS from baseline to primary end-point, which will be used to inform behaviour change. As mentioned, this is a key component of the MRC guidance on process evaluations. Demographic data, overall symptom improvement as measured on the improvement subscale of the Clinical Global Impressions Scale (CGI) [36], depressive symptoms at baseline as measured on the Mood and Feelings Questionnaire (MFQ; child-completed version) [37], and service use data as measured by the modified Child and Adolescent Service Use Schedule (CA-SUS) [38] will also be subject to analysis. These data will be used to measure context and the mechanisms of change. The target sample size for all quantitative data is n = 110.
Table 1 presents a summary of the explanatory data sources that will be used to inform each component of the process evaluation.
Data analysis
Qualitative data will be exported and analyzed in QSR International's NVivo 12 software [39], and quantitative data will be exported and analyzed in SPSS (version 25.0) [40]. Process evaluation data will be analyzed independently of the main outcome data of the ORBIT trial.
Qualitative data analysis
All interviews will be recorded either by the WebEx videoconferencing application or by Dictaphone and then transcribed verbatim. Transcripts will be checked for accuracy against the recordings with any corrections made as appropriate. Prior to the transcripts being imported into QSR NVivo 12, any reference to places, clinicians, therapists, and/or family members that may reveal participants’ identity will be redacted, and all participants’ names will be anonymized. The interviewer will take notes during all interviews.
As the process evaluation is a combination of exploration and description, thematic analysis will be used to identify, analyze and report patterns within the transcribed interviews. Thematic analysis is widely used within the field of psychology and is considered the most flexible qualitative analytical process [41]. More broadly, the Framework Method [42] of analysis will be employed, as it is most commonly used for the thematic analysis of semi-structured interviews [43]. Moreover, Ritchie and Spencer [42] outline four types of research questions that they believe framework analysis can helpfully address: 1) Contextual - identifying the form and nature of what exists (e.g. what is the nature of people’s experience?); 2) Diagnostic - examining the reasons for, or causes of, what exists (e.g. why are services or programmes not being used?); 3) Evaluative - appraising the effectiveness of what exists (e.g. what affects the successful delivery of programmes or services?); and 4) Strategic - identifying new theories, policies, plans or actions (e.g. how can systems be improved?). As the process evaluation covers all of these questions, we feel this is the appropriate methodology to use.
Ritchie and Spencer [42] suggest five key stages of framework analysis: familiarization, identifying a thematic framework, indexing, charting, and mapping and interpretation. During the familiarization stage, the main researcher will immerse himself in the data by listening to and/or watching back the interviews, reading transcriptions, and studying observational notes whilst listing key ideas and recurring themes. The data will then be analyzed to identify key issues, concepts, themes, and sub-themes, drawing on both a priori and emergent issues. Next, the transcripts will be coded and indexed into framework categories by systematically applying the thematic framework to each interview. The indexed data will be summarized for each category and organised in chart form. This process will involve working through each framework category, summarizing all data that have been indexed to that category, and then providing a summary for each category, for each participant, using headings and subheadings. Finally, key characteristics of the holistic data set will be mapped and interpreted. A subset of transcripts will be double-coded by an independent coder to identify emergent patterns and themes relating to participants', therapists', and clinicians' experiences of the ORBIT trial. Charted data will be annotated independently, with discussions taking place on these findings, allowing data to be refined and amended in an iterative process. Once confidence in the congruity and meaningfulness of the interpretation is established between researchers, we will review the remaining interviews to establish whether our interpretation remains acceptable.
The large amount of data collected for the process evaluation encouraged us to use computer assisted qualitative data analysis software (CAQDAS). One CAQDAS package, QSR NVivo 12, is fully integrated with framework analysis and this will be used to categorise data and document any themes and sub-themes. Online feedback given by participants at the end of therapy will be analyzed using content analysis and integrated into the aforementioned framework.
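Once the free-text feedback has been hand-coded into categories, the frequency-counting step of content analysis is mechanically simple. The sketch below illustrates it with invented responses and hypothetical category labels; the actual coding frame will emerge from the data.

```python
from collections import Counter

# Hypothetical coded feedback: (verbatim response, assigned category).
# Both the responses and the category labels are invented examples.
coded_feedback = [
    ("I learnt to notice my urges", "awareness"),
    ("The videos helped me relax", "relaxation"),
    ("Hard to find time each week", "time burden"),
    ("I can now hold back my tic", "tic control"),
    ("Noticing urges early was key", "awareness"),
]

# Tally how often each category was assigned across all responses.
category_counts = Counter(category for _, category in coded_feedback)
for category, count in category_counts.most_common():
    print(f"{category}: {count}")
```

These category frequencies can then be mapped onto the framework categories from the interview analysis when integrating the two sources.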
Quantitative data analysis
Quantitative data from the online platform will be subject to descriptive statistical analysis, presenting total numbers and percentages, and means with standard deviations or medians (ranges) where data are not normally distributed. This will provide information on intervention delivery, including the implementation of different components and fidelity. Chi-squared and independent-samples t tests will be calculated to explore any significant differences within the intervention group. For data that are not normally distributed, non-parametric alternatives will be used (i.e. Kruskal–Wallis H and Mann–Whitney U tests), using a significance level of P < 0.05.
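The choice between a parametric test and its non-parametric alternative can be sketched as below. This is an illustration only, not the trial's analysis code (which will run in SPSS); the data are simulated and the normality check shown (Shapiro-Wilk) is one common option among several.

```python
import numpy as np
from scipy import stats

# Simulated engagement metric for two subgroups of the intervention arm
# (e.g. total logins); values are invented for demonstration.
rng = np.random.default_rng(0)
group_a = rng.normal(50, 10, size=40)
group_b = rng.normal(55, 10, size=40)

# Shapiro-Wilk normality check for each group (one possible check).
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

if normal:
    # Parametric route: independent-samples t test.
    result = stats.ttest_ind(group_a, group_b)
else:
    # Non-parametric alternative: Mann-Whitney U test.
    result = stats.mannwhitneyu(group_a, group_b)

significant = result.pvalue < 0.05  # significance level P < 0.05
print(f"p = {result.pvalue:.4f}, significant: {significant}")
```

For comparisons across more than two subgroups, `stats.f_oneway` or the non-parametric `stats.kruskal` (Kruskal-Wallis H) would take the place of the two-sample tests.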
Mixed methods analysis
Qualitative and quantitative data will be analyzed separately and then mixed during analysis in a methodological approach known as triangulation [44]. Both qualitative and quantitative data will be given equal importance, as both sets of data are central to addressing the research questions posited by the process evaluation. In Additional File 5 a Good Reporting of A Mixed Methods Study (GRAMMS) [45] checklist has been provided.
Coding of qualitative data and preliminary qualitative analysis will be conducted concurrently with the analysis of descriptive statistics of participants' online data. Thus, the descriptive data will aid in the refinement and amendment of questions central to qualitative data collection. In other words, key themes may emerge from the quantitative data, which could then be further explored or clarified through the qualitative data, and vice versa. The main researcher will integrate and compare outcomes from the various data sets guided by a triangulation protocol. The aim of this is to create a matrix of converging data sets to assess where outcomes agree, where there is dissonance, and where themes or outcomes emerge in one dataset but not another. Once the matrix synthesising outcomes from the various datasets is finalized, it will be used to illuminate the mechanisms of impact and implementation fidelity and, more broadly, to explain the outcomes of the trial.
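The triangulation matrix described above can be pictured as a small table: one row per finding, one column per data source, with each cell recording whether that source supports, contradicts, or is silent on the finding. The sketch below is a toy illustration with hypothetical findings and source labels, not the trial's actual protocol.

```python
import pandas as pd

# Hypothetical convergence matrix: rows are findings, columns are data
# sources; cell values are "agree", "dissonant", or "silent".
matrix = pd.DataFrame(
    {
        "interviews":      ["agree", "agree",     "silent"],
        "online_feedback": ["agree", "dissonant", "agree"],
        "platform_usage":  ["agree", "agree",     "silent"],
    },
    index=[
        "therapist support valued",
        "chapter 4 too long",
        "reminders increase logins",
    ],
)

def convergence(row: pd.Series) -> str:
    """Classify a finding by how the non-silent sources align."""
    votes = [v for v in row if v != "silent"]
    if "dissonant" in votes:
        return "dissonance"
    if len(votes) >= 2:
        return "agreement"
    return "single-source"

matrix["assessment"] = matrix.apply(convergence, axis=1)
print(matrix)
```

Reading the assessment column row by row gives exactly the three situations named in the text: agreement across sources, dissonance between them, and findings emerging in one dataset but not another.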
Integration of findings
The process evaluation data will be analyzed prior to knowing the main ORBIT trial results with the two analyses being independent of each other. The ORBIT trial team will be unaware of the findings of the process evaluation until the primary outcomes from the main trial have been analyzed. Once both trial and process evaluation analyses are complete, combined qualitative and quantitative data may aid in the development of hypotheses about the potential successful implementation in one context over another and how and why some components were delivered successfully and others were not. Furthermore, the analysis of different components may aid in the identification of causal mechanisms and how and why individual intervention components were more effective than others were. Following quantitative analysis of ORBIT trial data, qualitative data from the process evaluation can potentially be used to help explain outcomes of the trial. Additional analyses can then be conducted to test hypotheses emanating from integration of process evaluation data with trial outcomes, drawing together the findings to understand why the intervention worked (or not), context, and implications for further dissemination to improve provision of care for CYP with tics.