VIEWER: a digital tool for visualising data in mental health records

Background: Electronic Audit and Feedback (eA&F) interventions aim to improve patient care by presenting clinicians with summarised information on clinical performance over time. We created an eA&F intervention to assist clinicians in the management of psychosis in the community. Methods: The eA&F intervention was created collaboratively by informaticians, computer scientists and clinicians, and summarises population-level data on service usage, medications, psychological interventions and physical health on an interactive platform. We evaluated its usability using the System Usability Scale (SUS), analysed outcome data on physical health monitoring (Body Mass Index and blood pressure) using difference-in-difference analysis, and interviewed clinicians, interpreting the interview data using thematic analysis. Results: The system showed good usability (74 on the SUS), although it did not lead to improved recording of physical health or improved patient outcomes during the course of the study. Linear regression with BMI and blood pressure as the outcomes, with missing data excluded from the analysis, showed no significant differences between time, groups or exposure to the intervention for BP (p = 0.74, CI -10.253 to 7.282) or BMI (p = 0.762, CI -3.59 to 2.63). Thematic analysis of interview data showed high numbers of statements around systemic barriers, alongside significant unrealised potential gains.

schizophrenia [2]. It has been known since the 1990s that patients with schizophrenia suffer premature mortality [3], much of which can be accounted for by physical ill-health, mainly cardiovascular disease, cancer and respiratory illness [4]. There is evidence that death from these illnesses is increasing, likely due to factors including lifestyle and social determinants of health as well as adverse effects of prescribed medications [5]. An important first step to reducing mortality is to identify those at risk of early mortality through monitoring of factors such as weight and Body Mass Index (BMI), cholesterol and blood sugar levels, and blood pressure (BP) [6]. Current guidance is that this data should be collected annually [7].
Audit and Feedback (A&F) interventions aim to provide 'a summary of clinical performance of healthcare over a specified time to provide healthcare professionals with data on performance' [8]. In practice these usually consist of graphical, written, or numerical information that describe a clinician's or team's processes and outcomes against desirable practice. A&F can be used alone but is more often part of a wider quality improvement intervention [9]. A Cochrane review found A&F alone to be associated with modest improvements (a 4.3% increase in desired practice), although there was significant heterogeneity, and the authors suggest that potential improvements where A&F is used optimally are much larger [10]. Health data is increasingly available electronically, meaning that auditing the data and rendering and presenting it for feedback can also be performed electronically, as electronic A&F (eA&F).
There are many theories about how A&F interventions work, with 18 theories being used in 20 of the studies identified by Cochrane [10]. A small minority of studies explicitly reference theory as part of their development process [11], although most studies implicitly target knowledge, motivation and goals to bring about improvement [12]. Three theories that have been more prominent in the A&F literature recently are control theory, goal-setting theory and feedback intervention theory [13].
Control Theory [14] proposes that behaviour is regulated by a negative feedback cycle in which clinicians compare a perceived present state to a reference value. The clinician then acts to reduce the gap between their perceived reality and the reference value. Goal-setting theory [15] describes achieving improvement in terms of motivation, and sees feedback as a means of directing effort and maintaining motivation.
Both theories see feedback as a way of directing effort and establishing appropriate goals. Such assumptions are implicitly applied in A&F [16] and are supported by Davis et al. [17], who found greater improvement in clinicians performing poorly on measures, suggesting that feedback helps clinicians self-assess more realistically. However, these theories do not explore the complex interaction between feedback, individuals and systems, and so cannot account for the variability in the effectiveness of A&F.
Feedback intervention theory (FIT) [18] is a well-established theoretical framework for A&F interventions which states that feedback can work by providing cues that prompt behaviour change and allow the development of learning strategies. It can also lower cognitive effort and lead to rapid improvement and improved self-efficacy. These factors feed into a motivational and learning response that leads to improvement. However, feedback that is received as criticism, increases anxiety, or does not lead to change can cause strong emotional responses (meta-task processes). Feedback can also lead to increased cognitive demand. These factors can lead to rejection of the feedback or standard and prevent improvement (figure 1).

Figure 1: Feedback Intervention Theory (FIT; Dowding, Merrill & Russell, 2018)
There has been much research into aspects of A&F which make it more effective. Hysong et al. [19] developed a theory of actionable feedback which stated that feedback should be timely (given regularly); individualised (relating to an area which can be influenced by the clinician); non-punitive; and customisable (giving clinicians control of the feedback that they receive). Mitchell et al. [20] found that feedback which identified 'at-risk' groups was more effective. Feedback is more effective where clinicians are provided with information about how to improve their outcomes [21], where clinicians are clear about the relevance and credibility of the data [22], or where they are involved in setting the target [23]. Explicitly providing a comparison for reference has variable results; however, Gude et al. [24] found that like-for-like comparisons were more effective than comparisons with a population mean, and that encouraging clinicians to set clear, explicit goals improved effectiveness.
This study outlines the creation of an audit and feedback intervention developed for use by community mental health teams (CMHTs) to help them better manage their population of patients with a psychosis diagnosis. The aim of the study is to evaluate the intervention and its impact on patient care and outcomes, and to use existing theories around A&F to describe the effect of A&F on clinicians' practice.

Setting
We used data from South London and Maudsley (SLaM) NHS Foundation Trust, a large mental health trust providing secondary and specialist inpatient and community mental health services to South East London. The trust benefits from using the Clinical Record Interactive Search (CRIS) system [25], which extracts and organises information from the trust EHR, both from structured data and using natural language processing (NLP) applications. An application to use CRIS to build a prototype was submitted. Following approval, we used the (de-identified) CRIS data to build a static prototype, which was tested for face-validity and consistency. Following this, approval was sought from the trust Information Governance committee to deploy the method (and code) on fully identifiable source EHR data for front-line clinical purposes. CogStack is an 'open-source information retrieval and extraction architecture' [26] which allows data aggregation, visualisation and management. Figure 2 shows the flow of information across the systems. The trust also covers a large urban population with a high prevalence of psychosis, increasing the sample size. Called VIEWER (Visualisation & Interaction With Electronic Records), the intervention consists of a dashboard that leverages both CRIS and CogStack to present data to clinicians about their populations.
The dashboard was developed by clinicians and academic informaticians between December 2019 and December 2020. VIEWER consists of a data model (see Appendix 1), which was converted by the CRIS team into SQL queries of the CRIS database. This query is automatically run on a weekly basis, with the time signature of the query run date noted. The data is then saved as a CSV file and ingested into CogStack, where Kibana is used to aggregate and analyse the data, which is then presented back to clinicians via an interactive dashboard. The dashboard graphically depicts a variety of metrics around service use, diagnosis, medications and physical health (Figure showing this). Clinicians can also interact with the dashboard, which uses Elasticsearch, a "distributed, RESTful search and analytics engine" [27], to allow them to filter the total patient population by clicking on the visualisations. Importantly, clinicians can filter the information as they find useful, meaning that they can choose to look at aggregated data about their own caseload, or that of their team or another team, and can subset the data according to populations of interest and thus define 'at-risk' groups who may benefit from intervention.
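As an illustration of this filtering mechanism, a click on a dashboard visualisation corresponds to adding a filter clause to an Elasticsearch bool query. The sketch below is a minimal, hypothetical example: the field names (`team.keyword`, `primary_diagnosis_code`) and the team name are assumptions for illustration, not taken from the VIEWER implementation.

```python
# Hypothetical sketch of the kind of Elasticsearch bool query that a
# dashboard filter click generates; field and index names are assumed.

def caseload_filter(team, diagnosis_prefix="F2"):
    """Build a bool query restricting the population to one team's
    caseload of patients with an ICD-10 F20-F29 diagnosis."""
    return {
        "query": {
            "bool": {
                "filter": [
                    # restrict to the selected team's caseload
                    {"term": {"team.keyword": team}},
                    # restrict to psychosis diagnoses (codes starting F2)
                    {"prefix": {"primary_diagnosis_code": diagnosis_prefix}},
                ]
            }
        }
    }

query = caseload_filter("Intervention CMHT")
```

Each further click on a visualisation would append another clause to the `filter` list, progressively narrowing the 'at-risk' subset that the aggregations describe.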
Our intention is that the VIEWER system be used in a variety of use-cases, including:
- Early detection of non-response to psychosis treatment, allowing earlier detection of non-compliance and of treatment-resistant schizophrenia
- Identification of patients with psychosis who could benefit from an offer of psychological therapy
- Identifying unmet physical health needs in patients with psychosis
- Identifying variation and inequalities in treatment and outcomes of patients with psychosis.
In this pilot study, we looked specifically at the use-case of physical health data.

Clinical Data
All patients sampled were under the care of SLaM and had a diagnosis of a psychotic disorder (a disorder from F20 to F29 in the International Classification of Diseases (ICD) volume 10 [28]) recorded on the trust EHR. This represents a group of patients who have psychotic symptoms that are not secondary to other causes such as drug use. This group was chosen as these patients have well-established needs to address the mortality gap. The sample represents the caseload of a chosen CMHT and a control CMHT. This is a convenience sample, based on agreement from the trust leadership and agreement from the CMHT to be involved.
The CMHT consists of 15 clinical staff and cares for approximately 200 patients with psychosis. A control group was chosen by finding a community team matched with the intervention team as far as possible by:
- size of team
- number of patients with psychosis
- baseline completion of physical health data
- previous trends in completion of physical health data.
All staff members who had been given access to VIEWER were invited to complete a survey on the usability of VIEWER. This included all members of the intervention team, as well as staff members outside the team who had applied for access for other reasons.
To recruit staff for qualitative interviews, all staff at the intervention team were invited to a meeting where the study was explained. Those who attended were invited to individual interviews, regardless of whether they had used VIEWER.

Design
The study design is mixed methods. The quantitative component is designed to measure change in patient-level outcomes. Due to the non-random recruitment of teams, we used an observational study design in which the change in quality indicators between the start and end of the study period was compared between an intervention and a control group using 'difference in difference' analysis.
To assess usability, we administered an electronic version of the System Usability Scale (SUS) (see appendix 3), with the opportunity for free-text feedback on facilitators and barriers.
To assess the effect that access to VIEWER had on clinician behaviour, we performed semi-structured interviews (see Table 1) with clinicians from the intervention group, which were coded and analysed using thematic analysis [29]. A largely deductive approach using the pre-existing FIT was used, as theory around the use of feedback is comparatively well-established and the aim of the study was to make use of this theory to understand the team's response to the feedback rather than to develop new theory. Emerging themes were, however, explored regardless of whether they fitted specifically within FIT.

Prior to starting the intervention, the intervention team was shown the functionality of VIEWER, including their own data, and discussed the clinical metrics that were represented and could form the basis for improvements in clinical practice. They then chose the metrics that they would like to improve. Baseline data had been taken retrospectively from the team's data for the previous 3 months as part of VIEWER's development. The consultant psychiatrist from the control team was also informed about the dashboard and the study. As the metrics selected are part of the key performance indicators of quality care at the trust, and should therefore be subject to continuous review and improvement, it was felt fair to measure this metric without giving the control team any further input.
The study period took place between 7th January 2021 and the 2nd July 2021, during which clinical staff at the intervention team were given access to VIEWER (see appendix 2) and an explanatory video. They were also given the opportunity to meet with the development team to provide technical support and support with devising a measurement plan.
DC met with the team regularly throughout the study period. All clinicians were invited to these meetings and could ask any questions about VIEWER.
One month into the study, a System Usability Scale (SUS) questionnaire [30] was administered electronically to all users.
Data for the study was collected at the start and end of the study period (7th January and 2nd July 2021 respectively), using the data pipeline underlying VIEWER.
Following the end of the study, all clinicians in the intervention team were invited to a final meeting where the qualitative part of the study was explained, and were asked if they would agree to be interviewed. An email follow-up was then sent to each participant, and interviews were arranged with those who responded positively (n=4). DC conducted these interviews via videoconferencing and transcribed the results. Two researchers (DC and TW) independently coded 2 of the interviews and then met to discuss. From the discussion, all interviews were recoded and putative themes developed. At a further meeting, the codes and final themes were agreed.

Table 1: Interview schedule

Topic | Guiding Question | Possible Follow-ups
Global | Did you use any aspects of the intervention? | Why not? How did you find using it?
Integration | How did the intervention fit in with your normal workflow? | How welcome were any adaptations you had to make?
Behaviour Change | Do you think your behaviour has changed following the intervention? Do you intend to maintain those changes? | Can you tell me about those changes? Do you think those changes have affected patient care? How will you maintain them?
FIT - Cues | Did the feedback change your focus? | What were the positives and negatives of this?
FIT - Task | How did the intervention affect the demands of your clinical work? | How did you respond to any changes in demand?
FIT - Individual | How ready do you think the team was to use the feedback intervention? | What were the barriers and facilitators within the team to using the intervention?
FIT - Meta-task processes | How did receiving the feedback make you feel? | Did this affect your work life?
FIT - Focal processes | How did the feedback affect your motivation to address physical health? | What were the drivers of this change?
FIT - Task details | Did the feedback stimulate learning? | How did you go about that learning?

Measurement
Indicators were selected with the team to be clinically meaningful and measurable using the existing data structure. We chose, as outcome measures, the number of patients under the clinical care of the team who had their BMI and BP recorded in the last year. This is in line with NICE guidelines and was considered achievable in the timeframe of the study. Table 2 describes how the data is derived from the EHR.

User Involvement
The intervention was extensively co-produced with clinicians and service users. Following the development of an initial prototype dashboard in August 2020, DC presented the dashboard to a meeting of consultants in psychosis, to the trust board and the 'Quality Centre' (a meeting of senior clinicians involved in quality improvement), and to a clinical team in their meeting to refine the dashboard. DC also presented the dashboard to the Psychosis Clinical Academic Group Service Users and Carers group, who gave further feedback and approved the design of the intervention. During monthly meetings with the intervention team and another clinical team that was using the dashboard for another purpose, we (the author and the CogStack team) acted on the feedback given to iteratively refine the dashboard.

Analysis
Data was analysed using difference-in-difference analysis [31], as it allows for the non-random allocation of the intervention and uses differences in change between groups to infer the intervention effect. Baseline data was compared to ensure that both groups had parallel trends prior to the experimental period.
Each data instance was assigned a binary variable indicating whether it represented a patient in the treatment group or not, and whether it represented a post-intervention or pre-intervention datapoint. Data completion was converted to a binary variable (either completed or not); the completed BMI and BP values were also used. Logistic regression was then used to calculate the coefficients of treatment group and time against the outcome of data completion. Linear regression was used to calculate these coefficients against the actual outcomes of BMI and BP. Regression was completed using the logit and ols models from the statsmodels Python library (v0.12.2) [32].
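With one binary group indicator and one binary time indicator, the OLS interaction coefficient described above reduces to a simple difference of group-mean changes, which can be sketched in plain Python. The readings below are illustrative values, not study data:

```python
# Minimal sketch of the 2x2 difference-in-difference estimate. With binary
# group and time indicators, the OLS interaction coefficient equals the
# change in the intervention group minus the change in the control group.

def mean(xs):
    return sum(xs) / len(xs)

def did_estimate(pre_ctrl, post_ctrl, pre_int, post_int):
    """(post - pre change in intervention) - (post - pre change in control)."""
    return (mean(post_int) - mean(pre_int)) - (mean(post_ctrl) - mean(pre_ctrl))

# Illustrative BMI readings (not study data)
effect = did_estimate(
    pre_ctrl=[27.0, 29.0, 31.0],
    post_ctrl=[27.5, 29.5, 31.5],  # control group rises by 0.5 on average
    pre_int=[28.0, 30.0, 32.0],
    post_int=[28.2, 30.2, 32.2],   # intervention group rises by 0.2 on average
)
print(round(effect, 1))  # -> -0.3
```

Fitting the full regression, as the study did, additionally yields standard errors, p-values and confidence intervals for this interaction term.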
A basic power calculation, assuming a 16% improvement in data completion in the intervention group and a 5% increase in the control group, suggests that the sample size of approximately 200 patients per group is sufficient to detect a change at 5% significance.
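The normal-approximation calculation behind such a power estimate for two proportions can be sketched as follows. The 50% baseline completion rate assumed here is illustrative only (the control group then reaches 55% and the intervention group 66%); the resulting power depends heavily on the assumed baseline.

```python
import math

def power_two_proportions(p1, p2, n, alpha_z=1.959964):
    """Approximate power of a two-sided two-sample test of proportions
    (normal approximation, equal group sizes of n; alpha_z is the
    critical z value for a two-sided alpha of 0.05)."""
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z = abs(p1 - p2) / se
    # Power = Phi(z - z_alpha/2), with Phi computed via the error function
    return 0.5 * (1 + math.erf((z - alpha_z) / math.sqrt(2)))

# Assumed 50% baseline: control reaches 55% completion, intervention 66%
power = power_two_proportions(0.66, 0.55, 200)
```

Larger group sizes, or a gap wider than the 11 percentage points assumed here, increase the power accordingly.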
SUS response analysis was performed using standard methodology [30], with each question scored from 0 to 4 and higher scores indicating greater usability.
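For reference, the standard SUS scoring scheme can be sketched as a short function, where the input is the ten raw 1-5 Likert responses:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert
    responses. Odd-numbered items are positively worded (score - 1),
    even-numbered items negatively worded (5 - score); the 0-40 total
    is scaled to the 0-100 range."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# All-neutral responses (3s) give the scale midpoint
print(sus_score([3] * 10))  # -> 50.0
```

Individual scores computed this way were then averaged across respondents to give the overall result.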
The interviews were performed according to the schedule described in table 1. This was then transcribed and coded independently by 2 researchers (DC and TW). Codes were then grouped together in themes. Both coders then met to discuss and revise their coding and themes. DC then collated the feedback from this meeting to create a final set of codes, which was discussed and agreed in a further meeting. The final themes were then based on these codes.

Ethics
Ethical issues relating to the use of patient data for the intervention and analysis as part of the evaluation were considered, as were issues relating to the handling of information gathered from clinicians.
Robust procedures have been established for the use of CRIS and CogStack patient data, including procedures for storage, transfer and retrieval. Patient-identifiable data was used as part of the intervention and it was made clear that clinicians should only access patient data for patients under their direct care. These procedures were approved by the CogStack oversight committee. For our analysis of an anonymised patient dataset and handling of clinician feedback data, we compared our study design against the KCL ethics committee tool to decide whether research ethics approval was required (appendix 4). As the study evaluates the rollout of a service independent of the study, and uses data collected routinely from that service, we concluded that formal ethical approval was not required.
The VIEWER dashboard was presented to the relevant bodies including information governance and trust leadership and permission was given to pilot the intervention.
All aspects of the context, intervention, analysis and interpretation are reported according to the TIDieR [33] and SQUIRE [34] guidelines.

Usability
17 users completed the SUS, 16 of whom indicated that they had used VIEWER. The respondent who had not used it was excluded from the analysis. The overall score was 74/100, indicating 'good' usability [35]. Four respondents scored the system over 80, indicating potential promoters of the system, and 2 respondents scored below 55, indicating potential detractors [36]. Both potential detractors provided free-text responses indicating that they found the system useful and had used it to improve patient care, but wanted more training.
Asked about barriers, 2 respondents said that more training was needed, 2 that filtering should be more intuitive, 2 described difficulties logging in, 1 felt that limiting the scope to patients with psychosis was a barrier, and 1 was unsure how the data could be used to improve practice. 6 respondents did not identify barriers to usage. 12 respondents said that they were using VIEWER to improve patient care, with 1 saying they had not used it sufficiently yet and 1 saying they did not think it was appropriate for them. 2 left this question blank.

Patient Outcomes

Figure 4 shows the data completion rates over time and the mean BMI and BP for both groups at baseline. Both graphs confirm the parallel-trends assumption, meaning that a difference-in-difference design is likely valid; data completion is lower in the intervention group. Table 4 shows the results of logistic regression with data completion of BP and BMI respectively as the outcome variable. The is_after variable is a binary variable with 1 representing a post-intervention reading, and is_intervention is a binary variable with 1 representing a reading from the intervention group. is_after:is_intervention represents the interaction between time and intervention and therefore indicates change correlated with the intervention. The data shows a potentially significant negative correlation corresponding to the intervention group in both BP (p<0.

Thematic Analysis
Of the 15 clinical members of staff, 4 agreed to take part in interviews: 2 care co-ordinators and 2 consultants. Following coding, themes emerged that could be categorised as positive or negative, as intrinsic to VIEWER or systemic (barriers in the wider system), and as realised or unrealised (see figure 5 and appendix 5 for more detail).

Trends
Overall trends are shown in figure 4. There were 94 statements indicating barriers to change and 50 positive statements around change. Respondents A and B expressed notably more positive statements (30 and 12 respectively), with respondents C and D expressing more negative statements (28 and 44). The majority of the positive statements (28/50) were unrealised positives, and the majority of barriers (62/85) were systemic barriers rather than barriers relating to VIEWER. 9 statements were coded as general negative statements and 14 related to statements that the intervention was not used or useful, without greater specificity.

Working Patterns & Task Demands
All respondents said that working patterns were busy, which allowed little time and energy for more data-driven conversations. 1 respondent expressed that VIEWER reduced the demands of work by increasing efficiency and enabling a more proactive approach; however, 2 respondents felt that using VIEWER would increase the demands of their work, both through the time taken to use the platform and because VIEWER would be likely to identify further work to be done. "So at the moment you're selling something that's going to increase … workload and that is quite a hard sell. You have lots of people… they're just trying to get through the day."

Familiarity

Two respondents articulated that they have little time for experimenting with new ways of accessing information, and so use their 'go-to' sources of information: "someone said to me 'the easiest way is the way I know', even if you map it out and it's 10 times the time and actually is more difficult". One respondent stated that they thought that this factor would diminish over time as clinicians become more familiar with the system.
Training Needs

Some of the caution was expressed as training needs for clinicians. 3 clinicians reported that they felt that the response would be improved by training, either in how to use the system itself or in how to use data as part of clinical practice. There were diverse views expressed as to the role of training, with one respondent asking for a practical opportunity to use the system, one respondent describing the need to build 'good habits' around data and to use the opportunity to re-imagine work using data, and another describing having 'a safe place to fail' to allow people to experiment for themselves with data.

Relevance to clinical practice
The most notable difference between the two clinicians who gave more positive responses and those who gave more negative responses was that the more positive clinicians were able to give examples of relevant questions that had been, or could be, answered using the VIEWER platform. They also described how data improves practice by drawing attention to problems and providing baseline data against which to test change. This contrasted with those with a more negative view of the platform, who said that they did not see the relevance of the data represented to clinical practice.

Facilitating Intervention
One notable barrier identified was that there were rarely specific services or interventions to address physical health, meaning clinicians saw less value in recording the data. One respondent described using VIEWER to find eligible patients for a research programme aimed at treating obesity in women with psychosis; however, no other examples were given. 2 respondents described a future ideal in which interventions were linked to aspects of patient data, with 1 hoping that VIEWER would empower clinicians to take part in advocating for and developing these interventions: "the way that we are inspiring the use of [VIEWER]

Roles and Responsibilities
All respondents said that they felt that data could be used to improve health and that the data represented in VIEWER was helpful to the health system. However, there were varying views as to how this fitted into current roles and responsibilities. Only 1 respondent felt that the role of reviewing and reacting to the data could be regularly performed by current members of the clinical team, with 2 respondents feeling that the information was more useful for managers or senior clinicians and 1 respondent who felt that a nurse practitioner role would be needed to take primary responsibility for reviewing the data.

Changing Focus
Only 1 respondent felt that VIEWER had already changed focus, describing conversations within the team, and between the team and partners in primary care, that had been data-driven, allowing for quicker and more accurate descriptions of the status quo. 1 respondent felt that this was a likely positive outcome of using VIEWER, and 2 expressed that they did not think that it had changed focus and did not think that it was likely to.

Motivation
All respondents were asked about motivation and 2 answered positively, suggesting that motivation to address physical health issues was improving, with both saying that a degree of intrinsic motivation was required: "Putting the physical health stuff first isn't going to motivate people. What's going to motivate people is the kind of answering the question, 'what do I need to do today for my patient?'" One respondent expressed the hope that this could be part of a virtuous circle, with data being presented back to clinicians leading to better data input. One respondent said motivation would be aided by allowing clinicians to decide how they want to use VIEWER, rather than suggesting specific metrics. Two respondents said that they didn't feel that motivation had increased, although one of these described curiosity about the data.

Technical Barriers
All respondents except one (who described regular use of VIEWER) were asked what limited their use of VIEWER, and only 1 brought up technical barriers, with the main difficulty described being forgetting how to log in. Of note, only 2 respondents had used VIEWER, with the other 2 having seen it demonstrated; this may limit insight into technical barriers.

Meta-Task Processes
When asked directly, all respondents said that they felt that the data was welcome and that greater use of this sort of work to support clinical care would be, or was, well received by clinical staff. However, one respondent did describe feeling that data was often used as a means of assessing and managing staff, rather than helping them: "[management]… don't tell me I need to improve the recordings of BMI for my clients until you tell me how I'm going to do that… Until you have that discussion with me about, what stopped me in the past". 2 respondents also described having large numbers of tasks relating to data entry, with 1 describing this as leading to a more negative perception of the data.

Findings
This evaluation of an eA&F intervention in a community mental health setting found that it did not make a significant difference to outcomes. Whilst it did not lead to an improvement in BMI or BP (something that might be expected given the comparatively short study period), it also failed to result in improvement in data completion, which we felt would be more achievable. This was despite 6 members of the team making some use of the platform.
Feedback on the usability of the system and its technical aspects was good, with a usability score of 74/100, few reported barriers to technical use of the platform, and clinicians identifying that the intervention was, or could be, used for patient care.
However, more in-depth feedback from interviews revealed significant barriers. The most consistently described barrier was busy workloads into which data use is not well integrated. Concern was expressed that reviewing the data would add to the workload and would also generate additional tasks. Respondents also expressed concerns about the relevance of reviewing the data and whether it helped their current workflow, particularly as it was not always clear what interventions the data would allow patients to access. They also expressed the need for training, both to master technical aspects of the intervention and to develop the habit of using data in this way, as they felt that a lack of familiarity with the system was currently holding them back. It is also important to define whose job it is to look at the data and to empower clinical staff to respond to it.
Respondents did, however, largely feel that there was a greater awareness of data and how it could be used to improve clinical work, enhance motivation and build the case for enhancing the physical healthcare of patients with mental health problems. There were examples of this happening, although it is clear that these have not had sufficient time to produce measurable change.
Although we were unable to demonstrate change in patient care, the study overall gives some cause for cautious optimism, with a high SUS score and the majority of positive statements suggesting unrealised potential, implying that clinicians saw value in the intervention even if they had not yet been able to make use of it. The majority of negative statements described barriers within the wider system rather than barriers intrinsic to VIEWER.
More generally, this study has also demonstrated the utility of using FIT to identify barriers and enablers when exploring implementation of eA&F interventions, with the approach having generated a wide range of factors.

Strengths and Limitations
This study explored the effectiveness of the VIEWER eA&F intervention using mixed methods, allowing quantitative evaluation of the overall outcome and a more thorough understanding of underlying factors behind the outcome.
The study was limited by sample size: the intervention arm comprised a single team, meaning that we were at the lower limit of the numbers required to power the study to detect a difference. The non-random assignment of teams was a further limitation, although this was necessary to obtain permissions for the study, and the difference-in-difference design is well established in such circumstances. Equally, the short duration of the study left comparatively little time for clinicians to change behaviour, which may have led to under-estimation of the impact of VIEWER on clinical practice. A potential confounder was that the intervention team received input from the author and the Cogstack team to help them use VIEWER and interpret the data. Such data-driven meetings could have altered behaviour irrespective of the introduction of VIEWER, leading to improvements in the data; the lack of change over time suggests that this was not the case. We were also limited to secondary care data, whilst some physical health data is likely to be held in primary care.
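The difference-in-difference design referred to above can be sketched as an ordinary least squares regression with a group-by-time interaction term, where the interaction coefficient is the difference-in-difference estimate. This is a minimal illustrative sketch with synthetic numbers and variable names of our own choosing, not the study's analysis code or data.

```python
import numpy as np

# Synthetic example of a difference-in-difference estimate via OLS,
# mirroring the general form outcome ~ group + time + group*time.
# None of these values come from the study.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # 1 = intervention team
post  = np.array([0, 0, 1, 1, 0, 0, 1, 1])   # 1 = after rollout
y     = np.array([25.0, 25.2, 25.1, 25.3,    # e.g. BMI, control pre/post
                  26.0, 26.2, 26.4, 26.6])   # intervention pre/post

# Design matrix: intercept, group, time, and the interaction term.
X = np.column_stack([np.ones_like(y), group, post, group * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The interaction coefficient: change in the intervention group
# over and above the trend seen in the control group.
did_estimate = beta[3]
print(round(did_estimate, 3))  # → 0.3
```

In the study itself, the reported confidence intervals for both BP and BMI spanned zero, i.e. the equivalent interaction terms were not significantly different from no effect.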
For the interviews and thematic analysis, we were unable to recruit as many clinicians as we would have liked, limiting the range of perspectives we could account for and potentially introducing recruitment bias, with only those more invested in the intervention taking part. The interviewer was also known to the team and had presented and taken part in discussions about the intervention, which may have led respondents to moderate their responses.
The results of the study are of limited generalisability, as they are intended as an evaluation of a specific intervention.

Study in Context
There have been many studies evaluating eA&F interventions, with 49 of sufficient quality for inclusion in a Cochrane review [10]. However, this is the first, to our knowledge, to take place in a community mental health setting. To date, eA&F has shown modest outcomes when introduced as a stand-alone intervention, with greater improvements achieved when additional resources are offered. Thus, our outcome data is disappointing but consistent with previous findings. We have added to the knowledge base by evaluating the intervention using FIT to explain our results. This has demonstrated the feasibility of this approach and will allow further exploration and refinement of our model.

Conclusions and further work
In conclusion, despite good usability scores, we were unable to demonstrate the effectiveness of VIEWER in improving monitoring of physical health in a community mental health setting. However, feedback from clinicians was positive and identified unrealised potential, as well as barriers that need to be overcome.
The study highlights the importance of non-technical aspects of the intervention: ensuring a sense of clinical ownership of the data, adapting workflows and systems to take account of the data, and ensuring people have the skills and confidence to use data in daily practice.
Areas for further work which are likely to improve the impact of the intervention include:
1. Further work with front-line clinicians to understand their workflow and how the information represented in the data can fit with it.
2. Building a model of clinical ownership of data, with dedicated clinician time for understanding the data and using it to support quality improvement interventions.
3. Identifying clinicians' training needs in more depth, and considering developing training both in the technical aspects of using VIEWER and in interpreting and responding to data.
4. Identifying, at a service level, how physical health data can be linked to interventions, giving clinicians a greater sense of the relevance of this data.
Following our initial evaluation, the trust has established a 12-month pilot of a Physical Health Check Liaison Team with VIEWER embedded into standard workflows, which will allow us to evaluate how A&F can be used to support teams whose workflows are built around data.
More generally, further studies looking into how FIT and other theories on A&F interventions can be used to inform intervention design would add significantly to the evidence-base and allow for theory to better-inform eA&F interventions.

Negative/barriers

Things external to intervention
- Working patterns: busy, but not data-driven.
- Integration of data entry into workflow (negative): expresses a lack of motivation to use the intervention.
- Enabling change of focus (negative): the intervention has not led to a change in focus.

Meta-Task Processes
- Feeling performance-managed: expressing the view that the data is applied punitively rather than to help.
- Negative experiences of clinical systems: describes negative emotion about data processes.

General Negative
- Utility (negative): expresses that they or someone they know does not derive utility from using it, without further explanation.
- Usage (negative): expresses that they or someone they know does not use it, without further explanation.
- Unsure of benefits: expresses the probability that benefits are minimal.
- Learning response (negative): respondent states learning has not taken place as a response to the data.

Supporting Roles and Responsibilities
- Information utility (negative): stating that the information does not answer relevant questions.
- Alternative information: describes alternative information that may be more useful.
- Relevance to clinical practice (negative).
- Need for ownership: describes the need for clinicians to be responsible for reviewing the data.
- Technical barriers: describes aspects of using the system that are a barrier to starting.
- Intervention as work-generating: describes that using the intervention generates extra tasks to perform.

Roles and responsibilities
- Non-clinical relevance: expresses the view that it is more useful for resolving non-clinical, service-level questions.
- Clinical ownership (new roles): suggests new clinicians are needed whose job is to look at and respond to data.
- Clinical ownership (aspirational): suggests clinicians need to be encouraged to look at and respond to data.
- Pre-existing relationships: describes that the relationships needed to respond to the data already exist.

Curiosity about data
- Curious positivity: expressing unrealised potential of the intervention to help with a problem.
- Aligns with interests: expresses the view that looking at the information facilitates having an interest.

Technical Abilities
- Positive technical abilities: expresses a view that they or colleagues have the skills to use the intervention.

Familiarity
- Awareness (positive): describes self or team knowing what the system is.
- Familiarity as enabler: expresses the view that familiarity increases confidence with using it.
- Evolving usage: expresses that people will decide how to use it over time.

Responding to data
- Information utility (aspirational): describes building interventions that address problems described in the data.
- Empowering clinicians: describes that clinicians feel more empowered as a result of having access to data.

Motivation
- Individual preferences (if): expresses that individuals will decide if they use the intervention.
- Blank-canvas: describes users defining their own use for the system.

Intervention enabling clinical work

Changing focus
- Physical health use: describes using the intervention to draw attention to physical health.

Facilitating conversations
- Between-team conversations: describes facilitating conversations with external teams/people.
- Team-level use: describes facilitating conversations within the team.
- Data-driven team conversations: describes team meetings where the system is used to look at data.
- Improving relationships (external): describes using data to develop a shared understanding with other teams.

Facilitating intervention
- Proactive care: describes the platform facilitating care earlier in the illness cycle.
- Accessing interventions: describes the system outputting a population to access an intervention.
- Defining populations: describes using it to create lists of patients from a population of interest.
- Improved awareness of data importance: expresses increased awareness of the importance of data as a response to the intervention.
- Defining areas of focus: describes using the system to look for areas to attend to.
- Motivation response (positive): monitoring improvements in the data motivates change.
- Behaviour change (positive): describes that the information has led to a change in behaviour.
- Gathering baseline data: describes using the data to develop a baseline of current practice.
- Improving data (aspirational): describes a hope that viewing the data will lead to it improving.
- Improving quality of data (aspirational): describes an aspiration to improve the quality of data.
- Using data to improve data input: expresses that feeding data back to clinicians will encourage them to input data more effectively.
- Learning response (aspirational): respondent states learning will take place as a response to the data.