Reach, Adoption and Sustainability of a Mobile Clinical Decision Support Tool Paired with a National Quality Improvement Project Targeting Emergency Department Febrile Infant Management

Background: Electronic clinical decision support (ECDS) tools are often developed within quality improvement (QI) projects to increase adherence with the latest clinical practice guidelines (CPGs). However, the scalability and sustainability of ECDS beyond the time and location of their associated project are very limited. Deploying ECDS via a mobile app (mECDS) has shown potential as a viable method of overcoming these limitations. However, it is unclear what pattern the spread of uptake and use of such a tool might follow. Methods: In 2016, our team released a freely available mECDS tool as part of a national multi-site project entitled Reducing Excessive Variability in the Infant Sepsis Evaluation (REVISE). For this study, we evaluated trends in three weekly mECDS usage measures, defined as 1) REVISE metric-related screen views (MetricHits), 2) unique designated market areas (DMAs) where the mECDS tool was used (Unique DMAs), and 3) density of use, or MetricHits/Unique DMAs (HitsPerDMA). Linear regressions were performed to examine trends in the app usage measures across three time periods (during REVISE, 1-year post, and 2-years post). Separate regression analyses were performed among DMAs that contained a REVISE site and those that did not. The number of REVISE sites in the DMA, the number of children's hospitals in the DMA, DMA population, and season were also evaluated as confounding factors. Results: Strong growth in the number of unique DMAs and in MetricHits occurred during the year after the REVISE project. Overall usage remained relatively stable during the second post-project year. HitsPerDMA showed stronger growth in DMAs with a REVISE site than in those without. MetricHits were higher in DMAs with a larger population, more REVISE sites, and more children's hospitals, and were higher in the summer than in the winter months. Conclusions: Both temporal and spatial increases in mECDS app usage were found, consistent with a contagion method of spread.
Other confounding factors may also play a role in app reach, adoption, and sustainability. Further research is needed to determine the factors driving passive diffusion and its impact on clinical practice and patient outcomes.

Clinical decision support (CDS) tools increase adherence to clinical practice guidelines, but they are resource-intensive to build and difficult to scale up and sustain. Releasing CDS tools in the form of a mobile app (mECDS) has the potential to aid in their sustainability and scalability. This study assessed the use of an mECDS tool during and after its corresponding project ended. The mECDS tool continued to achieve growth in use over an expanding set of U.S. regions in the first two years after project completion. Growth in usage was stronger in areas with more project sites, more children's hospitals, and larger populations, and in the summer.


Introduction
Clinical practice guidelines (CPGs) are a foundational tool in most quality improvement (QI) projects. Electronic clinical decision support (ECDS) tools are often created and implemented within QI projects to increase adherence with these CPGs. ECDS tool usage during these projects correlates with increased adherence. Still, challenges remain in the sustainability and scalability of ECDS tools beyond the location and intervention period of the initial project (1). In prior studies evaluating post-QI use, ECDS tool utilization either dropped off rapidly or plateaued after the project was completed (2,3). Additionally, the development of ECDS is often labor-intensive, and the resulting tool faces considerable barriers to reaching users beyond the original QI project sites, given the variety of system standards in place (4).
Mobile device-based electronic clinical decision support (mECDS) tools are one potential method of overcoming such barriers to scalability and sustainability (5). However, it is unclear what pattern the spread of uptake and use of such a tool might follow when it is freely available. Only two prior studies have evaluated usage trends of clinical decision support deployed in this manner (6,7). O'Reilly-Shah et al. looked at when, where, and how a free anesthesia calculator app was used in the first 10 months post release. They found that app use peaked in the early morning hours, occurred more often in low-income countries, and primarily involved content regarding pediatric patients (6). Wright et al. examined the first 6 months of usage patterns of an app containing electronic versions of Médecins Sans Frontières' common condition guidelines, and found use occurred in 150 countries, with rapid growth in the user base in the first 2 months and sustained levels of use thereafter (7). The particular content accessed varied significantly between countries. Neither study evaluated use or reach beyond the first year, nor geographic variation in use at levels smaller than the country.
In November 2016, our research team deployed our first mECDS tool in conjunction with a one-year national multi-site QI project: "Reducing Excessive Variability in the Infant Sepsis Evaluation (REVISE)" (8). The REVISE project focused on helping clinicians at hospitals better align care with the latest febrile infant care recommendations. During the course of the project, we found cumulative mECDS tool use to be associated with increased odds of appropriate admission, increased odds of appropriate length of stay, and reduced odds of chest x-ray for well-appearing febrile infants (9). The mECDS tool was promoted to the sites via the project's introduction webinar, but the app it was published within was freely available for download from both Google Play and Apple's App Store. This availability created the potential for use to occur well beyond both the site locations and the time period of the active REVISE intervention. Given the substantial time and financial investment that goes into developing ECDS, long-term data on reach and adoption are critical for determining whether these high costs are offset by sustained gains in use. A better understanding of the geographic and temporal usage patterns of this mECDS tool will help determine the potential sustainability and scalability of such a tool deployed freely and without continued promotion. For this study, we compared trends in tool use across three time periods (during the REVISE intervention, 1-year post intervention, and 2-years post intervention) within designated market areas (DMAs) that contained a REVISE site and those that did not. The aim of this study was to determine, during and after the REVISE project, whether mECDS tool use 1) increased over time, 2) spread to other DMAs, 3) differed between DMAs with and without REVISE sites, and 4) was associated with other confounding factors at the DMA and temporal level.

Data And Methods
Reducing Excessive Variability in the Infant Sepsis Evaluation (REVISE)
REVISE was a national QI initiative sponsored by the American Academy of Pediatrics Value in Inpatient Pediatrics (VIP) Network that was designed to improve and standardize care for infants 7-60 days of age presenting to emergency departments and/or inpatient units with fever of an unknown source. More than 125 hospitals in community and university-based settings across the U.S. enrolled in the project, with each hospital receiving a "change package" that included evidence describing best practices, QI education, clinical tools such as order sets and academic detailing materials, a centralized data collection tool, dedicated coaching sessions, and access to the mECDS tool. The intervention period for REVISE ran from December 2016 until November 2017. The app was released for free download from both the Google Play and Apple App Store in November 2016 and promoted to all sites in the kickoff webinar presented in December 2016. Full details of the REVISE project, the development of the corresponding mECDS tool, and its impact on project metrics can be found in our previous publications (5,8,9).

App Analytics
App usage data were collected from December 2016 to November 2019 using Google Analytics (10). Google Analytics records app usage at the individual device level in terms of the number of devices on which the app has been opened (users), the number of times the app has been opened or used (sessions), how long the app was used in each session (duration), and the number of times each button or screen within the app was touched (events). Google Analytics also captures the latitude and longitude of the cell tower or WiFi hotspot that a device is receiving data from or is registered to. These location data, tagged to a specific device, were completely de-identified and aggregated at various geographic levels (from city to country). Thus, we performed a spatial analysis to capture the incidence of app usage across geographic regions at the time it occurred. For the purposes of this analysis, we chose to analyze the aggregated data at the designated market area (DMA) level because DMA coverage is most closely aligned with the catchment areas of hospitals. We derived three weekly app usage statistics: a measure of geographic reach entitled Unique DMAs (the total number of DMAs with app use within the week), a measure of usage activity volume entitled MetricHits (the total number of project metric-related screen views occurring within the week), and a usage density measure entitled HitsPerDMA (MetricHits divided by Unique DMAs).
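The derivation of the three weekly measures from event-level analytics data can be illustrated with a small sketch. The study's analyses were conducted in R; the pandas sketch below is ours for illustration only, and the column names and toy event records are hypothetical, not the study's export schema.

```python
import pandas as pd

# Hypothetical de-identified export: one row per metric-related screen view,
# tagged with a timestamp and the DMA it was aggregated to.
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2016-12-05", "2016-12-06", "2016-12-14", "2016-12-15", "2016-12-15",
    ]),
    "dma": ["Boston", "Boston", "Denver", "Boston", "Denver"],
})

# Bin events into study weeks counted from the study start date.
start = pd.Timestamp("2016-12-01")
events["week"] = (events["timestamp"] - start).dt.days // 7

# Aggregate to the three weekly measures used in the study.
weekly = events.groupby("week").agg(
    metric_hits=("dma", "size"),      # MetricHits: total metric-related views
    unique_dmas=("dma", "nunique"),   # Unique DMAs: distinct areas with any use
)
weekly["hits_per_dma"] = weekly["metric_hits"] / weekly["unique_dmas"]  # density
print(weekly)
```

The density measure separates growth in breadth (new DMAs appearing) from growth in depth (more hits within DMAs already using the tool), which is what allows the stratified trend comparisons later in the paper.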

Covariate Derivation
Since the overarching target of REVISE was to increase febrile infant guideline adherence in inpatient and ED settings with less experience in pediatric medicine, the relative concentration of children's hospitals (i.e., the hospital type least likely to need the app) was derived for each DMA. The geolocations (latitudes and longitudes) of all U.S. children's hospitals were obtained from the 2018 American Hospital Association (AHA) Survey (11). Children's hospitals were aggregated as counts by DMA using an intersect spatial join with DMA boundaries obtained from the U.S. Census Bureau's repository of geographic shapefiles (12). Given the high potential that use would cluster in areas with a high proportion of REVISE sites, a count of REVISE sites per DMA was created using the same method. To account for the seasonal nature of febrile infant epidemiology, season of use was derived by binning the month of use into the Northern Meteorological Seasons: Winter (December-February), Spring (March-May), Summer (June-August), and Fall (September-November) (13). Finally, to account for population density, DMA population was also obtained from the U.S. Census Bureau's DMA shapefile.
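The season covariate is a simple month-to-season mapping. A minimal sketch of this binning (the function name is ours, not from the study's code):

```python
def meteorological_season(month: int) -> str:
    """Map a calendar month (1-12) to its Northern meteorological season:
    Winter (Dec-Feb), Spring (Mar-May), Summer (Jun-Aug), Fall (Sep-Nov)."""
    if month in (12, 1, 2):
        return "Winter"
    if month in (3, 4, 5):
        return "Spring"
    if month in (6, 7, 8):
        return "Summer"
    return "Fall"

# December 2016, the study start, falls in Winter; the August peak in
# febrile infant presentations falls in Summer.
print(meteorological_season(12), meteorological_season(8))
```

Meteorological (rather than astronomical) seasons align with whole calendar months, so weekly usage data can be assigned a season without splitting months across bins.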

Statistical Analyses
In the univariate analysis, the three app usage measures were compared separately by study period (during REVISE, 1-year post, and 2-years post). To determine crude differences in weekly usage, median values were compared across study periods using Kruskal-Wallis tests. To determine how usage diffused over time, separate linear regression models were fitted in which the app usage measures (Unique DMAs, MetricHits, and HitsPerDMA) were the dependent variables and study week (number of weeks since the study starting date of 12/1/2016) was the continuous explanatory variable for each study period. We analyzed the temporal data on a weekly basis because weekly data have less noise and better visual presentation than daily data, and because there was no significant variation pattern within a week. We further tested study week × time period interactions for each measure to assess heterogeneity in the slopes of the weekly trends across the three time periods. Linear trends were reported because we did not identify significant quadratic or cubic trends in any models. To delineate the potential for uptake and sustainability in the absence of project participation, we divided DMAs into two strata by REVISE participation. Stratified analyses were conducted separately among DMAs that contained at least one REVISE site and among those that did not. Due to correlation among the covariates (multicollinearity), we assessed the impact of each confounding factor (seasonality, number of REVISE sites in the DMA, number of children's hospitals in the DMA, and DMA population) separately on temporal trends in MetricHits among all DMAs. Data management, aggregation, visualization, and analyses were conducted using R version 3.6.4 (R Core Team, Vienna, Austria).
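The per-period trend analysis amounts to fitting a separate linear trend within each study period and comparing the slopes (the study formally tested slope heterogeneity via the week × period interaction). A minimal sketch of the per-period slope estimation, using simulated data since the study's data are not reproduced here; the study itself used R, and the Python code below is ours for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated weekly MetricHits over three 52-week periods; the linear trend
# and noise level are illustrative, not the study's actual values.
weeks = np.arange(156)
period = np.where(weeks < 52, "REVISE",
         np.where(weeks < 104, "1y-post", "2y-post"))
hits = 100 + 1.5 * weeks + rng.normal(0, 10, size=len(weeks))

df = pd.DataFrame({"week": weeks, "period": period, "hits": hits})

# Fit a separate linear trend per study period; the slope estimates the
# weekly rate of change, mirroring the per-period regressions in the study.
for name, grp in df.groupby("period", sort=False):
    slope, intercept = np.polyfit(grp["week"], grp["hits"], deg=1)
    print(f"{name}: {slope:+.2f} hits/week")
```

Fitting within periods rather than one pooled line allows each period's slope to differ, which is exactly what the interaction test evaluates: whether those slopes are statistically distinguishable.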

Results
Over the course of three years, usage of the mECDS tool occurred in 93% of U.S. DMAs (196/210) and accumulated 263,514 MetricHits. Total MetricHits in each DMA ranged from 0 to 41,888, with a median of 258. When use in all DMAs was compared across the study periods, both median weekly MetricHits and unique DMAs with app use were found to be higher in each subsequent study period (p < 0.0001, Table 1). However, weekly HitsPerDMA was found to be significantly lower in the years after REVISE was completed than during the active QI intervention (p = 0.003). When the analysis was restricted to DMAs that contained a REVISE site, both MetricHits and unique DMAs were still found to increase over the three study periods (p < 0.0001) while HitsPerDMA remained relatively stable (p > 0.05). In DMAs without a REVISE site, both MetricHits and unique DMAs were again found to increase over the three study periods (p < 0.001) while median weekly HitsPerDMA was found to be significantly lower in both years after the completion of REVISE (p < 0.001). DMAs with a REVISE site accounted for the majority of MetricHits in all study periods (81% during REVISE, 75% 1-year post, and 73% 2-years post), but only two-thirds of the DMAs during REVISE (65/156) and only about half in the 2 years after (65/183 and 65/188, respectively). When the analysis was restricted to DMAs with a REVISE site (Fig. 1-2, row 2), the same trends in usage were found as in the overall analysis, except that the downward trend of HitsPerDMA in the REVISE period no longer reached significance (weekly change: -0.07, 95% CI [-0.17 to 0.04], p = 0.20). Among DMAs without a REVISE site (Fig. 1-2, row 3), MetricHits followed the same trends as in the model including all DMAs, but unique DMAs were only found to increase by about one every four weeks in the REVISE period (weekly change: 0.26, 95% CI [0.16 to 0.36], p < 0.01) and were not found to increase at a significantly higher rate thereafter.
The analysis of confounding factors showed that MetricHits were higher in the summer than in the winter and increased with the number of REVISE sites, the number of children's hospitals, and the population of the DMA (Table 2). Independent adjustment for each covariate reduced the effect estimates for the weekly rate of change in all study periods. The downward trend found during the active intervention remained significant with each adjustment, but the upward trend for 1-year post was no longer significant after adjusting for either seasonality or the number of children's hospitals. The upward trend found for 2-years post was not significant after adjustment for any of the covariates.

Discussion
Development of electronic clinical decision support tools usually requires substantial resources, including thousands of hours invested in research, software development, and physician engagement. However, the intervention period is often limited. Deploying these tools via a free mobile app allows the impact of such tools to be scaled beyond the intervention time period and location of the associated project. This study extended our previous research by measuring usage of the mECDS tool 1 and 2 years after the intervention period. Our findings highlight several novel insights into the possible sustainability and scalability of the mECDS tool.
First, we found strong growth in the number of unique DMAs where the mECDS tool was used and in total metric-related screen views (MetricHits) during the first year after the project. Overall usage remained relatively stable during the second post-project year. Our findings demonstrate that usage of the mECDS tool can be sustained even with no continued interventions. This is in contrast to findings from recent studies of non-mobile ECDS tool use after project completion. McCullagh et al. found that an electronic health record-based rule implemented across primary care clinics in New York was used in over 80% of applicable encounters, but this usage fell to 45% by the end of the first year (2). Patel et al. found that the improvement in screening rates for cardiovascular risk factors persisted at the same level, but did not increase, after their tool-based intervention trial concluded (3). Access to the tool was expanded after the trial, and self-report assessments found consistent usage levels. Only one other study, to our knowledge, has assessed usage of mECDS over time. An assessment of a pediatric anesthesia tool found that usage decreased over the course of the 1-year rotation of the cohort of students studied (14). However, students are a more unstable population, as they routinely change practice locations, which may have led to the drop in use noted in that study.
Second, we further stratified our analyses by DMAs with a REVISE site and those without. The results show that gains in the number of unique DMAs and in usage (MetricHits) appear to be primarily driven by spread to new areas, given the trends in the density measure. The growth in unique DMAs without a REVISE site was over two times that of DMAs with a REVISE site during the year after the intervention. These results are promising given that there were no investments in dissemination after the study period, yet the mECDS tool continued to expand into other areas and gain usage, suggesting that the tool can be scaled with a low burden. Conversely, the density of use (HitsPerDMA) decreased over the course of the intervention year in DMAs without a REVISE site and then leveled off in the following years, whereas in DMAs with a REVISE site density was stable in the intervention year, increased 1-year post, and leveled off again 2-years post. This finding suggests that tools released in this manner have the ability to spread to new areas on their own momentum, but they may need more targeted marketing to become established in each area.
Third, the reduction in the weekly change in MetricHits after adjustment for DMA population and the number of children's hospitals in the DMA suggests that much of the increase in use 1 and 2 years post could be driven by uptake in children's hospitals, which are often located in larger DMAs. These institutions are relatively high-resource settings when it comes to pediatric guideline tools. This finding is contrary to the two studies looking at decision support use in aggregate over the first few months post release, which both found that usage was more likely to occur in lower-resource countries (6,7). This finding may have been driven by the over-representation of children's hospitals in VIP projects: 2% of all hospitals are children's hospitals, but children's hospitals comprised 13% of project-participating hospitals. It is also possible that clinicians working at children's hospitals were more likely to hear about the mECDS tool even when not participating in the project, given that they are more likely to attend the pediatric-focused conferences at which the overall results of REVISE and the impact of the mECDS tool were presented. Time trends in tool usage were also reduced when accounting for the number of REVISE sites and season of use. The reduction based on the number of REVISE sites in the DMA makes sense for reasons similar to the number of children's hospitals: clinicians at REVISE sites are more likely to hear of and begin using the tool given their hospital's participation in the project, and thus account for some of the growth in usage over time. Higher usage in the summer than in the winter fits with the epidemiology of febrile infants, as this presentation tends to peak in August (15). A slight upward trend at this time can be seen in each study year.

Strengths And Limitations
This study has several strengths. The method of deployment of this tool allowed for complete and continuous tracking of when and where it was used, well beyond the confines of its associated project. Deployment through a national QI project also allowed the tool to achieve sufficient reach and adoption to detect trends by both location and time. However, the method of deployment did hinder our ability to detect impact on outcomes beyond the original project timeline. During the project, sites submitted monthly case data via chart abstraction into a secure survey system, which allowed associations with usage to be derived by pairing site location to usage levels. This chart abstraction, unlike use of the tool, did not continue after project completion. Characteristics of tool users also could not be assessed, as the tool required no registration and this information was therefore unavailable.

Conclusion
ECDS deployed via a freely available mobile app has the potential to achieve a sustained impact beyond the original project period. Increasing growth rates, both temporally and geographically, beyond the original project are evidence of a social contagion method of spread (i.e., informal person-to-person recommendation or demonstration, in person or online) (16). Further research is needed to determine the factors driving passive diffusion, whether such reach and uptake levels are replicable, and whether this continued use has an impact on clinical practice and patient outcomes.