What is the Clinical Impact of an Analytic Tool for Predicting the Fall Risk in Inpatients on Nursing Sensitive Outcomes?

Objectives Patient falls remain a common cause of harm in acute-care hospitals worldwide. The purpose of this study was to explore whether an electronic analytic tool for predicting fall risk can improve clinical outcomes, including reducing inpatient falls in an acute-care setting. We performed a double-blind experimental study that included a nonequivalent control group in 12 medical-surgical nursing units at a public hospital between May 2017 and April 2019. The intervention was the provision of risk prediction information generated by an analytic tool using nursing data obtained from the hospital's electronic health record system. The primary outcome was the rate of falls; secondary outcomes included fall-related injuries, the daily frequencies of nursing activities provided to patients, and predefined process indicators. During the study there were 42,476 admissions, with 707 falls and 134 fall injuries. The fall rate differed significantly between the two groups (1.79 vs. 2.11, t = 2.13, P = 0.0383); in the interrupted time series analysis, a significant absolute reduction of 29.73% was noted in the intervention group (z = −2.06, P = 0.0391), versus a nonsignificant 16.58% reduction in the control group (z = −1.28, P = 0.2000). The injury rates were not significantly different (0.42 vs. 0.31, t = −1.54, P = 0.1306). Patient-level adjusted logistic regression showed a significant group effect on falls. Process outcomes on universal precautions were significantly better in the control units from baseline, while risk-targeted interventions increased slowly and more in the intervention group over time.

Introduction

Nursing interventions provided to patients differ little between at-risk and not-at-risk days. According to clinical observation, a limited number of universal precautions are repeated depending on the scores of a risk assessment tool, rather than risk-targeted interventions being provided.
Given the increased adoption of electronic health record (EHR) systems over the past decade, it may be possible to use nursing assessment data that are routinely captured through EHR systems to predict inpatient falls (13). In our previous work, a predictive analytic tool designed with a probabilistic technique performed better at discriminating at-risk and no-risk patients than did the existing fall risk assessment tools alone (14). As a form of health care predictive analytics, nursing predictive analytics can be defined as providing information on the likelihood of a future patient event through risk prediction models that automatically incorporate multiple predictor variables from one or more sources of nursing-related data, beyond what can be simply calculated by most nurses. Several studies have investigated variable selection, model development, and validation in nursing predictive analytics (5, 15–20). However, only a few studies have analyzed predictive models in real-world settings and explored their influence on, or relationships with, nursing sensitive outcomes. Several studies (15, 16, 18, 19) have explored failures to rescue, such as clinical deterioration alerts and the early detection of sepsis, and the prediction of adverse events such as pressure ulcers. These studies produced mixed results. One (15) concluded that simple laboratory and vital-sign criteria were insufficient for improving outcomes in sepsis. Other studies (17, 21) reported positive changes in outcomes. However, little is known about the clinical feasibility and value of, and nurses' responses to, nursing analytic approaches and tools.
The growing availability of EHR data offers great opportunities for the rapid expansion of health care predictive analytics applications (22). This study applied predictive analytics to inpatient falls and explored its relationships with patient and process outcomes in a real-world setting. We hypothesized that knowledge of fall events likely to occur within 24 hours, based on data routinely captured in EHRs, would enable nurses to conduct multifactorial assessments and provide risk-targeted interventions to at-risk patients.

Development of Inpatient Fall Risk Prediction Model
We reported the methods and results of the development of the risk prediction model in detail previously (14), but introduce it briefly here. To identify concepts of fall risk factors and prevention care, two international practice guidelines (9, 12) and two implementation guidelines (23, 24) on preventing inpatient falls were used. Two standard vocabularies, the Logical Observation Identifiers Names and Codes (25) and the International Classification for Nursing Practice® (25, 26), were used to represent the concepts in the prediction model, which was then represented as a probabilistic Bayesian network.
The model was tested using two study cohorts obtained from two hospitals with different EHR systems and nursing vocabularies. The model concepts were mapped to local data elements of each EHR system, and two implementation models were developed for a proof-of-concept approach, followed by cross-site validation. The EHR data included in the model were demographics, administrative information, medications, patient classifications, the fall-risk-assessment tool, and nursing processes including assessments and interventions. The two implementation models showed error rates of 11.7% and 4.87%, with c statistics of 0.96 and 0.99, respectively. The model performed 27% and 34% better than the existing Hendrich II (27) and STRATIFY (St. Thomas' Risk Assessment Tool in Falling Elderly Inpatients) (28) tools.

IN@SIGHT system
The validation-site model was implemented as the IN@SIGHT (Intelligent Nursing @ Safety Improvement Guide of Health Technology) system version 1.0 at a public 900-bed hospital located in the metropolitan area of Seoul (Republic of Korea) that had used the STRATIFY to assess fall risks for all inpatients. The analytic tool was integrated into the locally developed hospital EHR system, which had been in use for more than 10 years. The tool was deployed in six targeted nursing units on April 5, 2017, and 204 nurses received the patient-level prediction results daily. In the implementation process, the research team engaged with the chief of the nursing department, unit managers, unit champions, personnel of the department of medical informatics, and the patient safety committee. For 3 months before system deployment, three education sessions on the IN@SIGHT were provided to the intervention group, followed by peer-to-peer education by unit champions at each unit. The nursing department decided to replace the existing STRATIFY with the analytic tool in this quasi-experimental study. Accordingly, the original model was customized by replacing the six data elements of the STRATIFY with proxy data elements in the EHR. The adjusted model consisted of 40 nodes and 68 links and showed an error rate of 9.3% and a spherical payoff of 0.92, with a c statistic of 0.87. Related work processes were redefined and the existing fall-prevention documentation screen of the EHR was modified to facilitate data input (Fig. 1). The readjusted model showed an area under the receiver operating characteristic curve of 88.98%. The hospital decided to deliver the risk information in a dichotomized format, with at-risk and no-risk categories, at a cutoff point of 15%, preserving a high specificity of 89.4% and a sensitivity of 66.1%.
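The dichotomization step described above can be sketched as follows. This is a minimal illustration, not the IN@SIGHT implementation: it assumes predicted fall probabilities are available as an array, and the probability and outcome values shown are toy numbers rather than study data.

```python
import numpy as np

def dichotomize(probs: np.ndarray, cutoff: float = 0.15) -> np.ndarray:
    """Map predicted fall probabilities to at-risk (1) / no-risk (0)
    using a fixed cutoff, as in the hospital's 15% operating point."""
    return (probs >= cutoff).astype(int)

def sensitivity_specificity(pred: np.ndarray, truth: np.ndarray):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = np.sum((pred == 1) & (truth == 1))
    fn = np.sum((pred == 0) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Toy illustration (hypothetical values, not study data)
probs = np.array([0.02, 0.10, 0.18, 0.40, 0.90, 0.12])
truth = np.array([0, 0, 1, 1, 1, 0])
sens, spec = sensitivity_specificity(dichotomize(probs), truth)
```

In practice the cutoff trades sensitivity against specificity; the hospital's choice of 15% favored specificity (89.4%) over sensitivity (66.1%).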

Study Framework and Objectives
To guide the study, a conceptual study framework was developed from the nursing role effectiveness model (29) (Fig. 2). This study assumed that a precise and up-to-date prediction of fall events within 24 hours would affect rates of falls as well as process outcomes. The STRATIFY had classified about 40–50% of hospital days (HD) as at-risk, whereas the IN@SIGHT predicted about 20% of HD as at-risk days. The rate of falls at the site was around 2 per 1000 HD.
The purpose of this study was to explore the feasibility and value of, and nurses' responses to, the first version (version 1.0) of IN@SIGHT. The following specific research questions were addressed:
1. Does the predictive analytic tool affect patient outcomes, including rates of falls and fall injuries?
2. Does the predictive analytic tool affect nursing activities provided to patients?
3. How does the influence change over time?

Study Design and Setting
The study design was a double-blind experiment involving nonequivalent groups, conducted in 12 medical-surgical units of the hospital from May 1, 2017 to April 30, 2019 (Fig. 3). The pre-intervention period was set at 16 months before the introduction, the maximum retrospective time window available because of changes in nurse staffing under government policy. The participating units were paired based on previous rates of falls and patient characteristics as closely as possible. However, because of the limited number of units, neurology, geriatric, and surgery units were imbalanced. Patients who met the following criteria were eligible: at least 18 years of age, and admitted for more than 1 day to departments other than pediatrics, psychiatry, obstetrics, and emergency care. During the study we surveyed nurses in the intervention group every 6 months to monitor nurse-system interactions, such as their perceptions, knowledge, and attitudes about fall prevention tasks, user satisfaction, and experience, which are reported in detail elsewhere (30).
The hospital's ethical committee waived the need to obtain consent from individual patients and nurses, which enabled all patients and nurses in the participating units to be included as study participants (IRB No. NHIMC 2016-08-005).

Comparator and Outcome Measures
Inpatients in the control group received the usual care using the STRATIFY. The modified nursing record screen for fall prevention in the EHR system was introduced to both groups. Hospital policies dictated that the same fall-prevention practices were recommended to both the control and intervention groups.
The primary outcome was the rate of falls per 1000 HD. We adopted the falls definition of the National Database of Nursing Quality Indicators outcome metrics of the American Nurses Association (31). A fall injury was defined as an injury of minor or greater severity, as determined by the hospital within 24 hours of the fall.
The secondary outcomes were the rates of fall injuries and the process outcomes. The process outcomes were calculated from the nursing activities provided to patients to prevent falls. These activities were categorized into 17 care components based on international guidelines (32). For the process outcomes, the nursing indicators previously identified as eMeasurements of inpatient falls were used (33). These process indicators were used to determine whether nurse behaviors can independently affect patient outcomes. Each process indicator measures the proportion of at-risk patients who are given the targeted interventions. For example, all hospitalized patients are expected to be assessed for fall-risk factors within 24 hours of admission, and at-risk patients are expected to receive risk-targeted interventions within 24 hours of risk identification.
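A process indicator of the kind described above could be computed as in the following minimal sketch. The record structure (pairs of a risk-identification timestamp and an intervention timestamp, or None if no intervention was given) is a hypothetical simplification, not the study's actual data model.

```python
from datetime import datetime, timedelta

def process_indicator(records) -> float:
    """Proportion of at-risk patients who received a risk-targeted
    intervention within 24 hours of risk identification.
    `records`: list of (risk_identified_at, intervention_at_or_None)."""
    at_risk = len(records)
    timely = sum(
        1 for identified, intervened in records
        if intervened is not None
        and intervened - identified <= timedelta(hours=24)
    )
    return timely / at_risk if at_risk else 0.0

# Hypothetical example: 3 of 4 at-risk patients treated within 24 h
t0 = datetime(2017, 5, 1, 8, 0)
records = [
    (t0, t0 + timedelta(hours=2)),    # timely
    (t0, t0 + timedelta(hours=23)),   # timely
    (t0, t0 + timedelta(hours=30)),   # late
    (t0, t0 + timedelta(hours=10)),   # timely
]
print(process_indicator(records))  # → 0.75
```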

Data Collection
For the pre-intervention period, patient outcome data from the 16 months before the experiment were collected from the hospital's quality-assurance department to provide a baseline reference for comparison. However, the fall-injury rates before the experiment were not comparable because the hospital's criteria at that time counted only severe injuries, as sentinel events. For the process indicators, one month of data before the experiment was collected as a baseline. During the study period, data on patient demographics and medications, nursing activities, the STRATIFY, and administrative information were collected from the EHR system, and falls data were collected from the hospital's quality-assurance department. To monitor and minimize the underreporting observed in our previous work (33, 34), the nursing department provided education to all units to remind them of the principles of reporting and documentation, along with monthly chart reviews and feedback.

Sample Size and Statistical Analysis
The study hypothesis was that the fall rate would be reduced by 15% during the 24-month implementation of the prediction program. We estimated the required sample size conservatively based on previous research (14). The estimation presumed a fall rate in the control group of 2.0 per 1000 HD, an average of 15,000 HD per unit over 12 months, and an average of 1,700 admissions. The required number of falls in the control group was calculated assuming a Poisson distribution as D0 = z²(θ + 1) / [θ(logₑθ)²] (7). We applied a rate ratio (θ) of 0.85 and z = 2.0. Detecting a rate ratio of 0.85 between groups at the 5% significance level with a statistical power of 80% required 610 falls, which corresponded to a 24-month period for the 12 units.
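The sample-size arithmetic above can be reproduced directly; this sketch simply re-applies the stated formula with θ = 0.85 and z = 2.0, and adds the intervention-group falls (θ × D0) to arrive at the total.

```python
import math

def required_falls(theta: float, z: float) -> float:
    """Required number of falls in the control group, assuming a
    Poisson distribution: D0 = z^2 (theta + 1) / [theta (ln theta)^2]."""
    return z**2 * (theta + 1) / (theta * math.log(theta) ** 2)

# Study assumptions: rate ratio 0.85, z = 2.0 (5% significance, 80% power)
d0 = required_falls(theta=0.85, z=2.0)
total = d0 * (1 + 0.85)  # control falls plus intervention falls
print(round(d0), round(total))  # ≈ 330 control falls, ≈ 610 falls overall
```

The total of about 610 falls matches the figure quoted in the text for the 24-month, 12-unit study.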
The participant characteristics were compared using chi-square tests for categorical variables and t-tests for continuous variables. T-tests were used to compare the rates of falls and fall injuries between the two groups, along with a longitudinal analysis of cluster-level data: generalized least squares (GLS) estimation to examine the association between group and outcomes, and fixed-effect regression using the least-squares dummy-variable method. To compare the rates of falls between the pre- and post-intervention periods by group, a cluster-level interrupted time series analysis with calendar month as the unit of time was performed.
This quasi-experimental design accounts for temporal trends while examining the association between the introduction of the analytic tool and the outcomes; these associations were analyzed using segmented regression analysis (35). We fit negative binomial models that included three variables to measure the relationship between time and the rates of falls: (i) a continuous variable representing the underlying temporal trend; (ii) a dummy variable for dates after May 1st, 2017, to determine the change in the rate of falls related to the intervention; and (iii) a continuous time variable beginning on that date, representing the change in slope. The coefficients of the second and third variables indicated whether the intervention had an immediate or an ongoing effect on the rate of falls, respectively. For the patient-level analysis, adjusted logistic regression was used. The process indicators were also compared between groups using chi-square tests.
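The segmented-regression design with the three time variables can be sketched as follows. The study fit negative binomial models; for a self-contained illustration this sketch instead fits ordinary least squares to noiseless hypothetical monthly rates, keeping the same design matrix (underlying trend, post-intervention level change, and post-intervention slope change).

```python
import numpy as np

def segmented_design(n_pre: int, n_post: int) -> np.ndarray:
    """Design matrix with columns: intercept, (i) underlying time trend,
    (ii) post-intervention level-change dummy, and (iii) time since the
    intervention (slope change)."""
    n = n_pre + n_post
    time = np.arange(1, n + 1, dtype=float)
    post = (time > n_pre).astype(float)
    time_after = np.maximum(time - n_pre, 0.0)
    return np.column_stack([np.ones(n), time, post, time_after])

# Hypothetical monthly fall rates: 16 pre- and 24 post-intervention months,
# a flat pre-trend, an immediate level drop of 0.5, then a further slope
# change of -0.02 per month (illustrative values, not study data).
n_pre, n_post = 16, 24
X = segmented_design(n_pre, n_post)
true_beta = np.array([2.0, 0.0, -0.5, -0.02])
rates = X @ true_beta
beta_hat, *_ = np.linalg.lstsq(X, rates, rcond=None)
# beta_hat[2] estimates the immediate effect, beta_hat[3] the ongoing effect
```

With real monthly fall counts, the same design matrix would be passed to a negative binomial regression (e.g., with hospital days as an exposure offset) rather than OLS.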

Patient Characteristics
Patient characteristics differed significantly between the two groups (Table 1). The control units were characterized by older patients, longer stays, fewer female patients, more falls, and more patients with a fall history at admission, comorbidities, and surgical procedures. Regarding the primary diagnosis, about half of the patients in the intervention group had respiratory or digestive disease, or some form of cancer, while control patients had a greater diversity of diagnoses. These significant patient characteristics were used to control for patient-level differences in the group effect on rates of falls. In the patient-level analysis adjusted for patient characteristics, the overall group effect was significant (odds ratio = 0.52, 95% CI = 0.42 to 0.65, P < 0.0001). The group effect was present in the time periods other than the first and last 6 months (Table 3 presents adjusted odds ratios with 95% confidence intervals). When we examined the changes in the frequency of nursing activities, assessments in the intervention group increased from 1.05 at baseline to 49.92 six months later (Fig. 4), compared with a slight increase from 9.85 to 12.95 over the same period in the control group. However, the trends reversed during the last period, with the values in the two groups converging. The interventions provided to patients differed markedly between the two groups at baseline and increased continuously in both the intervention and control groups.

Discussion
The introduction of a patient-level electronic analytic tool reduced the rate of falls, although the margin was small and there were notable differences in patient characteristics between the two groups. The analytic tool approach was feasible and acceptable to nurses, and contributed to the completion of fall-risk assessments and to the improvement of process outcomes for the risk-targeted interventions recommended by clinical guidelines. It had no effect on the rate of fall-related injuries compared with usual care. Considering the limitations of a single-site approach, these results imply that the electronic fall-predictive analytic tool may help nurses make informed clinical decisions and manage care time efficiently by prioritizing for whom, and with what interventions, fall prevention should be planned and delivered.
Several factors need to be considered given the mixed results for the outcomes of this study. The first is the underreporting of falls, which is considered common for adverse events and was observed 3 years earlier in several units at the site (33, 34). This may have affected the results of the interrupted time series analysis by underestimating the rates of falls in the pre-intervention period, thereby diluting the intervention effects. During the study period, the underreporting rates ranged from 15% to 20% in both groups. The second factor is the introduction of a new documentation screen in the EHR system to both groups. This new screen was a list of care-plan interventions organized by risk factors, and it might also have affected fall prevention practices in the control group. Even though it was a static screen, not tailored to the patient level, it may have served as a reminder of guidelines to nurses and increased appropriate preventive interventions, further diluting the study's intervention effects.
The third factor is two unexpected occurrences at the hospital. One month after study initiation, one nursing unit in each group moved to a new location and nurse staffing was reorganized because of physical reconstruction of the hospital buildings. The fall rate increased markedly for several months in the relocated intervention unit compared with the other five units in its group, whereas the relocated control unit showed only slight changes compared with the other units. The relocations were accompanied by changes in staff nurses and in the patients' medical diagnoses, which might have increased the burden on nurses and induced the sudden increase in the fall rate at that unit. The other occurrence was the mandatory routinization of hourly nursing rounds for all inpatients, required by the hospital's safety committee during the last 6-month period, which may account for the sudden increase in nursing assessments in the control group.
These factors imply that there was substantial contamination and confounding during the experiment, which may have led to some of the differences in fall rates between the two groups. The low prevalence of inpatient falls (about 2.0 per 1000 HD) and the single-site approach required a long study period, during which we were unable to control for some contamination and confounding. One previous cluster randomized controlled trial (RCT) (36) showed a significant intervention effect over a 6-month intervention period and reported fall rates of 3.15 and 4.18 in the intervention and control units, respectively.
Another cluster RCT, conducted over a 12-month period in Australia, reported a fall prevalence of 3.05, and a recent US intervention study (37) reported a prevalence of 2.80 over a 6-month intervention period. Compared with these studies, our study site had the lowest prevalence and the longest intervention period.
Despite the contamination and confounding, some improvement in the primary outcome was observed. This result suggests that a multi-site RCT is needed in the future to provide more rigorous evidence. In addition, if the intervention were combined with dynamic decision support features in an EHR system, such as guidance on interventions tailored to patient-specific risk factors, additional improvement could be expected (37).
As for the nurses' behaviors on the intervention units, we saw rapid improvement in nursing assessments in the intervention group in the first period of the study, in contrast with limited improvement in the control group despite similar levels at baseline. The participating nurses commented that they paid additional attention to assessment because of concerns that the IN@SIGHT might miscalculate the probability of falls owing to a lack of data (30). This finding can be understood through the nonadoption, abandonment, scale-up, spread, and sustainability (NASSS) framework (38), which illuminates the success or failure of technology-supported health or social care programs. In the framework's adopter-system domain, staff such as nurses often do not engage with a program or use the technology because of concerns about threats to their scope of practice or to the safety and welfare of the patient, which leads them first to gather additional information on risks. A qualitative exploration study (20) reported similar findings when investigating how nurses perceive predictive information and how they act on it: nurses attempted to gather more information from additional sources during uncertainty and wanted to review the more detailed information underlying the predictions. Considering that predictive information is relatively new to nurses, these changes are positive and desirable in terms of both the quality of care and the quality of data in EHRs.
With regard to the process indicators, the results showed that the IN@SIGHT intervention improved the completeness of risk assessment and injury-risk-factor assessment. However, the control-unit nurses provided universal fall precautions more consistently than the nurses on the intervention units. In fall prevention, universal fall precautions should be provided to all inpatients to prevent accidental falls. Considering that accidental falls are responsible for about 14% of inpatient falls (39), the control group was more likely to show better patient outcomes. The multifactorial interventions of education, communication, and environment were also delivered more frequently to at-risk patients in the control group. These findings imply that the control group's practice was better than the intervention group's from baseline, and that the analytic tool did not affect universal fall precaution care. In contrast, for the risk-targeted interventions, the intervention group showed slow but greater improvement than the control group, and the difference became significant in the last time period. This finding is meaningful considering that risk-targeted interventions prevent anticipated physiologic falls, which are responsible for over 70% of inpatient falls (34, 39).
The design of this study had several limitations. First, implementation at a single site over a long study period introduced considerable confounding and limited the generalizability of the findings. To control for hospital-level structural variables, we selected control units with similar patient characteristics, but some important differences remained. We compared the underlying trends using an interrupted time series analysis and adjusted for patient-level risks, but patient differences between the control and intervention units may still have affected the findings. Second, we were unable to compare changes in fall-injury rate trends between the pre- and post-intervention periods. Third, other quality initiatives may have been implemented at the hospital during the study period that affected both nurse throughput and the appropriate use of care resources.
Despite these limitations, this study has a strength worth considering for future outcome studies. We traced and monitored changes in the relevant nursing activities provided to patients using process measures over time. With these measures, we could observe slow but explicit qualitative, rather than quantitative, changes in the nursing interventions of the intervention group, indicating that the care processes underlying the outcomes had improved and that improvements in patient outcomes should follow (40). The continuous measurement and analysis of process outcomes were informative for inferring the effect of the intervention on patient outcomes, as well as for interpreting the effects of confounding, which have rarely been taken into account in existing studies (7, 36, 41). This study's approach and methods suggest which process outcomes could be measured, and how they could be reported and interpreted.

Conclusions
Our new electronic analytic tool for predicting inpatient fall risk demonstrated the potential to reduce the fall rate among hospitalized patients and to produce positive, qualitative changes in process outcomes. Nurses were amenable to using the tool, and hospital managers used it to make informed decisions aimed at preventing falls. As an example of a nursing predictive analytics application, defined as the use of electronic algorithms that forecast patient events in near-real time at the point of care to improve outcomes and reduce costs, this study showed how available EHR data can be used to improve nursing sensitive outcomes. The efficacy of the tool could be maximized by combining it with efficient, tailored interventions for preventing falls and by continuously updating it in response to changes in user behaviors.