Machine Learning-based Derivation and External Validation of a Tool to Predict Death and Development of Organ Failure in Hospitalized Patients with COVID-19

Background: COVID-19 mortality risk stratification tools could improve care, inform accurate and rapid triage decisions, and guide family discussions regarding goals of care. Only a minority of COVID-19 prognostic tools have been tested in external cohorts. Our objective was to compare machine learning algorithms and develop a tool for predicting subsequent clinical outcomes in COVID-19.

Methods: We conducted a retrospective cohort study of hospitalized patients with COVID-19 from March 2020 to March 2021. 712 consecutive patients from the University of Washington (UW) and 345 patients from Tongji Hospital in China were included. We applied three machine learning algorithms to clinical and laboratory data collected within the initial 24 hours of hospital admission to determine the risk of in-hospital mortality, transfer to the intensive care unit (ICU), shock requiring vasopressors, and receipt of renal replacement therapy (RRT). Mortality risk models were derived and internally validated in the UW dataset and externally validated in the Tongji Hospital dataset. The risk models for ICU transfer, shock, and RRT were derived and internally validated in the UW dataset.

Results: In the UW dataset, 122 patients (17%) died during hospitalization, and the mean time to in-hospital death was 15.7 +/- 21.5 days (mean +/- SD). Elastic net logistic regression yielded a C-statistic for in-hospital mortality of 0.72 (95% CI, 0.64 to 0.81) in the internal validation set and 0.85 (95% CI, 0.81 to 0.89) in the external validation set. Age, platelet count, and white blood cell count were the most important predictors of mortality. In the subgroup of patients > 50 years of age, the mortality prediction model continued to perform well, with a C-statistic of 0.82 (95% CI, 0.76 to 0.87). Prediction models also performed well for shock and RRT in the UW dataset but had lower accuracy for ICU transfer.
Conclusions: We trained, internally validated, and externally validated a prediction model that uses data collected within 24 hours of hospital admission to predict in-hospital mortality, on average two weeks before death. We also developed models that predict RRT and shock with high accuracy. These models could be used to improve triage decisions and resource allocation and to support clinical trial enrichment.


Introduction
The ongoing COVID-19 pandemic, caused by human infection with SARS-CoV-2, has been a major cause of mortality worldwide [1]. A robust public health and biomedical response to a pandemic is contingent on timely and accurate information, including rapid diagnosis and assessment of patients at risk for severe disease [2]. A clinical model, incorporating recognized risk factors and clinical features, that could effectively identify individuals at risk for severe disease and adverse clinical outcomes could greatly assist with rational triage and resource allocation [3]. The Sequential Organ Failure Assessment (SOFA) score has been widely used to assist with triage of patients with COVID-19. However, the accuracy of SOFA for predicting mortality in COVID-19 is poor (AUC 0.59; 95% CI, 0.55 to 0.63), possibly because SOFA was developed in patients with other forms of sepsis [4]. While multiple papers have focused on the development of prognostic models to predict mortality risk using demographic and clinical data, these papers have had limited validation in external patient cohorts [5][6][7][8][9]. For example, one prediction model that used three blood biomarkers initially reported 90% accuracy for predicting mortality; when this model was tested in an external cohort, however, reported accuracy declined to only 40-50% [10,11]. Previous COVID-19 prediction models have also been limited in reporting how features were selected, the timing of variable and outcome collection, and the calibration performance of the model [5,6].
To date, COVID-19 prediction models have largely focused on mortality [5,12,13], rather than risk for specific organ dysfunction, such as hypotension requiring vasopressors (shock), renal failure requiring renal replacement therapy (RRT), or hypoxemic respiratory failure requiring invasive mechanical ventilation. An accurate means to predict risk for specific organ injury in severe COVID-19 would greatly assist clinical decision-making. Studies have attempted to assess such risks by grouping several outcomes of interest together and building a predictive model [13][14][15][16]. Despite the success of this kind of model, grouping the outcomes together is less useful for resource allocation and triage, as patients will require different equipment and staffing expertise depending on their disease course and complications [3,17]. To address this concern, we created separate models to predict risk of in-hospital mortality, ICU transfer, shock, and renal replacement therapy (RRT) based on demographic and clinical information collected on the first day of hospital admission. We then used an open source COVID-19 dataset to validate our mortality prediction model. Additional outcomes, such as ICU transfer, shock, and need for RRT, were not available in the external validation set.

Study design and patient population
The University of Washington (UW) dataset includes demographic and clinical data from patients positive for COVID-19 who were admitted to two UW hospitals (Montlake and Harborview campuses) between March 2020 and March 2021.
The Tongji Hospital COVID-19 dataset has been previously reported [6]. In brief, patients in the Tongji dataset were enrolled from January 10 to February 18, 2020, and formed the external validation cohort for the mortality model. In both the UW and Tongji datasets, mortality prediction models were developed using clinical data collected on the first day of hospital admission.

Ethics approval and consent to participate
The University of Washington institutional review board (IRB) approved the study protocol (STUDY10159). All clinical investigations were conducted according to the principles expressed in the Declaration of Helsinki. Written informed consent was waived by the IRB due to the retrospective nature of our study of routine clinical data.

Outcomes
The primary outcome was in-hospital mortality. We developed and internally validated a prediction model for in-hospital mortality and externally validated the model in the Tongji dataset. Secondary outcomes were ICU transfer, shock, and receipt of RRT. Because these secondary outcomes were missing from the Tongji dataset, we developed and cross-validated prediction models for them using the UW dataset.

Feature Selection
The following steps were taken for feature selection. First, features were dropped if >10% of their values were missing. Second, near-zero variance features were removed, as these features almost exclusively had one unique value. Third, pairwise correlations between all features were calculated; if two features had a correlation larger than 0.8, the feature with the larger mean absolute correlation was dropped. Fourth, missing values were replaced by the mode if the variable was categorical or by the median otherwise. Finally, all continuous variables were standardized.
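The authors implemented their pipeline in R; as a rough illustration only, the five steps above could be sketched in Python with pandas (the function name, argument names, and tie-breaking details are ours, not taken from the paper):

```python
import numpy as np
import pandas as pd

def select_features(df, miss_thresh=0.10, corr_thresh=0.8):
    """Apply the five feature-selection steps described in the text."""
    df = df.copy()
    # 1. Drop features with >10% missing values.
    df = df.loc[:, df.isna().mean() <= miss_thresh]
    # 2. Drop near-zero-variance features (essentially one unique value).
    df = df.loc[:, [c for c in df.columns if df[c].nunique(dropna=True) > 1]]
    # 3. For any pair with |r| > 0.8, drop the feature with the larger
    #    mean absolute correlation.
    corr = df.select_dtypes(include=np.number).corr().abs()
    to_drop = set()
    for i, a in enumerate(corr.columns):
        for b in corr.columns[i + 1:]:
            if a in to_drop or b in to_drop:
                continue
            if corr.loc[a, b] > corr_thresh:
                to_drop.add(a if corr[a].mean() > corr[b].mean() else b)
    df = df.drop(columns=to_drop)
    # 4. Impute: mode for categorical columns, median otherwise.
    for c in df.columns:
        fill = df[c].mode()[0] if df[c].dtype == object else df[c].median()
        df[c] = df[c].fillna(fill)
    # 5. Standardize continuous variables to mean 0, SD 1.
    for c in df.select_dtypes(include=np.number).columns:
        df[c] = (df[c] - df[c].mean()) / df[c].std()
    return df
```

The mean-absolute-correlation rule in step 3 mirrors the behavior the text describes: of two highly correlated features, the one more correlated with everything else is discarded.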
Twenty features overlapped between the UW and Tongji Hospital datasets, and these 20 features were used for the mortality prediction model. All clinical and laboratory data were abstracted from the medical record within the first day of hospital admission, and patients were included in the analysis for each outcome only if they did not already have the outcome on the first day of hospitalization. An individual prediction model was developed for each outcome.

Data partitioning, UW dataset
We randomly split the UW dataset into development and internal validation sets by stratified sampling. The training set included 475 patients, and the internal validation set included 237 patients. We first trained models on the training set and then selected the best model by its performance on the internal validation set. Top models for in-hospital mortality were then tested in the external validation set. For the three prediction models for ICU transfer, shock, and RRT, we performed repeated cross-validation in the UW dataset as follows: (1) patients were randomly split into 10 folds in a stratified fashion using the outcome variable; (2) the model was trained on nine of the ten folds and validated on the remaining fold, and this was repeated until each fold had been used as a validation fold exactly once; (3) steps 1 and 2 were repeated ten times.
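The split-and-repeat scheme described here can be sketched with scikit-learn's `RepeatedStratifiedKFold`. The paper's analyses were run in R; the toy data, model, and seeds below are illustrative stand-ins, not the authors' setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import RepeatedStratifiedKFold, train_test_split

# Toy stand-in for the UW data: 300 patients, 5 features, binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + rng.normal(size=300) > 0).astype(int)

# Stratified development / internal-validation split (roughly 2:1).
X_dev, X_val, y_dev, y_val = train_test_split(
    X, y, test_size=1 / 3, stratify=y, random_state=0)

# Ten-fold stratified CV, repeated ten times, as for the secondary outcomes.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
aucs = []
for train_idx, test_idx in cv.split(X_dev, y_dev):
    model = LogisticRegression(max_iter=1000).fit(
        X_dev[train_idx], y_dev[train_idx])
    aucs.append(roc_auc_score(
        y_dev[test_idx], model.predict_proba(X_dev[test_idx])[:, 1]))
mean_auc = float(np.mean(aucs))  # 100 fold-level AUCs averaged
```

Stratifying both the split and the folds keeps the outcome prevalence roughly constant across partitions, which matters when events such as RRT are rare.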

Machine learning models
Least absolute shrinkage and selection operator (LASSO) logistic regression is a logistic regression approach with L1 penalties [18]. The L1 penalty terms encourage sparsity, thus preventing overfitting and yielding a small model. A weighted LASSO logistic regression was used to handle the imbalanced data. The hyperparameter lambda was selected by stratified 10-fold cross-validation.
Elastic net logistic regression (LR) combines LASSO LR and ridge logistic regression, incorporating both L1 and L2 penalties [19]. It can generate sparse models that outperform LASSO logistic regression when highly correlated predictors are present. The hyperparameters alpha and lambda were selected by stratified 10-fold cross-validation.

eXtreme Gradient Boosting (XGBoost) is a gradient boosted machine (GBM) based on decision trees that separate patients with and without the outcome of interest using simple yes-no splits, which can be visualized in the form of decision trees [20]. GBM builds sequential trees, such that each tree attempts to improve model fit by more heavily weighting the difficult-to-predict patients. The following hyperparameter settings were applied: nrounds = 150, eta = 0.2, colsample_bytree = 0.9, gamma = 1, subsample = 0.9, and max_depth = 4.
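As a non-authoritative sketch of this model setup (the paper's models were fit in R), an elastic net LR with cross-validated hyperparameters can be expressed in scikit-learn, where the paper's alpha corresponds to `l1_ratio` and lambda to `1/C`; the grids below are placeholders, not the authors' search space. The reported XGBoost settings are shown translated to the names used by xgboost's scikit-learn API, without importing that library:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Elastic net LR: the saga solver supports the mixed L1/L2 penalty.
enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          class_weight="balanced", max_iter=10000)
search = GridSearchCV(
    enet,
    {"C": [0.1, 1.0, 10.0], "l1_ratio": [0.0, 0.5, 1.0]},
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
    scoring="roc_auc",
)
# Note: l1_ratio=1.0 recovers a weighted LASSO LR as a special case.

# Reported XGBoost settings, mapped to xgboost's scikit-learn API names
# (nrounds -> n_estimators, eta -> learning_rate).
xgb_params = dict(n_estimators=150, learning_rate=0.2, colsample_bytree=0.9,
                  gamma=1, subsample=0.9, max_depth=4)
```

Because the elastic net fits are penalized, features should be standardized first, as the feature-selection step above already does.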

Class imbalance handling
A weighted version of each of the three methods above was used to handle imbalanced data. For example, if there were 90 positives and 10 negatives, a weight of 10/90 was assigned to each positive sample and a weight of one to each negative sample.
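A minimal sketch of this weighting rule (our own helper, not the authors' code; the returned dict uses the format scikit-learn's `class_weight` parameter accepts):

```python
import numpy as np

def class_weights(y):
    """Down-weight the majority class by the minority/majority ratio.

    With 90 positives and 10 negatives this returns {1: 10/90, 0: 1.0},
    matching the worked example in the text.
    """
    y = np.asarray(y)
    n_pos, n_neg = int((y == 1).sum()), int((y == 0).sum())
    if n_pos >= n_neg:
        return {1: n_neg / n_pos, 0: 1.0}
    return {1: 1.0, 0: n_pos / n_neg}
```

The effect is that each class contributes roughly equal total weight to the loss, so the fitted model is not dominated by the majority class.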

Probability calibration
Isotonic regression was used to calibrate the probabilities output by the machine learning models [21]. The calibration model was fitted on the training samples only.
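An illustrative sketch of isotonic calibration, fit on training scores only as the paper specifies; the synthetic, deliberately miscalibrated scores below are ours, not the authors' data:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Synthetic miscalibrated scores: the true event probability is
# score**2, so the raw scores systematically overstate the risk.
rng = np.random.default_rng(0)
raw = rng.uniform(size=500)
y = (rng.uniform(size=500) < raw ** 2).astype(int)

# Fit the calibrator on training scores only, then apply it to any
# later scores; out_of_bounds="clip" handles unseen score values.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
iso.fit(raw, y)
calibrated = iso.predict(raw)
```

Isotonic regression learns a monotone, non-decreasing map from raw scores to observed event frequencies, so model rankings (and hence AUC) are preserved while predicted probabilities move toward the observed risk.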

Model comparison
We tested the three machine learning methods (LASSO LR, elastic net LR, and XGBoost) independently to predict each outcome. Model performance was compared using the area under the receiver operating characteristic curve (AUC) and its 95% CI [22,23]. Top-performing models for in-hospital mortality in the internal validation cohort were then carried forward to the external validation cohort. Two-sided p-values <0.05 were considered statistically significant. All models were developed using R.
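The paper's CI method follows references [22,23] and may differ from what is shown here; one common alternative that is easy to sketch is a bootstrap percentile CI for the AUC (illustrative helper of our own, not the authors' implementation):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y, scores, n_boot=2000, alpha=0.05, seed=0):
    """Point AUC plus a bootstrap percentile 95% CI (illustrative only)."""
    rng = np.random.default_rng(seed)
    y, scores = np.asarray(y), np.asarray(scores)
    n, aucs = len(y), []
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)  # resample patients with replacement
        if len(np.unique(y[idx])) < 2:  # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y[idx], scores[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y, scores), float(lo), float(hi)
```

Resampling whole patients keeps the pairing of label and score intact, which is what makes the bootstrap distribution of AUCs valid.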

Patient characteristics
A total of 1,057 patients were included in the analysis: 712 from UW and 345 from Tongji Hospital. Baseline characteristics for patients who died vs. survived are shown in Table 1; in the Tongji cohort, 46% of patients died [24]. We applied three machine learning methods (LASSO LR, elastic net LR, and XGBoost) to the training set and evaluated model performance in the internal validation set. The elastic net LR model had the highest AUC in the internal validation set (0.72; 95% CI, 0.64 to 0.81) for in-hospital mortality. Next, we tested the elastic net LR model in the external validation cohort and obtained an AUC of 0.85 (95% CI, 0.81 to 0.89) for in-hospital mortality (Figures 1A and 1B and Table 2A). The top three variables in the in-hospital mortality prediction model were age, minimum platelet count, and maximum white blood cell count (Figure 2A). Partial dependence plots for the most important continuous variables in elastic net LR are shown in Figure 3A. Older age was associated with a linear increase in mortality. In contrast, platelet count showed a relatively flat risk profile at counts above 500 x 10^9/L, with risk of death increasing linearly as platelet counts fell below this level. The predicted risk of in-hospital mortality was well calibrated against the observed risk in the test set (Supplement Figure 1). In Table 3A, we provide the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) across the three different cohorts for in-hospital mortality. We found that the model thresholds can be tailored to maximize either PPV or NPV; in the external validation cohort, the in-hospital mortality models had a maximum PPV and NPV of 0.84 or higher. Model coefficients are provided in Table S1 for future validation in diverse patient cohorts.
To better understand the association between clinical features and in-hospital mortality, we concentrated on patients > 50 years of age and re-trained the models excluding age. The elastic net LR model had the highest AUC in the internal validation set (0.73; 95% CI, 0.61 to 0.84) for in-hospital mortality. Next, we tested the elastic net LR model in the external validation cohort and obtained an AUC of 0.82 (95% CI, 0.76 to 0.87) for in-hospital mortality (Figures 4A and 4B and Table 2B). In Table 3B, we provide the sensitivity, specificity, positive predictive values (PPV), and negative predictive values (NPV) across the three different cohorts for in-hospital mortality in patients > 50 years of age. Partial dependence plots for the most important continuous variables in elastic net LR are shown in Figure 3B. Platelet count, blood urea nitrogen, hematocrit, and white blood cell count were the top four variables that predicted in-hospital mortality in patients > 50 years of age.
Machine learning models for secondary outcomes

We next developed and cross-validated prediction models for ICU transfer, shock, and receipt of RRT. For the outcome of ICU transfer, 419 patients from the UW dataset were included in the training set, of whom 45 (11%) were transferred to the ICU within 28 days of admission. Patients transferred to the ICU within the first day of hospitalization were excluded from this analysis.
The mean time to ICU transfer was 7.6 +/- 9.1 days (mean +/- SD). LASSO LR achieved the highest AUC (0.60; 95% CI, 0.52 to 0.68) for prediction of ICU transfer compared with the other two methods (elastic net LR, XGBoost) (Figure 5A and Table 4). The two predictors most strongly correlated with subsequent ICU transfer were age and minimum SpO2.
For the outcome of shock, 606 patients from the UW dataset were included in the training set, with 67 patients developing shock within 28 days of admission. Patients who had shock within the first day of hospitalization were excluded from this analysis. The mean time to develop shock was 7.0 +/- 6.5 days (mean +/- SD). Elastic net LR achieved the highest AUC of the three methods (0.76; 95% CI, 0.69 to 0.82) (Figure 5B and Table 4). The three predictors most highly correlated with subsequent development of shock were ICU admission, minimum mean arterial blood pressure, and minimum Glasgow Coma Scale score.
For the outcome of receipt of RRT, 671 patients from the UW dataset were included in the training set, with 24 patients receiving RRT within 28 days of admission. Patients who received RRT within the first day of hospitalization were excluded from this analysis. The mean time to receipt of RRT was 5.8 +/- 7.2 days (mean +/- SD). As shown in Figure 5C and Table 4, LASSO LR achieved a slightly higher mean AUC than the other two methods (0.88; 95% CI, 0.79 to 0.98). The predictor that most strongly influenced need for RRT was minimum serum creatinine. Variable importance plots for all secondary outcomes can be found in Figure S2, model calibration plots in Figure S1, and variable coefficients in Tables S2-S4.

Discussion
In this derivation, internal validation, and external validation study of adult patients hospitalized with COVID-19, we developed and validated an in-hospital mortality prediction tool using variables that are routinely collected within 24 hours of hospital admission. The mortality prediction model predicted mortality with high accuracy and an average 2-week lead time. We also found that elastic net logistic regression had the highest performance and best calibration of the machine learning models tested. In addition, we derived models for ICU transfer, shock, and RRT. Our results provide a simple bedside tool and highlight clinical variables that can inform triage decisions and clinical care in hospitalized patients with COVID-19.
The mortality prediction tool was derived using 20 variables and exported to an external dataset, where it showed higher discrimination, demonstrating the generalizability of the model. Variables that informed model development included age, white blood cell count, and platelet count, each of which has previously been shown to be prognostic in COVID-19 hospitalization as well as in sepsis [25][26][27]. A machine learning study of COVID-19 mortality prediction in Germany also found that age and markers of thrombotic activity were predictive of ICU survival [28]. An advantage of our model over other studies is that we included not only patients admitted to the ICU but all patients presenting to the hospital; these broad inclusion criteria improve the generalizability of our findings. We found that elastic net regression was the most accurate algorithm for predicting in-hospital mortality in our datasets. A key value of elastic net regression is its interpretability: we provide the variables and coefficients for each model to ease future testing in diverse patient cohorts.
The present machine learning models show that a reliable prediction can be made for hospital mortality and organ failure in hospitalized patients with COVID-19. The AUC of our model in the external validation set was comparable to or better than that of alternative COVID-19 prediction models [12,[29][30][31]. One benefit of our model is that it was developed and internally validated in a US population and externally validated in a population from China, in contrast to other COVID-19 prediction models that are specific to patients admitted to one healthcare system or hospitalized in one country [12,13,28,29,32,33]. The ability to validate our model in a healthcare system outside the US shows the generalizability of the model and the reproducibility of our findings. Our findings also demonstrate the inherent similarities in the patient response to infection and in the clinical variables associated with poor outcomes.
This study has several strengths, including its large sample size and its discovery and validation cohorts. In addition, we developed models not only for mortality but also for organ-specific failure. Another strength is that the models predicted outcomes up to 2 weeks before the outcome occurred; this lead time is essential to help inform clinical care and provides a window during which therapeutics can be tested to change eventual outcomes. Finally, all prediction models were developed using routinely collected data that are available in most electronic medical records, allowing easy replication of our models in diverse patient cohorts. Because age is one of the strongest predictors of mortality in COVID-19, we specifically developed in-hospital mortality prediction models in the population of patients > 50 years of age. We found that clinical biomarkers, such as platelet count, blood urea nitrogen, hematocrit, and white blood cell count, in combination continued to accurately predict in-hospital mortality.
There are also several limitations to this work. First, although the model was developed and validated in an external dataset, it is possible that our findings may not generalize to other settings. Second, we restricted the models to clinical and laboratory variables collected within 24 hours of hospital admission so that the prediction models could be run on electronic health record data; these variables are regularly collected and rarely missing from the medical record.

Conclusions
We developed prediction models with high discrimination for mortality, shock, and RRT. The in-hospital mortality model performed well in the internal validation set and showed improved accuracy in the external validation set. Key variables that informed the in-hospital mortality prediction model included age, white blood cell count, and platelet count. On average, the mortality prediction model identified future risk of mortality 2 weeks before the clinical outcome. All prediction models used clinical variables collected within the first day of hospital admission. These machine learning-derived prediction models could be used to improve triage decisions, resource allocation, and clinical trial enrichment in patients hospitalized with COVID-19.

Declarations
Ethics Approval: The University of Washington Institutional Review Board approved this study.

Consent for Publication:
No individual personal data is included in the study.
Availability of Data and Materials: The datasets generated and/or analysed during the current study are not publicly available due to currently ongoing research studies, but the data are available from the corresponding author on reasonable request.
Conflicts of interest: There are no conflicts of interest among the authors of this manuscript.

Tables

Table 1A. Features in the UW dataset stratified by survivors and non-survivors.

Figure legends

Figure 2. Variable importance plots for mortality in all patients and in patients over 50 years of age. A. Mean SHAP values are shown on the x-axis; age, minimum platelet count, maximum white blood cell count, minimum blood urea nitrogen, maximum serum sodium, minimum hematocrit, maximum hematocrit, minimum serum creatinine, sex, and minimum glucose are the top 10 variables. B. Mean SHAP values are shown on the x-axis for the mortality prediction model in patients over 50 years of age, which includes the five selected variables: maximum platelet count, minimum blood urea nitrogen, maximum hematocrit, minimum white blood cell count, and maximum glucose.

Figure 3. Partial dependence plots for the mortality prediction model illustrating the relationship between mortality and the six top predictor variables. A. Risk of mortality increases with increasing age, platelets < 500 x 10^9/L, and increasing white blood cell count. Risk of mortality increases with increasing blood urea nitrogen, with an inflection point at 50 mg/dL. Risk of mortality increases with decreasing hematocrit levels and increasing sodium levels. B. Risk of mortality increases with increasing age, platelets < 500 x 10^9/L, and increasing white blood cell count. Risk of mortality increases with increasing blood urea nitrogen until 75 mg/dL and then levels off. Risk of mortality increases with decreasing hematocrit levels.

Figure 5. Receiver operating characteristic curves for ICU transfer, shock, and RRT. A. ICU transfer in the cross-validation cohort. B. Shock in the cross-validation cohort. C. RRT in the cross-validation cohort.