We aimed to forecast the number of admissions to psychiatric hospitals before and during the COVID-19 pandemic and compared the performance of machine learning models and time series models. Such forecasts could eventually support timely resource allocation for the optimal treatment of patients. Before the COVID-19 pandemic, model performance did not vary much between the different modelling approaches. The established forecasts were substantially better than seasonal naïve forecasts. The most important features were calendrical variables, which did not require short-term adjustments in the weekly and monthly models. However, weekly time series models adjusted more quickly to the COVID-19-related shock effects than monthly and yearly models and the machine learning models.
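To make the benchmark explicit: a seasonal naïve forecast simply repeats the value observed one full season earlier, and more elaborate models are judged by how far they beat it. The following minimal sketch (our own illustration, not code from the study; it uses a toy season length of 4 instead of 52 weeks for brevity) shows the benchmark and a mean-absolute-error comparison.

```python
# Illustrative sketch (not from the study): a seasonal naive forecast
# repeats the value observed one full season earlier.

def seasonal_naive(history, season_length, horizon):
    """Forecast `horizon` steps ahead by repeating the last observed season."""
    return [history[-season_length + (h % season_length)] for h in range(horizon)]

def mae(actual, forecast):
    """Mean absolute error between two equally long sequences."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Toy weekly admission counts with a seasonal period of 4
history = [100, 120, 110, 130, 102, 118, 112, 128]
print(seasonal_naive(history, season_length=4, horizon=4))
# -> [102, 118, 112, 128]
```

A candidate model's error can then be divided by the seasonal naïve error on the same horizon; values below 1 indicate the model adds value over the benchmark.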
Strengths and weaknesses
A strength of our study was the four years of data from nine hospitals, representing about half of all inpatient psychiatric admissions in Hesse, Germany. This allowed us both to give a representative picture of inpatient psychiatric care in Germany and to show how the forecasting approaches performed at different study sites. Furthermore, the commencement of the Corona hospital regulation in March 2020 made it possible to analyse the effect of sudden changes in hospital admissions on the performance of the different modelling approaches.
A limitation of our study was the lack of data to differentiate between the causes of reduced hospital admissions after the Corona regulation came into effect in March 2020. The reduced admissions could have resulted from different supply-side and demand-side effects, such as the avoidance of elective admissions, reduced capacities due to isolation and quarantine requirements, and the unwillingness of patients to enter hospitals during the Corona crisis. Another limitation of our study was its restriction to one large German provider of inpatient mental health care, which means that caution is required when transferring our findings to different healthcare systems or clinical settings.
Comparison to previous research
Previous studies in the field of forecasting hospital admissions often focused on emergency departments, and no previous study has analysed the forecasting of psychiatric hospital admissions comparable to our study in scale and scope.
Vollmer et al 2021 predicted admission numbers in the emergency departments of two hospitals in London with data from 2011 to 2018. They compared machine learning models to more traditional time series models to forecast admissions one, three and seven days in advance. The forecasts at the different time horizons, i.e., one, three and seven days in advance, performed very similarly. This is comparable to our finding of relatively similar results between weekly, monthly and yearly predictions, although at a different scale. In contrast to our study, lagged admissions from previous weeks were among the strongest predictors in their data, probably related to the stronger increases and decreases in admission levels during their study period at the different hospitals in comparison to our study. As in our study, Vollmer et al also found that calendrical variables were among the features with the strongest influence on forecasting performance. Weather and climate data and Google search data had a relatively low influence on forecasting performance.
Similar results were found by Boutsioli et al, who used a simple OLS regression to forecast hospital admissions to the emergency departments of ten public hospitals in Greece. They used only the calendrical variables weekend, summer holiday, public holiday and the participation in emergency care in their model and explained a relatively high proportion of the variance in hospital admissions of up to 88%.
Jones et al forecasted the admission numbers at three emergency departments in the USA one, seven, fourteen, twenty-one and thirty days in advance. They used autoregressive integrated moving average (ARIMA) models, time series regression, exponential smoothing, and artificial neural network models to predict admissions per day. They also found that admissions were characterised by yearly and weekly seasonality (see for comparison our Fig. 1). As in our study, they found a relatively low improvement in forecasting performance at the shorter forecasting horizons in comparison to the longer horizons. Similar to our study and to the study of Vollmer et al, weather and climate had a relatively low influence on forecasting performance.
McCoy et al forecasted hospital discharge numbers at two academic medical centres in the USA. They compared the performance of a PROPHET model with a seasonal ARIMA model and a one-step naïve seasonal forecast, and compared monthly models to yearly models. The best performance was achieved by a PROPHET model. Comparable to our study, they found little to no improvement in forecasting accuracy when refitting their models monthly rather than yearly.
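The monthly-versus-yearly comparison amounts to asking whether refitting a model at every forecast origin beats fitting it once on an initial window. The following sketch (our own illustration, using a deliberately trivial "model" that forecasts the training mean; function names and data are hypothetical) shows the rolling-origin evaluation design behind such a comparison.

```python
# Illustrative sketch (not from any study discussed): rolling-origin
# evaluation comparing a model refit at every step with one fit once.

def fit_mean(train):
    """Toy 'model': forecast the mean of the training window."""
    mu = sum(train) / len(train)
    return lambda horizon: [mu] * horizon

def rolling_mae(series, start, refit=True):
    """One-step-ahead rolling forecast MAE; optionally refit each step."""
    model = fit_mean(series[:start])
    errors = []
    for t in range(start, len(series)):
        if refit:
            # "monthly" scheme: refit on all data up to the forecast origin
            model = fit_mean(series[:t])
        errors.append(abs(series[t] - model(1)[0]))
    return sum(errors) / len(errors)

series = [10, 12, 11, 13, 12, 14, 13, 15]
print(round(rolling_mae(series, start=4, refit=True), 2))
print(round(rolling_mae(series, start=4, refit=False), 2))
```

With a trending toy series the refit scheme wins clearly; when a series is dominated by stable seasonality, as in the admission data discussed here, the gap between the two schemes can shrink to little or nothing.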