Model uncertainty and decision making: Predicting the Impact of COVID-19 Using the CovidSim Epidemiological Code
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has spread rapidly worldwide since December 2019, and early modelling work on this pandemic has assisted in identifying effective government interventions. The UK government relied in part on the CovidSim model, developed by the MRC Centre for Global Infectious Disease Analysis at Imperial College London, to model various non-pharmaceutical intervention strategies and to guide its policy in responding to the rapid spread of the COVID-19 pandemic during March and April 2020. CovidSim is subject to different sources of uncertainty, namely parametric uncertainty in the inputs, model structure uncertainty (i.e., missing epidemiological processes), and scenario uncertainty, which relates to uncertainty in the set of conditions under which the model is applied. We have undertaken an extensive parametric sensitivity analysis and uncertainty quantification of the current CovidSim code. Of the more than 900 parameters provided as input to CovidSim, we identified a key subset of 19 to which the code output is most sensitive. We find that the uncertainty in the code output is substantial: imperfect knowledge of these inputs is amplified in the outputs by up to approximately 300%. Most of this uncertainty can be traced back to the sensitivity of just three parameters. Compounding this, the model can display significant bias with respect to observed data, such that the output variance does not capture the validation data with high probability. We conclude that quantifying the parametric input uncertainty is not sufficient, and that the effects of model structure and scenario uncertainty cannot be ignored when validating the model in a probabilistic sense.
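The abstract does not describe the estimator behind these sensitivity results, so the following is a minimal, hypothetical sketch of the general variance-based technique: first-order Sobol indices apportion output variance to individual inputs, which is how a handful of dominant parameters can be identified among hundreds. The toy model, its inputs (r0, latent_period, compliance), and their ranges are illustrative inventions, not actual CovidSim parameters or the authors' workflow.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for an expensive epidemiological code: a nonlinear
# map from three illustrative inputs to a scalar output (e.g. total deaths).
def toy_model(x):
    r0, latent_period, compliance = x[..., 0], x[..., 1], x[..., 2]
    return np.exp(r0 * (1.0 - compliance)) * latent_period

d, n = 3, 100_000

# Two independent input samples drawn from assumed uniform uncertainty ranges.
lo = np.array([2.0, 3.0, 0.5])
hi = np.array([3.5, 6.0, 0.9])
A = lo + (hi - lo) * rng.random((n, d))
B = lo + (hi - lo) * rng.random((n, d))

f_A, f_B = toy_model(A), toy_model(B)
total_var = np.var(np.concatenate([f_A, f_B]))

# First-order Sobol index via the Saltelli (2010) estimator:
#   S_i = E[ f(B) * (f(A with column i taken from B) - f(A)) ] / Var(f)
for i in range(d):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]
    S_i = np.mean(f_B * (toy_model(AB_i) - f_A)) / total_var
    print(f"input {i}: first-order Sobol index ~ {S_i:.2f}")

# Relative output spread: a coefficient of variation near 1 would mean the
# input uncertainty is amplified to roughly 100% of the mean output.
print(f"output coefficient of variation ~ {np.std(f_A) / np.mean(f_A):.0%}")
```

In a study like this one, toy_model would be replaced by runs of the full simulation code, and a large Sobol index for a parameter signals that reducing uncertainty in that single input would remove a correspondingly large share of the output variance.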
Supplementary Material is available as a separate download with this preprint.
Great research: Chaos Theory at its best. And even if your model were correct 90% of the time, you would still have to get people to use it, which seems to be as complex a problem as the model itself :) Even with the most powerful computers available for weather forecasting, the horizon over which we can predict the weather with roughly 80% accuracy has improved by only a couple of days since the 1960s. From NOAA: a seven-day forecast can accurately predict the weather about 80 percent of the time, and a five-day forecast approximately 90 percent of the time. However, a 10-day or longer forecast is only right about half the time. The National Weather Service has been verifying forecast accuracy since the 1960s, and today's five-day forecasts are as reliable as three-day forecasts were in the 1980s. Short-range forecasts have also improved by a day; that is, today's Day 2 forecast is as accurate as the Day 1 forecast used to be.