Background To demonstrate how the Observational Health Data Sciences and Informatics (OHDSI) collaborative network and its data standardization can be used to scale up external validation of patient-level prediction models by enabling validation across a large number of heterogeneous observational healthcare datasets.
Methods Five previously published prognostic models (ATRIA, CHADS2, CHA2DS2-VASc, Q-Stroke and Framingham) that predict future risk of stroke in patients with atrial fibrillation were replicated using the OHDSI frameworks. A network study was run in which the five models were externally validated across nine observational healthcare datasets spanning three countries and five independent sites.
Results The five existing models were integrated into the OHDSI framework for patient-level prediction and obtained mean c-statistics ranging from 0.57 to 0.63 across the six databases with sufficient data, when predicting stroke within one year of an initial atrial fibrillation diagnosis in female patients. This performance was comparable with existing validation studies. Once the models were replicated, the validation network study was run across all nine datasets within 60 days. An R package for the study was published at https://github.com/OHDSI/StudyProtocolSandbox/tree/master/ExistingStrokeRiskExternalValidation .
Conclusion This study demonstrates the ability to scale up external validation of patient-level prediction models through a collaboration of researchers and a data standardization that enables models to be readily shared across data sites. External validation is necessary to understand the transportability and reproducibility of a prediction model, but without collaborative approaches it can take three or more years for a model to be validated by even one independent researcher. In this paper we show it is possible to both scale up and speed up external validation by demonstrating how validation can be performed across multiple databases in less than two months. We recommend that researchers developing new prediction models use the OHDSI network to externally validate their models.
Posted 06 May, 2020