Objective To demonstrate how the Observational Health Data Sciences and Informatics (OHDSI) collaborative network and data standardization can be utilized to externally validate patient-level prediction models at scale.
Materials & Methods Five previously published prognostic models (ATRIA, CHADS2, CHA2DS2-VASc, Q-Stroke and Framingham) that predict future risk of stroke in patients with atrial fibrillation were replicated using the OHDSI framework, and a network study was run that enabled the five models to be externally validated across nine datasets spanning three countries and five independent sites.
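As an illustration of the simplest of these models, the widely used CHADS2 score assigns one point each for Congestive heart failure, Hypertension, Age ≥ 75, and Diabetes, and two points for a prior Stroke or TIA. The sketch below (in Python for brevity; the study itself was distributed as an R package) is not taken from the study's code and is only a minimal, hypothetical rendering of that scoring rule:

```python
def chads2_score(chf, hypertension, age, diabetes, prior_stroke_tia):
    """Compute the CHADS2 stroke-risk score for an atrial fibrillation patient.

    One point each for Congestive heart failure, Hypertension, Age >= 75,
    and Diabetes; two points for a prior Stroke or TIA.
    """
    score = int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
    score += 2 * int(prior_stroke_tia)
    return score

# Example: an 80-year-old with hypertension and a prior TIA scores 4
print(chads2_score(chf=False, hypertension=True, age=80,
                   diabetes=False, prior_stroke_tia=True))  # 4
```

In the network study, each such model is expressed against the OMOP Common Data Model, which is what lets the same scoring logic run unchanged across all nine databases.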
Results All five existing models were successfully integrated into the OHDSI framework for patient-level prediction, and their performance in predicting stroke within 1 year of an initial atrial fibrillation diagnosis in females was comparable to previously reported results. The validation network study took 60 days once the models were replicated, and an R package for the study was shared with collaborators at https://github.com/OHDSI/StudyProtocolSandbox/tree/master/ExistingStrokeRiskExternalValidation.
Discussion This study demonstrates that external validation of patient-level prediction models can be scaled up through a collaboration of researchers and data standardization that enables models to be readily shared across data sites. External validation is necessary for understanding the transportability and reproducibility of prediction models, yet without collaborative approaches it can take three or more years for a model to be validated by a single independent researcher.
Conclusion In this paper we show that it is possible to both scale up and speed up external validation by demonstrating how validation can be done across multiple databases in less than 2 months.
Posted 19 Jul, 2019