Impact of sample size on the stability of risk scores from clinical prediction models: a case study in cardiovascular disease
Background: The stability of risk estimates from a prediction model may depend heavily on the sample size of the dataset available for model derivation. In this paper, we evaluate the stability of cardiovascular disease risk scores for individual patients when different sample sizes are used for model derivation; these include sample sizes similar to those used to develop models recommended in national guidelines, and sample sizes based on recently published sample size formulae for prediction models.
Methods: We mimicked the process of sampling N patients from a population to develop a risk prediction model by repeatedly sampling patients from the Clinical Practice Research Datalink. A cardiovascular disease risk prediction model was developed on each sample and used to generate risk scores for an independent cohort of patients. This process was repeated 1000 times, giving a distribution of risks for each patient. We considered N = 100 000, 50 000 and 10 000, as well as Nmin (derived from a sample size formula) and Nepv10 (the smallest sample size meeting the 10-events-per-predictor rule). The 5th-95th percentile range of risks across these models was used to evaluate instability. To summarise results, patients were grouped by the risk estimated from a model developed on the entire population (the population-derived risk).
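To make the resampling scheme concrete, the sketch below repeats the derive-and-score loop in Python. It is a minimal illustration, not the paper's implementation: `population` and `validation` are hypothetical pandas DataFrames with predictor columns and a binary outcome `cvd_event`, and a logistic regression stands in for the survival model used for 10-year cardiovascular risk.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative predictors only; the paper's model includes many more
# cardiovascular risk factors.
X_COLS = ["age", "sbp", "chol_ratio", "smoker"]

def risk_instability(population, validation, n, n_models=1000, seed=0):
    """Fit n_models models, each on a fresh sample of n patients, score the
    same independent cohort with every model, and return each patient's
    5th-95th percentile range of predicted risk."""
    rng = np.random.default_rng(seed)
    risks = np.empty((n_models, len(validation)))
    for m in range(n_models):
        # Mimic deriving a model on a sample of N patients from the population
        sample = population.sample(n=n, random_state=int(rng.integers(2**32)))
        model = LogisticRegression(max_iter=1000).fit(
            sample[X_COLS], sample["cvd_event"]
        )
        # Score the same independent cohort with every derived model
        risks[m] = model.predict_proba(validation[X_COLS])[:, 1]
    # Instability per patient: spread of the n_models risk scores
    p5, p95 = np.percentile(risks, [5, 95], axis=0)
    return p95 - p5
```

Under the same assumptions, Nepv10 could be approximated as `10 * len(X_COLS) / population["cvd_event"].mean()`, i.e. the sample size at which the expected number of events reaches ten per predictor parameter.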
Results: For a sample size of 100 000, the median 5th-95th percentile range of risks across the 1000 models was 0.77%, 1.60%, 2.42% and 3.22% for patients with population-derived risks of 4-5%, 9-10%, 14-15% and 19-20%, respectively. For N = 10 000 the corresponding ranges were 2.49%, 5.23%, 7.92% and 10.59%, and for the formula-derived sample size they were 6.79%, 14.41%, 21.89% and 29.21%. Restricting the analysis to models with high discrimination, good calibration or small mean absolute prediction error reduced the percentile ranges, but high levels of instability remained.
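Continuing the sketch above, the banded summary reported here (the median 5th-95th percentile range within 1%-wide bands of population-derived risk) could be computed along these lines, assuming `p_range` is the output of `risk_instability` and `pop_risk` is a hypothetical array of population-derived risks for the same patients:

```python
# Median instability within 1%-wide bands of population-derived risk;
# rounding the bin edges avoids floating-point drift from np.arange.
bands = pd.cut(pop_risk, bins=np.round(np.arange(0.0, 0.21, 0.01), 2))
summary = pd.Series(p_range).groupby(bands, observed=True).median()
print(summary)  # e.g. the (0.04, 0.05] row corresponds to the 4-5% group
```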
Conclusions: Widely used cardiovascular disease risk prediction models suffer from high levels of instability induced by sampling variation. Many models will also suffer from overfitting, a closely related problem, but even at acceptable levels of overfitting the risk estimates for individual patients can remain highly unstable. Stability of risk estimates should be a criterion when determining the minimum sample size for model development.
Figure 1
Figure 2
Figure 3
Figure 4
Figure 5
Figure 6
Posted 14 Aug, 2020