In this retrospective study, we aimed to assess whether a fully automated AI tool can accurately quantify CAC in patients undergoing non-contrast, free-breathing, ungated CT as part of an oncologic 18F-FDG PET/CT examination.
Our data indicate that the AI tool detects and quantifies CAC with relatively high accuracy without requiring any user input. However, AI-CACS from ungated CT generally underestimates the CAC burden, which should be kept in mind when recommending further diagnostic workup for CAD. Nonetheless, this study provides further evidence that CAC scores can be extracted effortlessly from various types of CT scans, thus potentially expanding the diagnostic value and impact of a given examination.
Following a recent consensus statement from the British societies of cardiovascular imaging/cardiac computed tomography and thoracic imaging, physicians are urged to report incidental coronary calcifications on all CT scans covering the chest, as CAC is an important marker of CAD in both symptomatic and asymptomatic patients. Specifically, CAC is associated with a poorer prognosis in various patient groups, including cancer patients. Notably, a sub-analysis of the National Lung Screening Trial showed that CAC scores of > 100 were associated with a four- to sevenfold increase in mortality risk compared with patients without CAC. For CAC grading, the authors recommend using a semi-quantitative ordinal scoring system instead of the conventional quantitative Agatston scoring system. While the authors acknowledge that the Agatston scoring system represents the gold-standard assessment for CAC, they point out that the additional time required and the need for dedicated software may prevent physicians from implementing and performing CACS on non-dedicated CT scans, as in the case of PET/CT imaging.
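For readers less familiar with the conventional Agatston method referenced above, the principle can be sketched in a few lines: on axial slices, each calcified lesion contributes its area multiplied by a density weight derived from its peak attenuation. The sketch below is purely illustrative (the pixel size, minimum lesion area, and input format are assumptions for demonstration, not the interface of the AI tool evaluated in this study):

```python
def agatston_weight(peak_hu: float) -> int:
    """Density weighting factor from a lesion's peak attenuation (HU)."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below the 130 HU calcium threshold


def agatston_score(lesions, pixel_area_mm2=0.25, min_area_mm2=1.0):
    """Sum per-slice lesion scores: lesion area (mm^2) x density weight.

    `lesions` is a list of (pixel_count, peak_hu) tuples, one entry per
    calcified lesion found on the axial slices (illustrative input format).
    """
    score = 0.0
    for pixel_count, peak_hu in lesions:
        area = pixel_count * pixel_area_mm2
        if area < min_area_mm2:  # ignore sub-threshold specks (noise)
            continue
        score += area * agatston_weight(peak_hu)
    return score


# Example: two lesions, 20 px peaking at 250 HU and 8 px peaking at 450 HU
print(agatston_score([(20, 250), (8, 450)]))  # 20*0.25*2 + 8*0.25*4 = 18.0
```

The density-weighted sum explains why motion blur on ungated scans, which smears peak attenuation values downward, tends to lower the resulting score.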
In the current study, we present a viable approach that enables physicians to extract quantitative Agatston scores from ungated CT scans acquired for attenuation correction of oncologic PET scans. The AI tool runs fully automatically without any user input and generates a detailed CACS report that can be sent directly to the user or to the institution's PACS. Notably, the tool has previously been validated by Vonder et al., who tested its performance against manual CACS measurements in a cohort of 997 patients who had undergone a dedicated cardiac CT protocol, including calcium scans, as part of a cardiovascular screening program. The authors found an interscore agreement of 0.958 and an interclass agreement of 0.96 for risk categories, thus confirming the AI tool's ability to perform CACS accurately on dedicated cardiac calcium CT scans.
In contrast, we found an interscore agreement of 0.88 and an interclass agreement of 0.800 for risk categories. In this regard, it should be noted that dedicated calcium scans are performed in breath-hold and with ECG-gating. This is not the case for the CT acquired during PET/CT examinations, which may therefore significantly impact the accuracy of CAC quantification. Specifically, it has been shown that the calcium load can be underestimated on ungated CT scans [20–23]. Furthermore, the acquisition and reconstruction parameters of the CT scan from PET/CT imaging may differ from those recommended for a dedicated cardiac calcium scan. For example, the latter should be performed at 120 kV and reconstructed with weighted filtered back projection [5, 7].
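Agreement on ordinal risk categories, as reported above, is typically quantified with a weighted kappa statistic that penalizes large category shifts more than small ones. A minimal sketch with linear disagreement weights and hypothetical ratings (illustrative only; not the statistical software or data used in this study):

```python
from collections import Counter


def linear_weighted_kappa(ratings_a, ratings_b, num_categories):
    """Cohen's kappa with linear disagreement weights for ordinal categories.

    Ratings are integer category indices 0..num_categories-1 from two raters
    (e.g., manual CACS vs. AI-CACS risk categories).
    """
    n = len(ratings_a)
    assert n == len(ratings_b) and n > 0
    k = num_categories
    # Observed disagreement, normalized by the maximum possible shift (k - 1)
    obs = sum(abs(a - b) for a, b in zip(ratings_a, ratings_b)) / (n * (k - 1))
    # Expected disagreement under independence, from the marginal distributions
    pa, pb = Counter(ratings_a), Counter(ratings_b)
    exp = sum(pa[i] * pb[j] * abs(i - j)
              for i in range(k) for j in range(k)) / (n * n * (k - 1))
    return 1 - obs / exp


# Hypothetical risk categories for eight patients (reference vs. AI)
ref = [0, 1, 2, 3, 4, 4, 2, 1]
ai = [0, 1, 2, 3, 4, 3, 2, 1]
print(round(linear_weighted_kappa(ref, ai, 5), 3))  # 0.915
```

With perfect agreement the statistic is 1.0; one-category disagreements reduce it less than shifts across several categories, which is why the metric suits ordinal CAC risk classes.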
Despite these differences and challenges, our AI tool achieved good performance in detecting and quantifying CAC. Specifically, while reclassification of risk categories occurred frequently (44% of cases), the shift was nearly always (i.e., in 89%) limited to no more than two categories. Furthermore, in nearly all cases the risk category was underestimated by the AI tool, which may partially be owed to the inherent limitations of ungated CT scans for CAC detection. This tendency should nonetheless be kept in mind, and physicians should therefore be rather generous in recommending further workup of potential CAD when relying on AI-CACS from CT scans acquired during PET/CT examinations.
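The reclassification analysis above presupposes a mapping from Agatston scores to ordinal risk categories. A minimal sketch, assuming the commonly used category bounds of 0, 1–10, 11–100, 101–400, and > 400 (the exact bounds applied in the study may differ):

```python
def cac_risk_category(agatston: float) -> int:
    """Map an Agatston score to an ordinal risk category (0..4)."""
    bounds = (0, 10, 100, 400)  # upper bounds of categories 0..3
    for cat, upper in enumerate(bounds):
        if agatston <= upper:
            return cat
    return 4  # > 400: highest-risk category


def category_shift(reference_score: float, ai_score: float) -> int:
    """Signed category shift; negative values mean the AI underestimates."""
    return cac_risk_category(ai_score) - cac_risk_category(reference_score)


# Example of the underestimation pattern described above:
print(category_shift(450, 350))  # -1: >400 class read as the 101-400 class
```

Note that the same absolute score error can produce different category shifts depending on where the true score falls relative to a category boundary, which is one reason reclassification rates can look worse than the raw interscore agreement.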
Lastly, we would like to emphasize that we did not optimize the AI tool prior to study onset by performing any specific or additional training on our dataset. Thus, the data used in the current study represent a true validation set. In this regard, it should be noted that the performance of the AI tool may be further improved in the future by training the algorithm with additional study- or institution-specific data.
Our study has the following limitations: First, this was a retrospective single-center study with a limited number of subjects. Importantly, it should be acknowledged that the results inherently depend on the examined patient cohort. Here, we used a unique and heterogeneous cohort of oncologic patients with scans dating back to 2007. Conceivably, the AI tool may provide even better results with more recent scans and with a more homogeneous patient cohort examined on more modern scanners. Second, we did not perform manual CACS on the CT scans from PET/CT imaging, which would have allowed us to better quantify the measurement inaccuracy of the AI tool itself; this should be investigated in future studies. Third, as a reference standard, we used manual CACS scores from a dedicated cardiac SPECT-MPI examination performed within 6 months of the oncologic PET/CT. Although CAC scores are not expected to change within this time frame, it should be acknowledged that minor changes may have occurred, thus potentially introducing a bias.
In conclusion, our study indicates that an AI tool allows for the fully automatic extraction of CAC scores from ungated CT scans acquired in patients undergoing oncologic 18F-FDG PET/CT. This enables physicians to extract CAC scores effortlessly from oncologic 18F-FDG PET/CT examinations, thus allowing opportunistic screening for CAD and further expanding the diagnostic spectrum and value of the imaging modality.