A Deep Learning Method for Estimating Patient Body Weight Using Computed Tomography Scout Images

DOI: https://doi.org/10.21203/rs.3.rs-487824/v1

Abstract

Body weight is an indispensable parameter for determining the dose of contrast media, appropriate drug dosing, and the management of radiation dose. However, accurate patient body weight cannot always be determined at the time of a computed tomography (CT) scan, especially in emergency care. In this study, based on 1831 chest and 519 abdominal CT scout images with the corresponding body weights, we developed deep learning models capable of automatically predicting body weight from CT scout images. In the assessment of model performance, there were strong correlations between the actual and predicted body weight in both the chest (r = 0.955, p < 0.001) and abdominal (r = 0.901, p < 0.001) datasets. The mean absolute errors were 2.75 kg and 4.77 kg for the chest and abdominal datasets, respectively. The proposed method was able to predict body weight from chest and abdominal CT scout images with high accuracy. Body weight derived from our method may prove useful for determining the dose of contrast media and managing radiation dose in adult patients with unknown body weight.

Introduction

Body weight is an important parameter in the field of radiology, as the dose of contrast media[1] and the management of radiation dose[2] are strongly related to body weight. However, accurate body weight cannot always be determined at the time of a computed tomography (CT) scan, especially in emergency care. Patients in emergency care are often unresponsive and unable to state their body weight, and body weight measurement using a calibrated floor scale is limited in the emergency setting. Visual estimation of body weight by medical staff is unreliable[3-5], even though it is the most accessible method. A bedside method using anthropometric measurements to estimate body weight has been reported[6]; however, it yields only moderate accuracy and may be time-consuming.

More accurate estimation of body weight using CT scan data has been demonstrated in previous studies. Geraghty et al. proposed techniques for estimating an individual's height, weight, body mass index, and body surface area from a single abdominal CT image[7]. Gascho et al. developed a linear regression equation for body weight estimation in adults based on effective mAs from CT dose modulation over a whole-body scan[8]. These methods may be useful for the management of radiation dose or accurate drug dosing; however, their use in determining the dose of contrast media is limited because they require diagnostic CT scan data. Contrast-enhanced CT is frequently performed without prior acquisition of non-contrast CT images. Therefore, there is growing interest in an approach that does not require diagnostic CT images for body weight estimation.

In a typical CT examination, the patient is initially scanned with a “scout” or “localizer” acquisition, which is a 2D planar image acquired before the diagnostic CT scan. If body weight can be predicted from CT scout images, radiologists or technologists will be able to determine the appropriate dose of contrast media before the diagnostic CT scan. Recently, deep learning has shown many encouraging results in the field of medical image processing and analysis. In particular, convolutional neural networks (CNNs) have proven well suited to object detection[9, 10], semantic segmentation[11, 12], image classification[13, 14], and prediction[15, 16] tasks in radiological research. We hypothesized that body weight could be estimated from CT scout images using deep learning with a CNN. This study aimed to develop a CNN-based method for predicting body weight from chest and abdominal CT scout images and to evaluate the correlation between the actual and predicted body weight.

Materials And Methods

Subjects and CT acquisition

We retrospectively recruited subjects who underwent chest and/or abdominal CT for a medical checkup and who had accurate height and weight data measured using a calibrated scale on the same day. For the chest dataset, 1831 consecutive subjects who underwent chest CT for lung cancer screening between June and October 2020 were enrolled; this group included 1330 men and 501 women with ages ranging from 24 to 92 years (mean, 59.8 ± 11.4 years). For the abdominal dataset, 519 consecutive subjects who underwent abdominal CT for visceral fat area measurement between June and December 2020 were enrolled; this group consisted of 291 men and 228 women with ages ranging from 25 to 92 years (mean, 58.1 ± 12.5 years).

CT images were acquired with an 80-detector row CT scanner (Aquilion Prime; Canon Medical Systems, Otawara, Japan). The frontal scout view of the chest and abdomen was obtained using 120 kVp and 10 mA.

This retrospective study was approved by the institutional review board of Kurashiki Central Hospital, which granted permission to use pre-acquired anonymized data, and requirement for individual informed consent was waived. All methods were carried out in line with the Declaration of Helsinki.

Datasets and preprocessing

In this study, supervised training of a CNN was performed using chest and abdominal CT scout images as input data and the corresponding body weight as reference data. The performance of our models was evaluated on test datasets that were excluded from the training data.

Therefore, the constructed datasets were randomly divided into training and validation sets (80%) and test sets (20%). Of the 1831 chest scout images, 1464 were used for training and validation, and the remaining 367 were used for testing. Of the 519 abdominal scout images, 415 were used for training and validation, and the remaining 104 were used for testing. No data augmentation was used.
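As an illustration of this split, a minimal Python sketch is given below. The use of scikit-learn is an assumption (the paper does not state the tooling), and the variable names (`images`, `weights`, etc.) are hypothetical.

```python
# Minimal 80/20 split sketch; scikit-learn assumed, variable names hypothetical.
from sklearn.model_selection import train_test_split

# `images`: NumPy array of scout images; `weights`: matching body weights in kg.
x_train_val, x_test, y_train_val, y_test = train_test_split(
    images, weights, test_size=0.2, random_state=42)
```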

The scout images were converted from the Digital Imaging and Communications in Medicine (DICOM) format to the Joint Photographic Experts Group format for training. The window width and level were set to the preset values stored in the DICOM tags. The first preprocessing step was to normalize image size before feeding the images to the CNN. All scout images had a constant width of 552 pixels, but the image height varied considerably. Accordingly, we normalized the image height to 552 pixels by preserving the aspect ratio and applying zero-padding. These images were then resized to 224 × 224 pixels for transfer learning of the CNN.
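The following Python sketch illustrates one reading of this resizing step (OpenCV and NumPy are assumed; the function is illustrative, not the authors' actual code, and the window width/level from the DICOM tags are assumed to have been applied beforehand).

```python
# Illustrative preprocessing sketch: aspect-preserving resize with zero-padding to a
# 552 x 552 square, then downsampling to the 224 x 224 input size expected by VGG16.
import cv2
import numpy as np

def pad_and_resize(img, padded_size=552, target=224):
    h, w = img.shape
    scale = padded_size / max(h, w)                       # preserve aspect ratio
    img = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.zeros((padded_size, padded_size), dtype=img.dtype)
    h2, w2 = img.shape
    top, left = (padded_size - h2) // 2, (padded_size - w2) // 2
    canvas[top:top + h2, left:left + w2] = img            # zero-padding
    small = cv2.resize(canvas, (target, target))          # 224 x 224 for transfer learning
    return np.stack([small] * 3, axis=-1)                 # replicate to 3 channels for VGG16
```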

Deep convolutional neural network structure and training

A regression model to predict body weight was generated based on the VGG16 architecture[17], which was pretrained on the ImageNet database[18]. VGG16 consists of 13 convolutional layers and three fully connected layers, with rectified linear unit (ReLU) activation functions and dropout. In this study, the fully connected layers of the original VGG16 were removed, and the following three layers were added: (1) a flatten layer, (2) a fully connected dense layer with ReLU activation, and (3) a final fully connected dense layer with an output size of 1 (Fig. 1). Only the added fully connected layers were trained when creating the model. The loss function was mean squared error, and the Adam[19] optimizer was used to adjust the model weights. The initial learning rate was 0.001 and was reduced to one-tenth of its value every three epochs of training. The maximum number of training epochs was 40, and the batch size was 32.
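A minimal Keras sketch of the described network follows (the authors used Keras 2.3.1 with a TensorFlow backend; the width of the added dense layer and the learning rate scheduler implementation are assumptions, as they are not stated in the text).

```python
# Sketch of the transfer-learning regression model described above.
# The dense-layer width (256) is an assumption.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.callbacks import LearningRateScheduler
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

def build_model():
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                      # only the added layers are trained
    x = Flatten()(base.output)                  # (1) flatten layer
    x = Dense(256, activation="relu")(x)        # (2) fully connected layer with ReLU
    out = Dense(1)(x)                           # (3) single-output regression layer
    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer=Adam(learning_rate=0.001), loss="mean_squared_error")
    return model

# Reduce the learning rate to one-tenth every three epochs.
def step_decay(epoch, lr):
    return lr * 0.1 if epoch > 0 and epoch % 3 == 0 else lr

lr_callback = LearningRateScheduler(step_decay)
```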

A k-fold cross-validation method with k = 5 was used for training and validation of the CNN model. Accordingly, the model was trained five times, with four of the five folds used for training and the remaining fold used for validation.
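Under the hypothetical names introduced in the sketches above (`build_model`, `lr_callback`, `x_train_val`, `y_train_val`, assumed to be NumPy arrays), the cross-validation loop could look like the following; the use of scikit-learn's KFold is likewise an assumption.

```python
# Five-fold cross-validation sketch; each fold yields one trained model.
from sklearn.model_selection import KFold

fold_models = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(x_train_val):
    model = build_model()
    model.fit(x_train_val[train_idx], y_train_val[train_idx],
              validation_data=(x_train_val[val_idx], y_train_val[val_idx]),
              epochs=40, batch_size=32, callbacks=[lr_callback])
    fold_models.append(model)
```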

The CNN models were trained under the following environment: CPU, Intel Core i7-9700F; GPU, NVIDIA GeForce RTX 2070 Super 8GB; Framework, Keras 2.3.1 with TensorFlow backend (ver.2.0.0); Language, Python 3.7.10.

Evaluation of the created models

Body weight in the test sets was predicted using the created regression models. For each subject, the predicted body weight was averaged over the five created models. Scatter plots of the actual and predicted body weight were generated, and the differences between the actual and predicted body weight were calculated. The mean absolute error (MAE) was then calculated for the chest and abdominal datasets separately.
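A sketch of this evaluation step, reusing the hypothetical names from the sketches above, is given below.

```python
# Average the five fold models' predictions per subject, then compute the error metrics.
import numpy as np

predicted = np.mean([m.predict(x_test).ravel() for m in fold_models], axis=0)
differences = predicted - y_test
mae = np.mean(np.abs(differences))
print(f"MAE: {mae:.2f} kg")
```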

Statistical analysis

Statistical analysis was performed using free statistical software (R version 3.5.1; The R Foundation for Statistical Computing, Vienna, Austria). The correlation between the actual and predicted body weight was evaluated by calculating Pearson’s correlation coefficient. The significance level was set at a p-value of < 0.05.
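The analysis itself was run in R; an equivalent check of the correlation in Python with SciPy, using the hypothetical names from the sketches above, would be:

```python
# Pearson correlation between actual and predicted body weight (SciPy equivalent of the
# R analysis described above; `y_test` and `predicted` are the names used earlier).
from scipy.stats import pearsonr

r, p_value = pearsonr(y_test, predicted)
print(f"r = {r:.3f}, p = {p_value:.3g}")
```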

Results

Table 1 shows the baseline characteristics of the study subjects in the chest and abdominal datasets. Of note, there were no pediatric subjects in our datasets, as all subjects were aged at least 24 years (chest dataset) or 25 years (abdominal dataset); this restricts the domain of our models to an adult population. The mean body weight was 65.1 kg (range, 34.5–118.5 kg) in the chest dataset and 65.5 kg (range, 34.5–129.7 kg) in the abdominal dataset.

Table 1
Summary of baseline characteristics of study subjects

Characteristics                Chest dataset    Abdomen dataset
Age, mean (range), years       59.8 (24–92)     58.1 (25–92)
Sex, male/female               1330/501         291/228
Weight, mean (SD), kg          65.1 (11.8)      65.5 (13.5)
Height, mean (SD), cm          166.0 (8.5)      163.4 (9.5)
BMI, mean (SD), kg/m2          23.5 (3.2)       24.4 (3.6)

BMI, body mass index; SD, standard deviation.

Figure 2 shows scatterplots comparing the actual and predicted body weight in test sets. There were strong correlations between the actual and predicted body weight (r = 0.955, p < 0.001 for chest; r = 0.901, p < 0.001 for abdomen).

Figure 3 shows histograms of the differences between the actual and predicted body weight. The number (%) of subjects whose predicted weight was within ± 10 kg of the actual body weight was 361/367 (98.4%) for the chest dataset and 95/104 (91.3%) for the abdominal dataset. The MAEs were 2.75 kg and 4.77 kg for the chest and abdominal datasets, respectively.

Representative cases for body weight estimation are shown in Figs. 4 and 5.

Discussion

Body weight is an indispensable parameter for determining the dose of contrast media, appropriate drug dosing, and the management of radiation dose. In this study, body weight predicted by applying a deep learning technique to chest and abdominal CT scout images was found to be highly correlated with the actual body weight. Our models achieved an MAE within 5.0 kg for both the chest and abdominal datasets, even with a relatively modest training dataset size.

To the best of our knowledge, this is the first attempt to predict body weight from CT scout images using a deep learning technique. In contrast, previous studies required diagnostic abdominal CT images[7] or effective mAs from whole-body scan data[8] for body weight estimation. This indicates that our CNN-based method can predict patient body weight even when non-contrast CT images are not available. In clinical radiology, contrast-enhanced CT is frequently performed immediately after the first “scout” or “localizer” acquisition, without acquiring non-contrast CT images. Thus, our method could be applicable to more cases than previously proposed methods.

Fernandes et al. reported that patients’ own weight estimates are likely to be more accurate than those of physicians or nurses if weight measurement on an accurate scale is impractical[5]. However, patients in emergency care often have difficulty reporting their own body weight. A bedside method using supine thigh and abdominal circumference measurements by Buckley et al. yielded greater accuracy than visual body weight estimates made by physicians and nurses, but deviations of > ± 10 kg from the measured body weight were still noted in 15% of male patients and 27% of female patients[6]. An equation based on effective mAs by Gascho et al. showed a strong correlation (r = 0.969) between measured and predicted body weight for both women and men with a postmortem interval of < 4 days[8]. In the present study, deviations of > ± 10 kg from the actual body weight were noted in only 1.6% of the chest dataset and 8.7% of the abdominal dataset, and the correlations between the actual and predicted body weight were strong (r > 0.9) for both regions. These results suggest that our CNN-based method has potential for accurately predicting body weight in adult patients with unknown body weight.

In this study, a better correlation was observed for chest scout images than for abdominal scout images. One possible reason is that the abdominal dataset was smaller than the chest dataset. However, a previous study by Fukunaga et al. showed a similar tendency, in which a better correlation between body weight and effective diameter was found in chest CT than in abdominal CT[2]. These findings suggest that body weight should be estimated from the chest region rather than the abdominal region when the scan range includes the chest.

There were several limitations in our study. First, sex was not considered when creating the models because of the limited dataset size. The equation by Gascho et al. incorporated sex into body weight estimation on the basis of multivariate linear regression analysis[8]; therefore, the performance of our models might be improved by considering sex. Second, we only trained and tested our models on CT scout images of medical checkup subjects, and our results may not generalize to some clinical settings. The performance of our models should be assessed in patients who are unable to raise their arms or who have metallic implants. Third, this was a retrospective study with training and test sets from a single institution, and the ability of the models to generalize to CT scout images obtained at external institutions with other scanners is unknown. Finally, our models were created only for the chest and abdominal regions. However, they could be applied to other scan ranges, such as neck to pelvis, chest to pelvis, and abdomen to pelvis, by cropping the chest or abdominal regions.

In conclusion, our CNN-based method can predict body weight from chest and abdominal CT scout images with high accuracy. This approach may enable appropriate contrast medium dosing and CT dose management in adult patients with unknown body weight.

Declarations

Data availability

The code generated during the current study is available from the corresponding author on reasonable request. However, the image datasets presented in this study are not publicly available for ethical reasons.

Acknowledgements

The authors would like to thank Enago (www.enago.jp) for the English language review.

Author contributions

S.I. contributed to the study design, data collection, algorithm construction, and the writing and editing of the article; M.H. carried out the data collection, and the reviewing and editing of the article; H.S. performed supervision, project administration, and reviewing and editing of the article. All authors read and approved the final manuscript.

Additional Information

Competing interests

The authors declare no competing interests.

References

  1. Bae, K. T. Intravenous contrast medium administration and scan timing at CT: considerations and approaches. Radiology 256, 32-61 (2010).
  2. Fukunaga, M. et al. CT dose management of adult patients with unknown body weight using an effective diameter. Eur. J. Radiol. 135, 109483 (2021).
  3. Hall, W. L., Larkin, G. L., Trujillo, M. J., Hinds, J. L. & Delaney, K. A. Errors in weight estimation in the emergency department: comparing performance by providers and patients. J. Emerg. Med. 27, 219-224 (2004).
  4. Menon, S. & Kelly, A. M. How accurate is weight estimation in the emergency department? Emerg. Med. Australas. 17, 113-116 (2005).
  5. Fernandes, C. M. B., Clark, S., Price, A. & Innes, G. How accurately do we estimate patients’ weight in emergency departments? Can. Fam. Physician 45, 2373-2376 (1999).
  6. Buckley, R. G. et al. Bedside method to estimate actual body weight in the Emergency Department. J. Emerg. Med. 42, 100-104 (2012).
  7. Geraghty, E. M. & Boone, J. M. Determination of height, weight, body mass index, and body surface area with a single abdominal CT image. Radiology 228, 857-863 (2003).
  8. Gascho, D. et al. A new method for estimating patient body weight using CT dose modulation data. Eur. Radiol. Exp. 1, 23 (2017).
  9. Thian, Y. L. et al. Convolutional neural networks for automated fracture detection and localization on wrist radiographs. Radiol. Artif. Intell. 1, e180001 (2019).
  10. Sugimori, H. & Kawakami, M. Automatic detection of a standard line for brain magnetic resonance imaging using deep learning. Appl. Sci. 9, 3849 (2019).
  11. Arab, A. et al. A fast and fully-automated deep-learning approach for accurate hemorrhage segmentation and volume quantification in non-contrast whole-head CT. Sci. Rep. 10, 19389 (2020).
  12. Duong, M. T. et al. Convolutional neural network for automated FLAIR lesion segmentation on clinical brain MR imaging. AJNR. Am. J. Neuroradiol. 40, 1282-1290 (2019).
  13. Fang, X., Harris, L., Zhou, W. & Huo, D. Generalized radiographic view identification with deep learning. J. Digit. Imaging 34, 66-74 (2021).
  14. Sugimori, H., Hamaguchi, H., Fujiwara, T. & Ishizaka, K. Classification of type of brain magnetic resonance images with deep learning technique. Magn. Reson. Imaging 77, 180-185 (2021).
  15. Yasaka, K., Akai, H., Kunimatsu, A., Kiryu, S. & Abe, O. Prediction of bone mineral density from computed tomography: application of deep learning with a convolutional neural network. Eur. Radiol. 30, 3549-3557 (2020).
  16. Halabi, S. S. et al. The RSNA pediatric bone age machine learning challenge. Radiology 290, 498-503 (2019).
  17. Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc. 1-14 (2014).
  18. Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 84-90 (2017).
  19. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc. 1-15 (2014).