The rapid pandemic-level outbreak of coronavirus disease 2019 (COVID-19) has caused illness of widely varying severity, predominantly respiratory tract infection1,2,3,4. Although most infected patients are asymptomatic or show only mild clinical manifestations, further investigation beyond real-time reverse transcriptase polymerase chain reaction (RT-PCR) or rapid COVID-19 tests, such as chest radiography, is routinely indicated in worsening cases that require hospitalization5,6. Characteristic chest radiograph findings in COVID-19-related pneumonia are bilateral patchy and/or confluent, band-like ground-glass opacity or consolidation in a peripheral and mid-to-lower lung zone distribution. By contrast, several studies have found that almost one-half of chest radiographs are normal at initial presentation despite the presence of clinical symptoms7,8,9,10.
Because of its higher sensitivity, specificity, and speed, chest computed tomography (CT) has become more useful than RT-PCR for early detection, for obtaining more information about chest pathology, and for evaluating the severity of lung involvement. Moreover, it can assist triage, especially when hospitalization is required but healthcare personnel, inpatient beds, and medical equipment are in short supply, and it may serve as a standard modality for the rapid diagnosis of COVID-19-related pneumonia11,12,13,14,15. Characteristic chest CT findings are bilateral, peripheral ground-glass opacities (GGO), sometimes with rounded morphology, with or without consolidation or visible intralobular lines, as well as the reverse halo sign or other findings of organizing pneumonia16,17,18,19. The total severity score (TSS), proposed by Chung et al.20, is calculated by summing per-lobe scores of lesion involvement across the five lung lobes; it is used to categorize the severity of lung involvement and to help determine appropriate therapeutic management and prognosis21.
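As an illustration of this scoring scheme, the following Python sketch maps per-lobe involvement percentages to scores and sums them. The 0–4 per-lobe scale and the 25/50/75% cut-offs are the commonly used ones and are stated here as assumptions; the exact thresholds should be taken from Chung et al.20.

```python
# Sketch: total severity score (TSS) from per-lobe percentage of involvement.
# Assumes the commonly used 0-4 per-lobe scale; exact thresholds should be
# taken from Chung et al. (reference 20).

def lobe_score(percent_involvement: float) -> int:
    """Map a lobe's percentage of involvement to a 0-4 score."""
    if percent_involvement <= 0:
        return 0
    if percent_involvement <= 25:
        return 1
    if percent_involvement <= 50:
        return 2
    if percent_involvement <= 75:
        return 3
    return 4

def total_severity_score(lobe_percentages: list[float]) -> int:
    """Sum the five per-lobe scores (expected range 0-20)."""
    assert len(lobe_percentages) == 5, "expects one value per lung lobe"
    return sum(lobe_score(p) for p in lobe_percentages)

# Example: mild involvement in two lobes, none elsewhere.
print(total_severity_score([10.0, 30.0, 0.0, 0.0, 0.0]))  # -> 3
```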
To reduce the time required for interpretation and to increase the accuracy with which lesions can be detected, deep learning has been used to analyze medical images efficiently by performing tasks such as semantic segmentation, which is image annotation at the pixel level; classifying each pixel yields a delineated target region in the image. The U-Net model is a convolutional neural network-based model that was originally developed for the semantic segmentation of biomedical images and is now one of the most widely used image segmentation techniques. The model structure is U-shaped and consists of two parts: a contracting path (encoder) and an expanding path (decoder)22. Subsequently, a U-Net variant supporting three-dimensional (3D) volumes, called 3D-UNet, was introduced23. The 3D-UNet model has been used to develop more efficient 3D models for the segmentation of lesions and lung tissue24,25. Cropping the lung area before lesion segmentation can improve accuracy26. Enshaei et al.27 developed a model for predicting lesion regions in CT images of COVID-19 patients that first predicts the lung area before the lesion regions are considered; this two-stage approach enables the lesion model to predict lesions more accurately. In another study, a deep learning model was applied to lung lobe segmentation and accurately segmented each lobe from lung CT scans28. Lung lobe segmentation has also been used in lung segmentation research to improve segmentation accuracy in multiple diseases such as chronic obstructive pulmonary disease (COPD), lung cancer, and COVID-19-related pneumonia29.
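To make the encoder–decoder idea concrete, the following PyTorch sketch shows a minimal two-level 3D U-Net with skip connections. The channel widths, depth, and class count are illustrative assumptions and do not correspond to the architecture used in this study.

```python
# Minimal 3D U-Net-style encoder-decoder in PyTorch (illustrative sketch only).
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    """Two-level 3D U-Net: contracting path, bottleneck, expanding path."""
    def __init__(self, in_ch: int = 1, n_classes: int = 2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)          # encoder level 1
        self.enc2 = conv_block(16, 32)             # encoder level 2
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)             # 32 (skip) + 32 (upsampled)
        self.up1 = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)             # 16 (skip) + 16 (upsampled)
        self.head = nn.Conv3d(16, n_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)  # per-voxel class logits

# Example: one single-channel CT volume of 64^3 voxels.
logits = TinyUNet3D()(torch.randn(1, 1, 64, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64, 64])
```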
As an example of the use of deep learning for computer-aided diagnosis of infection severity, Aswathy A. L. and Vinod Chandra S. S.30 used 3D-UNet models to segment the lung parenchyma and infected regions in lung CT scans and obtained accurate segmentations. Qiblawey et al.31 developed a system to classify the severity of lung involvement on CT in COVID-19 patients, using deep learning to analyze the lesion area and compare it with the lung area in CT images. They calculated the percentage of infection (PI) using a U-Net model combined with pre-trained models such as ResNet and DenseNet. The residual neural network (ResNet) was first presented by He et al.32 as a solution to the vanishing gradient problem: a residual block carries its input forward through a skip connection and adds it to the block's output, so that information and gradients can pass across layers. DenseNet was first presented by Huang et al.33; its structure consists of dense blocks in which each convolutional layer receives the feature maps of all preceding layers as input, which encourages feature reuse and helps the model learn the features of the data. These architectures can therefore be applied to lung lobe segmentation and lesion segmentation in CT scan images.
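The two connectivity patterns described above can be sketched in a few lines of PyTorch. The blocks below are simplified for intuition; the layer sizes are assumptions and are not the exact blocks of He et al.32 or Huang et al.33.

```python
# Illustrative sketches of a residual block and a dense layer in PyTorch.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Output is F(x) + x: the input is carried over a skip connection and
    added back, easing gradient flow in deep networks."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + x)  # residual (identity) connection

class DenseLayer(nn.Module):
    """One layer of a dense block: its output is concatenated with its input,
    so every later layer sees the feature maps of all earlier layers."""
    def __init__(self, in_channels: int, growth_rate: int = 12):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([x, self.body(x)], dim=1)  # dense connectivity

x = torch.randn(1, 16, 32, 32)
print(ResidualBlock(16)(x).shape)  # torch.Size([1, 16, 32, 32])
print(DenseLayer(16)(x).shape)     # torch.Size([1, 28, 32, 32])
```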
In this study, deep learning semantic segmentation was used for lung severity scoring of COVID-19 infection. The proposed method combines 3D-UNet models with pre-trained DenseNet and ResNet encoders to compute the PI from the lung lobe and lesion segmentation results and to estimate the TSS automatically. The aim is to reduce radiologists' workload and the time spent on imaging diagnostics, as well as to improve reporting accuracy.
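The scoring step of such a pipeline can be summarized as computing a per-lobe PI from the lobe and lesion masks and then converting it to the TSS. The sketch below assumes a lobe mask with labels 1–5 and a binary lesion mask on the same voxel grid; the mask encoding is an illustrative assumption, not the study's implementation.

```python
# Sketch of the scoring step: per-lobe PI from a lobe mask (labels 1-5 for the
# five lobes, 0 for background) and a binary lesion mask on the same grid.
import numpy as np

def per_lobe_percentage_of_infection(lobe_mask: np.ndarray,
                                     lesion_mask: np.ndarray) -> list[float]:
    """PI of each lobe = lesion voxels inside that lobe / lobe voxels * 100."""
    percentages = []
    for label in range(1, 6):                       # five lung lobes
        lobe = lobe_mask == label
        infected = np.logical_and(lobe, lesion_mask > 0).sum()
        pct = 100.0 * infected / lobe.sum() if lobe.sum() else 0.0
        percentages.append(float(pct))
    return percentages

# Toy example: an 8x8x8 volume split into lobes 1 and 2, lesion in lobe 1 only.
lobes = np.ones((8, 8, 8), dtype=np.int32)
lobes[4:] = 2
lesion = np.zeros((8, 8, 8), dtype=np.int32)
lesion[:2] = 1                                      # half of lobe 1 infected
pi = per_lobe_percentage_of_infection(lobes, lesion)
print([round(p, 1) for p in pi])                    # [50.0, 0.0, 0.0, 0.0, 0.0]
# These per-lobe percentages would then be mapped to 0-4 scores and summed
# into the TSS, as in the earlier scoring sketch.
```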