The CNN models (ResNetV2, VGG19, DenseNet201, InceptionV3, MobileNetV2, GoogleNet, Xception and InceptionResNetV2) were pre-trained with randomly initialized weights using the Adam optimizer. To assess the performance of the proposed deep learning classifiers, 80% of the X-ray images covering normal and diseased cases (44 images of the dataset) were randomly selected for training. The training parameters for all deep convolutional neural network architectures in this study were as follows: the batch size, learning rate, and number of epochs were carefully set to 3, 1e-3, and 25, respectively, to achieve good convergence within few iterations on this small X-ray image dataset and, as far as possible, to avoid the degradation problem. All deep network classifiers were trained using Stochastic Gradient Descent (SGD) because of its good convergence and fast running time. Image data augmentation was not used in this study.
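As a minimal sketch, the training configuration above can be captured as a plain hyperparameter record; the total dataset size of 55 images below is an inference from the statement that the 80% training split contains 44 images, not a figure reported directly in the text:

```python
# Sketch of the stated training configuration (hypothetical variable names).
hyperparams = {
    "batch_size": 3,
    "learning_rate": 1e-3,
    "epochs": 25,
    "optimizer": "SGD",     # all classifiers are trained with SGD, per the text
    "augmentation": False,  # image data augmentation was not used
}

def train_test_split_sizes(total_images: int, train_fraction: float = 0.8):
    """Return (train, test) image counts for a random split of the dataset."""
    n_train = round(total_images * train_fraction)
    return n_train, total_images - n_train

# 44 training images at an 80% split implies 55 images in total (assumption).
n_train, n_test = train_test_split_sizes(55)
```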
Table 4
Computational times and classification accuracy of all tested deep learning models of the COVID-Net on a GPU
Classifier Name | Training Time (Sec) | Testing Time (Sec) | Accuracy (%) |
ResNetV2 | 1089.00 | 2.00 | 70 |
VGG19 | 2872.00 | 4.00 | 100 |
InceptionV3 | 1291.00 | 3.00 | 69 |
DenseNet201 | 2321.00 | 5.00 | 98 |
Xception | 2043.00 | 4.00 | 82 |
GoogleNet | 1989.00 | 5.00 | 95 |
MobileNetV2 | 372.00 | 4.00 | 60 |
InceptionResNetV2 | 1872.00 | 3.00 | 78 |
Table 4 compares the computational times and accuracies of the tested deep learning image classifiers. The training times of all deep learning models are relatively short, ranging from 372.0 to 2872.0 seconds, owing to the powerful capabilities of the GPU and TPU combined with the small X-ray image dataset. The resulting testing times of the DeepCOVID-Net models did not exceed 5 seconds on the selected test images, as shown in Fig. 8. Among all tested classifiers, the MobileNetV2 model had the worst accuracy at 60%, while the VGG19, DenseNet201 and GoogleNet models achieved the best accuracy values of 95–100%.
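For illustration, the rankings stated above can be reproduced directly from the Table 4 entries; the dictionary below simply transcribes the table:

```python
# Table 4 results: model -> (training time s, testing time s, accuracy %)
results = {
    "ResNetV2": (1089.0, 2.0, 70),
    "VGG19": (2872.0, 4.0, 100),
    "InceptionV3": (1291.0, 3.0, 69),
    "DenseNet201": (2321.0, 5.0, 98),
    "Xception": (2043.0, 4.0, 82),
    "GoogleNet": (1989.0, 5.0, 95),
    "MobileNetV2": (372.0, 4.0, 60),
    "InceptionResNetV2": (1872.0, 3.0, 78),
}

best = max(results, key=lambda m: results[m][2])     # highest accuracy
worst = min(results, key=lambda m: results[m][2])    # lowest accuracy
fastest = min(results, key=lambda m: results[m][0])  # shortest training time
```

Note that MobileNetV2 is both the fastest to train and the least accurate, which is consistent with its lightweight design.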
Table 5
Prediction performance scores obtained from different pre-trained CNN models for the 5-fold cross-validation process
Training Model Name | Fold | TP | TN | FP | FN | AC (%) | RC (%) | SP (%) | PC (%) | F1 (%) |
ResNetV2 | Fold − 1 | 4 | 3 | 2 | 1 | 75 | 86 | 70 | 73 | 80 |
Fold − 2 | 5 | 4 | 1 | 0 | 70 | 60 | 89 | 90 | 75 |
Fold − 3 | 5 | 4 | 1 | 0 | 62 | 66 | 89 | 90 | 73 |
Fold − 4 | 3 | 5 | 0 | 2 | 65 | 70 | 99 | 99 | 63 |
Fold − 5 | 4 | 5 | 0 | 1 | 78 | 60 | 100 | 100 | 60 |
| MEAN | 70 | 68 | 89 | 90 | 70 |
VGG19 | Fold − 1 | 5 | 5 | 0 | 0 | 100 | 100 | 100 | 100 | 100 |
Fold − 2 | 5 | 5 | 0 | 0 | 100 | 100 | 100 | 100 | 100 |
Fold − 3 | 5 | 5 | 0 | 0 | 100 | 100 | 100 | 100 | 100 |
Fold − 4 | 5 | 5 | 0 | 0 | 100 | 100 | 100 | 100 | 100 |
Fold − 5 | 5 | 5 | 0 | 0 | 100 | 90 | 95 | 96 | 90 |
| MEAN | 100 | 98 | 99 | 99.2 | 98 |
InceptionV3 | Fold − 1 | 3 | 4 | 1 | 2 | 60 | 45 | 89 | 90 | 60 |
Fold − 2 | 4 | 2 | 3 | 1 | 69 | 45 | 89 | 90 | 65 |
Fold − 3 | 5 | 5 | 0 | 0 | 68 | 63 | 99 | 99 | 68 |
Fold − 4 | 5 | 5 | 0 | 0 | 80 | 62 | 100 | 100 | 80 |
Fold − 5 | 3 | 5 | 0 | 2 | 69 | 32 | 70 | 73 | 68 |
| MEAN | 69 | 50 | 89 | 90 | 68 |
DenseNet201 | Fold − 1 | 3 | 5 | 0 | 2 | 90 | 80 | 100 | 100 | 89 |
Fold − 2 | 5 | 5 | 0 | 0 | 100 | 100 | 100 | 100 | 100 |
Fold − 3 | 5 | 5 | 0 | 0 | 100 | 100 | 100 | 100 | 100 |
Fold − 4 | 5 | 5 | 0 | 0 | 100 | 100 | 100 | 100 | 100 |
Fold − 5 | 5 | 5 | 0 | 0 | 100 | 100 | 100 | 100 | 100 |
| MEAN | 98 | 96 | 100 | 100 | 98 |
Xception | Fold − 1 | 4 | 3 | 2 | 1 | 75 | 86 | 70 | 73 | 80 |
Fold − 2 | 5 | 4 | 1 | 0 | 77 | 96 | 89 | 90 | 93 |
Fold − 3 | 5 | 4 | 1 | 0 | 80 | 96 | 89 | 90 | 93 |
Fold − 4 | 3 | 5 | 0 | 2 | 90 | 70 | 99 | 99 | 80 |
Fold − 5 | 5 | 5 | 0 | 0 | 91 | 60 | 100 | 100 | 75 |
| MEAN | 82 | 81 | 89 | 90 | 84 |
GoogleNet | Fold − 1 | 4 | 5 | 0 | 1 | 80 | 100 | 100 | 100 | 81 |
Fold − 2 | 5 | 4 | 1 | 0 | 80 | 100 | 100 | 100 | 80 |
Fold − 3 | 5 | 4 | 1 | 0 | 90 | 100 | 100 | 100 | 90 |
Fold − 4 | 3 | 5 | 0 | 2 | 100 | 100 | 100 | 100 | 100 |
Fold − 5 | 5 | 5 | 0 | 0 | 100 | 100 | 100 | 100 | 100 |
| MEAN | 90 | 100 | 100 | 100 | 90 |
MobileNetV2 | Fold − 1 | 3 | 4 | 1 | 2 | 54 | 45 | 89 | 90 | 45 |
Fold − 2 | 4 | 2 | 3 | 1 | 58 | 45 | 89 | 90 | 45 |
Fold − 3 | 4 | 5 | 0 | 1 | 59 | 63 | 99 | 99 | 63 |
Fold − 4 | 5 | 5 | 0 | 0 | 63 | 62 | 100 | 100 | 62 |
Fold − 5 | 3 | 5 | 0 | 2 | 66 | 42 | 70 | 73 | 45 |
| MEAN | 60 | 52 | 89 | 90 | 52 |
InceptionResNetV2 | Fold − 1 | 4 | 5 | 0 | 1 | 75 | 80 | 90 | 84 | 89 |
Fold − 2 | 4 | 2 | 3 | 1 | 80 | 70 | 80 | 91 | 81 |
Fold − 3 | 5 | 5 | 0 | 0 | 80 | 70 | 90 | 90 | 75 |
Fold − 4 | 5 | 5 | 0 | 0 | 80 | 81 | 90 | 80 | 74 |
Fold − 5 | 3 | 4 | 1 | 2 | 80 | 80 | 89 | 87 | 76 |
| MEAN | 80 | 76 | 89 | 86 | 75 |
The acronyms in Table 5 are: True Positive (TP), True Negative (TN), False Positive (FP), False Negative (FN), Accuracy (AC), Recall (RC), Specificity (SP), Precision (PC) and F1-Score (F1). The Xception model showed a moderate accuracy value of 82%, as recorded in Table 4. The values of the performance metrics of every deep learning classifier are presented in Table 5. The highest precision in detecting only positive COVID-19 cases was achieved by ResNetV2, InceptionResNetV2, Xception, and MobileNetV2; however, their corresponding performance in correctly classifying the normal cases was the worst. Accordingly, it is recommended that the VGG19, DenseNet201, and GoogleNet models be applied for identifying health status against COVID-19 in X-ray images in further research.
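As a minimal sketch, the Table 5 metrics follow from the confusion-matrix counts via the standard formulas (some tabulated values may additionally reflect rounding or per-fold averaging):

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard classification metrics (in %) from confusion-matrix counts."""
    accuracy = 100 * (tp + tn) / (tp + tn + fp + fn)
    recall = 100 * tp / (tp + fn) if tp + fn else 0.0       # sensitivity
    specificity = 100 * tn / (tn + fp) if tn + fp else 0.0
    precision = 100 * tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"AC": accuracy, "RC": recall, "SP": specificity,
            "PC": precision, "F1": f1}

# Example: VGG19, fold 1 (TP=5, TN=5, FP=0, FN=0) yields 100% on all metrics,
# matching the corresponding row of Table 5.
vgg19_fold1 = metrics(5, 5, 0, 0)
```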
Another detailed performance comparison of the models using the test data is shown in Table 5. The best performance was obtained for the pre-trained VGG19 model, with an accuracy of 98%, recall of 96%, and specificity of 100%. The lowest performance values, an accuracy of 60%, recall of 52%, and specificity of 89%, were yielded by MobileNetV2. Therefore, the VGG19, DenseNet201, and GoogleNet models outperform the other models in both the training and testing stages.
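The 5-fold cross-validation procedure behind Table 5 can be sketched with a plain index split; this is a stdlib-only illustration under the assumption of roughly 10 held-out test images per fold (5 COVID-19 and 5 normal, as the TP/TN/FP/FN counts in Table 5 suggest), not the authors' exact pipeline:

```python
def k_fold_indices(n_samples: int, k: int = 5):
    """Split sample indices into k contiguous, near-equal validation folds."""
    indices = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        val = indices[start:start + size]                 # held-out fold
        train = indices[:start] + indices[start + size:]  # remaining samples
        folds.append((train, val))
        start += size
    return folds

# With 50 samples and k=5, each fold holds out 10 samples for evaluation.
splits = k_fold_indices(50, k=5)
```

Per-fold metrics are then computed on each held-out set, and the MEAN rows of Table 5 average the five folds.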
The confusion matrices of all trained deep learning classifiers, together with the accuracy and cross-entropy loss for fold 3 of the pre-trained models during the training and validation steps, were computed. The best training and validation accuracy scores were achieved by the VGG19, DenseNet201, and GoogleNet models, and the worst case resulted from MobileNetV2, as also shown in Table 4.
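For this two-class (COVID-19 vs. normal) task, the cross-entropy monitored during training and validation reduces to the standard binary cross-entropy; a minimal sketch:

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy between 0/1 labels and predicted probabilities."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# Confident correct predictions give a loss near zero; confident wrong
# predictions give a large loss.
low_loss = binary_cross_entropy([1, 0, 1], [0.99, 0.01, 0.98])
high_loss = binary_cross_entropy([1, 0, 1], [0.05, 0.95, 0.10])
```

A validation loss that stays close to the training loss, as for the best-performing models here, indicates convergence without severe overfitting.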
The classification results that support this case study were obtained from ResNetV2, VGG19, InceptionV3, DenseNet201, Xception, GoogleNet, MobileNetV2 and InceptionResNetV2, as shown in Fig. 8. Each colour in the figure represents one of the performance measures: accuracy, recall, specificity, precision, and F1-score.