The training system uses an Anaconda Jupyter notebook with TensorFlow, as described by Vinod et al. [41], to train on the broad image dataset. The required libraries are imported at the start to connect the code with the other components. The image dataset is then loaded from its path, gradient mapping is applied to the images, and features are extracted in the spatial and frequency domains, viz. GLDM, FFT, DWT, GLCM, and Texture, for segmentation. Finally, we concatenate all 200 Haralick features from both domains and invoke the Ultra Covix technique to identify COVID-19 individuals with the help of the random forest algorithm, as exhibited in Fig. 2. The image preprocessing steps are exhibited in Algorithm 1, and feature extraction from the images is illustrated in Algorithm 2. Finally, classification and the efficiency measures of the system are exhibited in Algorithm 3 for the ultrasound image dataset across multiple classes.
Algorithm 1: Lung Ultrasound Image database for Preprocessing

Input : Input image I(e, f, g), (e, f, g) ∈ {1, 2, …, n}³, e = f = g

Output : Output image O(e, f, g), (e, f, g) ∈ {1, 2, …, n}³, e = f = g

Begin

For each Input image, I do

For {e, f, g} = 1 to n do

Apply gradient mapping to produce the heat maps using Eq. (5) on the sample images

Convert input image I to grayscale and resize it to 224 × 224

Apply min-max normalization and return the output image O

End For

End For

where e, f, g are the class labels in the ultrasound image dataset and n is the total number of images.
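The preprocessing steps of Algorithm 1 can be sketched as follows. This is a minimal NumPy-only illustration with hypothetical helper names; the paper's actual resizing and gradient-mapping code is not shown, so a nearest-neighbour resize stands in for whatever interpolation was used:

```python
# Sketch of Algorithm 1: grayscale conversion, nearest-neighbour resize to
# 224x224, and min-max normalization. Helper names are illustrative only.
import numpy as np

def to_gray(img):
    # Luminosity-weighted average over the three colour channels.
    return img @ np.array([0.299, 0.587, 0.114])

def resize_nearest(img, size=(224, 224)):
    # Nearest-neighbour resize: index the source grid at scaled positions.
    rows = np.arange(size[0]) * img.shape[0] // size[0]
    cols = np.arange(size[1]) * img.shape[1] // size[1]
    return img[rows][:, cols]

def minmax_normalize(img):
    # Rescale intensities to [0, 1].
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

rng = np.random.default_rng(0)
I = rng.integers(0, 256, size=(300, 400, 3)).astype(float)  # stand-in frame
O = minmax_normalize(resize_nearest(to_gray(I)))
```

In practice a library such as OpenCV (`cv2.cvtColor`, `cv2.resize`) would replace the hand-rolled helpers.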
Algorithm 2: Lung Ultrasound Image database for Feature Extraction

Input : Input image O(e, f, g), (e, f, g) ∈ {1, 2, …, n}³, e = f = g

Output : Ultra Covix model

Begin

For each Input image O, do

For {e, f, g} = 1 to n do

IF file = e THEN
Label ← 0
ELIF file = f THEN
Label ← 1
ELSE
Label ← 2
ENDIF

End For

End For

F ← {Calculate features on input image O (12 Haralick features)}

Texture, T ← Compute F on the input image O
FFT, F ← Compute F on the input image O
DWT, d1 ← Compute F on the input image O
DWT, d2 ← Compute F on the approximation of d1
D ← d1 + d2
GLDM, G ← Compute F on input image O in four directions using Eq. (6)
GLCM, g ← Compute F on input image O in four directions
Ultra Covix model ∈ O (total F = 200)
Save the model
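A hedged sketch of Algorithm 2's concatenation step is given below. The `stats_features` helper is a placeholder for the paper's Haralick-style statistics, and the per-domain feature counts shown here are illustrative only; they do not reproduce the exact 200-feature breakdown of the Ultra Covix model:

```python
# Hypothetical sketch of Algorithm 2: each domain contributes a fixed-length
# statistical feature vector, and all vectors are joined into one descriptor.
import numpy as np

def stats_features(x, k):
    # Placeholder statistics standing in for the paper's Haralick features.
    x = np.asarray(x, dtype=float).ravel()
    feats = [x.mean(), x.std(), x.min(), x.max(), np.median(x),
             x.var(), np.ptp(x), ((x - x.mean()) ** 3).mean(),
             ((x - x.mean()) ** 4).mean(), np.percentile(x, 25),
             np.percentile(x, 75), np.abs(x).mean()]
    return np.array(feats[:k])

img = np.arange(64, dtype=float).reshape(8, 8)        # stand-in image O
texture = stats_features(img, 12)                     # Texture features
fft = stats_features(np.abs(np.fft.fft2(img)), 12)    # FFT-domain features
dwt = np.concatenate([stats_features(img[::2, ::2], 12),
                      stats_features(img[1::2, 1::2], 12)])  # d1 + d2
gldm = np.concatenate([stats_features(np.diff(img, axis=a), 10)
                       for a in (0, 1)] * 2)          # 4 directions x 10
glcm = np.concatenate([stats_features(img, 10)] * 4)  # 4 directions x 10
feature_vector = np.concatenate([texture, fft, dwt, gldm, glcm])
```

The real pipeline would feed one such vector per image into the classifier of Algorithm 3.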

Algorithm 3: Train the Lung Ultrasound Image dataset facilitated by the Ultra Covix model

Input : Ultra Covix model, number of instances = 100, Testing ratio = 30%, Training ratio = 70%, number of trees = 100, Classifier = Random Forest.

Output: Confusion Matrix, Visualization, Performance metrics, Training and accuracy loss.

Begin

For Input Ultra Covix model, do

x = concatenate {(e(F), f(F), g(F))} ∈ {1, 2, …, n}³ (inputs)
y = concatenate {(e(F), f(F), g(F))} ∈ {1, 2, …, n}³ (target labels)

End For

4.1 Gradient Visualization
Gradient mapping is a prominent mechanism for the model. It employs global average pooling and estimates class-specific heat maps that reveal the discriminative areas of the image that provoke the class activity of interest [38]. The gradient mechanism rests on the basic inference that the final score \({X}^{d}\) for a specific category d can be expressed as a linear combination of the globally average-pooled feature maps \({B}^{i}\) of its last convolutional layer.
\({X}^{d}=\sum _{i}{g}_{i}^{d}\sum _{m}\sum _{n}{B}_{mn}^{i}\) (1)
Each spatial location (m, n) in the category-specific saliency map \({S}^{d}\) is then estimated as:
\({S}_{mn}^{d}=\sum _{i}{g}_{i}^{d}{B}_{mn}^{i}\) (2)
\({S}_{mn}^{d}\) precisely associates with the relevance of a specific spatial location (m, n) for a particular category d, and thus serves as visual evidence for the category predicted by the network. The class activation map evaluates these weights \({g}_{i}^{d}\) by training a linear classifier for each category d on the activation maps of the last convolutional layer; for a given image, the weight \({g}_{i}^{d}\) for a feature map \({B}^{i}\) and category d is equivalent to:
\({g}_{i}^{d}=Y\cdot \frac{\partial {X}^{d}}{\partial {B}_{mn}^{i}}\quad \forall \left\{m,n\mid m,n\in {B}^{i}\right\}\) (3)
where Y is a constant. Using the gradients flowing from the output category into the activation maps of the last convolutional layer as the neuron-importance weights \({g}_{i}^{d}\):
\({g}_{i}^{d}=\frac{1}{Y}\sum _{m}\sum _{n}\frac{\partial {X}^{d}}{\partial {B}_{mn}^{i}}\) (4)
The class-selective saliency maps \({S}^{d}\) for a given image are then estimated as a linear combination of the forward activation maps, followed by a ReLU activation function. Each spatial element of the saliency map \({S}^{d}\) is then calculated as:
\({S}_{mn}^{d}=ReLU\left(\sum _{i}{g}_{i}^{d}{B}_{mn}^{i}\right)\) (5)
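Equations (4) and (5) can be sketched in NumPy as follows, using random stand-in tensors for the feature maps and the class-score gradients:

```python
# NumPy sketch of Eqs. (4)-(5): average the gradients over spatial locations
# to get the neuron-importance weights g_i^d, form the weighted sum of the
# feature maps B^i, and pass it through ReLU to obtain the saliency map S^d.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((8, 14, 14))       # B^i: feature maps of the last conv layer
dXd_dB = rng.standard_normal((8, 14, 14))  # dX^d/dB^i: stand-in gradients

Y = B.shape[1] * B.shape[2]                # Y = number of spatial locations m*n
g = dXd_dB.sum(axis=(1, 2)) / Y            # Eq. (4): g_i^d
S = np.maximum(0.0, np.tensordot(g, B, axes=1))  # Eq. (5): ReLU(sum_i g_i^d B^i)
```

In a real network, `B` and `dXd_dB` would come from a forward pass and a backward pass through the model, and `S` would be upsampled to the input resolution for the heat-map overlay.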
For medical applications, gradient mapping, or its generalization Grad-CAM [39], can afford relevant decision support by verifying whether a mechanism bases its detections on precise pathological structures. Furthermore, gradient mapping can guide medical practitioners by pointing to descriptive structures, which is notably valuable in time-sensitive or insight-sensitive situations. The gradient visualization feeds the various segmentation approaches, viz. FFT, Wavelet, GLCM, GLDM, and Texture, used to diagnose COVID-19 individuals; by observation, these were found to perform better than other diagnosis methods.
Figure 3 exhibits the gradient mapping technique on the lung ultrasound images with and without COVID-19. For a more visual evaluation, we estimated the points of maximal activation of the gradient mapping for each class (COVID-19, Normal, and Pneumonia) over all the database images. While the heat maps are adequately scattered across the probe, pneumonia-associated features are localized at the center and bottom, in contrast to the COVID-19 and Normal patterns.
4.2 FFT-Based Segmentation
The Fast Fourier Transform (FFT) computes the discrete Fourier transform (DFT) as well as its inverse. The FFT procedure is used to convert a digital signal (d) over the range (r) from the time domain into the frequency domain (R), considering the amplitude of vibration at each frequency as the signal evolves.
The frequency-spectrum vector is split into different frequency bands to automate the selection of the frequencies sensitive to the deficiency under investigation. The average of each band is then taken as a representative descriptor for the content. In the FFT domain, we computed the 12 statistical features for all images.
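A minimal sketch of this step, assuming NumPy's FFT; only a few of the 12 statistics are shown for brevity:

```python
# Sketch of the FFT-based feature step: transform the image to the frequency
# domain, take the magnitude spectrum, and compute simple statistics over it.
import numpy as np

img = np.arange(64, dtype=float).reshape(8, 8)        # stand-in image
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))  # magnitude, DC centred
fft_features = np.array([spectrum.mean(), spectrum.std(),
                         spectrum.min(), spectrum.max()])
```

Splitting `spectrum` into radial bands and averaging each band would give the band-wise descriptors described above.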
4.3 GLCM-Based Segmentation
The Gray-Level Co-occurrence Matrix (GLCM) is a statistical procedure comprehensively used to characterize images, notably for Second Harmonic Generation (SHG) collagen image classification. The matrix captures the spatial association among the image pixels at a specific offset. Ordinarily, it is estimated for four directions at a specific distance, and textural features are derived from this matrix. Routinely, the different directions are averaged to obtain a single rotation-tolerant estimate.
The co-occurrence matrix is formally defined as the probability of grey level p occurring in the neighborhood of another grey level q at a distance f in direction C, S(p, q | f, C), where f is a displacement vector, f = (∆c, ∆i). The direction C is one of eight directions. The distinction between opposite directions is usually disregarded, and then symmetric probability matrices can be used for only four directions: 0°, 45°, 90°, and 135°. Statistical measures extract image features from this matrix. In the GLCM domain, we computed the first ten statistical features in four spatial directions for all images.
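The co-occurrence matrix S(p, q | f, C) described above can be sketched as follows. This is a hand-rolled NumPy illustration for one direction, with the four-direction offsets listed; production code would typically use `skimage.feature.graycomatrix` instead:

```python
# Minimal NumPy sketch of a symmetric grey-level co-occurrence matrix for one
# direction; the paper uses four directions (0°, 45°, 90°, 135°).
import numpy as np

def glcm(img, offset, levels):
    dr, dc = offset
    H, W = img.shape
    M = np.zeros((levels, levels))
    # Count pairs (img[r, c], img[r + dr, c + dc]) that lie inside the image.
    for r in range(max(0, -dr), min(H, H - dr)):
        for c in range(max(0, -dc), min(W, W - dc)):
            M[img[r, c], img[r + dr, c + dc]] += 1
    M += M.T          # symmetric: pool each direction with its reverse
    return M / M.sum()  # normalize to a probability matrix

img = np.array([[0, 0, 1], [1, 2, 2], [2, 2, 3]])  # tiny 4-level image
offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]     # 0°, 45°, 90°, 135°
P = glcm(img, offsets[0], levels=4)
# One Haralick-style statistic derived from the matrix:
contrast = sum(P[p, q] * (p - q) ** 2 for p in range(4) for q in range(4))
```

Computing `P` for all four offsets and extracting the first ten statistics from each yields the forty GLCM features per image.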
4.4 GLDM and Texture-Based Segmentation
Texture as an image feature is extremely significant in many image processing as well as computer vision applications. An exhaustive survey of texture analysis in the image processing literature reveals the fundamental focus on classification, segmentation, and association. Texture features have been used in diverse applications, for example, satellite and aerial image analysis, clinical image examination for recognizing deviations from the norm, and, recently, image retrieval using texture as a descriptor. This part gives an approach to describing texture using a multiband decomposition of the image, with application to characterization, segmentation, object recognition, and image retrieval. In the texture analysis, we computed the twelve statistical features from each of the images.
The grey-level difference method builds density functions over the pre-processed grey image. This procedure is used for extracting the overall texture features of a grey-level image. Contrast is defined as the variation in intensity between the highest and lowest intensity levels in an image; in this way, the local variations in grey level are captured. We computed the first ten statistical features in four spatial directions with distance t = 8 between reference and neighbor pixels (a, b) over all images in the dataset.
u (a, b) = v (a, b) – v (a, b + t) (6)
where v is the input image, u is the output image derived from v, and t is the distance for the GLDM estimation.
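Equation (6) amounts to differencing the image with a shifted copy of itself, from which a grey-level-difference density histogram is formed; a minimal NumPy sketch:

```python
# NumPy sketch of Eq. (6): u(a, b) = v(a, b) - v(a, b + t) for displacement t
# along one direction; a histogram of u gives the GLDM density used for the
# statistical features.
import numpy as np

def gldm_difference(v, t):
    # Difference image restricted to columns where b + t is inside the image.
    return v[:, :-t] - v[:, t:]

v = np.arange(100, dtype=float).reshape(10, 10)  # stand-in grey image
u = gldm_difference(v, t=8)
density, _ = np.histogram(u, bins=16)            # grey-level difference density
```

Repeating the difference along the other three directions and taking the first ten statistics of each density gives the forty GLDM features per image.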
4.5 Wavelet-Based Segmentation
A discrete wavelet transform is commonly described as a non-redundant, sampled CWT. The wavelet transform aims to represent a discrete-time sequence, x(s), as a set of (wavelet) coefficients. These coefficients are sampled from a CWT, commonly so as to produce an orthonormal set of basis functions. Wavelet families are plentiful, with varying characteristics. This part, though, is restricted to the case of orthogonal wavelets with small support.
There are a few equivalent viewpoints from which the wavelet transform can be regarded. Here we choose to examine it through the idea of a filter bank. A pair of finite impulse response (FIR) filters with N coefficients is defined. One of these filters is high-pass, while the second is low-pass; the two filters cut on/off at half of the sampling frequency. The wavelet transform can be described by employing these filters recursively. The filters are first applied to the input time series to produce the low-pass and high-pass components, X1(s) and X2(s), respectively:
X1(s) = \(\sum _{l=0}^{N-1}{e}_{l}y(s-l)\) (7)

X2(s) = \(\sum _{l=0}^{N-1}{f}_{l}y(s-l)\) (8)
where \({e}_{l}\) and \({f}_{l}\) are the coefficients of the low-pass and high-pass filters, respectively. It is entirely conventional to construct the high-pass filter from the low-pass filter, most commonly using the alternating-flip arrangement, so that the two sets of filter coefficients are related through:
$${f}_{l}={(-1)}^{l}{e}_{M-l}$$ (9)
The output of each of the two filters spans half of the input sequence's bandwidth, such that X1(s) contains the lower frequency band and X2(s) the upper band. Since the outputs of the filters together cover the original bandwidth of X(s), these two time series enclose the complete information.
Here, we have performed two successive coefficient decompositions, yielding the Coefficient Approximation (CA1), Coefficient Horizontal (CH1), Coefficient Vertical (CV1), and Coefficient Diagonal (CD1) bands, as shown in Fig. 4. CA1 is repeatedly decomposed into further wavelet coefficients, for instance CA2, CH2, CV2, and CD2; for each successive coefficient band, we computed the twelve statistical features.
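A one-level decomposition into the four coefficient bands can be sketched with an un-normalized Haar transform. This is a simplification: the paper's actual wavelet family is not specified here, and the CA/CH/CV/CD labelling follows the text:

```python
# One-level 2D Haar DWT sketch producing the four coefficient bands
# CA (approximation), CH, CV, and CD (details); repeating the transform on
# CA1 yields the second-level bands CA2, CH2, CV2, CD2.
import numpy as np

def haar_dwt2(x):
    # Pairwise averages/differences along rows, then along columns.
    lo_r = (x[:, 0::2] + x[:, 1::2]) / 2
    hi_r = (x[:, 0::2] - x[:, 1::2]) / 2
    CA = (lo_r[0::2] + lo_r[1::2]) / 2
    CH = (lo_r[0::2] - lo_r[1::2]) / 2
    CV = (hi_r[0::2] + hi_r[1::2]) / 2
    CD = (hi_r[0::2] - hi_r[1::2]) / 2
    return CA, CH, CV, CD

x = np.arange(64, dtype=float).reshape(8, 8)   # stand-in image
CA1, CH1, CV1, CD1 = haar_dwt2(x)
CA2, CH2, CV2, CD2 = haar_dwt2(CA1)            # second level on the approximation
```

Production code would typically use `pywt.dwt2` from PyWavelets, which returns the same four bands per level.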
4.6 Implementing Random Forest Algorithm
Random Forest is a supervised learning technique employed for classification as well as regression problems. Nevertheless, it is primarily used for classification problems. We can envisage that a forest consists of trees, and more trees indicate a more robust forest. Generally, the random forest technique generates decision trees on data samples, obtains a prediction from each of them, and finally chooses the best solution by voting. The ensemble mechanism is preferable to an individual decision tree since it curtails overfitting by averaging the results. The mechanism of the random forest classifier is exhibited in Algorithm 4.
The Random Forest technique, an ensemble machine learning mechanism well known for outperforming various machine learning classifiers, was adopted for this system. We trained random forest models on subdivisions of COVID-19 samples, characterized against the Normal as well as Pneumonia samples. The number of trees and the number of selected samples used in the random forest technique are both 100, to refine the results when the batch prediction is executed.
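The classifier stage with the stated settings (100 trees, 100 instances, 70/30 train/test split) can be sketched with scikit-learn on synthetic stand-in features; `X` below is a random placeholder for the real 200-dimensional Ultra Covix feature vectors:

```python
# Hedged sketch of the Random Forest stage: 100 trees, 70/30 split, and a
# confusion matrix over the three classes, on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.standard_normal((100, 200))   # 100 instances x 200 stand-in features
y = rng.integers(0, 3, size=100)      # labels: 0=COVID-19, 1=Normal, 2=Pneumonia

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
cm = confusion_matrix(y_te, clf.predict(X_te))  # 3x3 confusion matrix
```

On random labels the accuracy is near chance, of course; the point is only the wiring of the split, the 100-tree ensemble, and the confusion-matrix output named in Algorithm 3.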