Most intensity-based detection methods rely on textural features, invariant moments, and related factors. The classification task in this paper is to identify the subclasses of emphysema. Figure 2 shows the PECS system for COPD lung emphysema detection and classification.
The PECS consists of three modules, as in many pattern recognition systems: preprocessing, feature extraction, and classification.
3.1 Preprocessing Module
In the first module of PECS, the contrast of the CT image is enhanced using Contrast Limited Adaptive Histogram Equalization (CLAHE). CLAHE equalizes local intensity histograms, which avoids the over-enhancement produced by conventional global histogram equalization.
3.1.1 Contrast Limited Adaptive Histogram Equalization (CLAHE):
CLAHE is a variant of Adaptive Histogram Equalization (AHE) that corrects for contrast over-amplification. It operates on tiles, which are small regions of an image, rather than on the full image. To remove the artificial borders between tiles, neighbouring tiles are blended using bilinear interpolation. CLAHE may also be applied to colour images; equalizing only the luminance channel gives much better results than equalizing all channels of an RGB image. When applying the CLAHE technique, two parameters are to be considered:
- ClipLimit – This parameter sets the contrast-limiting threshold. Its default value is 40.
- TileGridSize – This specifies the number of tiles per row and column, 8x8 by default. The image is split into this grid of tiles before equalization.
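To make the clip limit concrete, the sketch below clips a toy tile histogram at a limit and redistributes the excess uniformly over all bins, which is the contrast-limiting step at the heart of CLAHE (the per-tile equalization and bilinear blending are omitted; the bin counts are illustrative only):

```python
import numpy as np

def clip_and_redistribute(hist, clip_limit):
    """Clip a tile histogram at clip_limit and spread the excess
    uniformly over all bins (the core of CLAHE's contrast limiting)."""
    hist = hist.astype(np.float64).copy()
    excess = np.sum(np.maximum(hist - clip_limit, 0))  # mass above the limit
    hist = np.minimum(hist, clip_limit)
    hist += excess / hist.size                          # uniform redistribution
    return hist

# Toy 8-bin tile histogram with one dominant intensity bin.
tile_hist = np.array([2, 3, 50, 4, 1, 0, 2, 2])
clipped = clip_and_redistribute(tile_hist, clip_limit=10)
# total pixel count is preserved; the dominant bin no longer overwhelms the CDF
```

Because the total count is unchanged, the cumulative distribution function built from the clipped histogram has a bounded slope, which is exactly what limits the contrast amplification described below.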
In CLAHE, the contrast amplification in the vicinity of a given pixel value is determined by the slope of the transformation function, which is proportional to the value of the neighbourhood cumulative distribution function at that pixel value. Before computing the distribution function, CLAHE restricts the amplification by clipping the histogram at a predetermined value. The limited slope of the distribution function and its transformation function are shown in Figure 3.
Figure 4 shows the conventional histogram equalization (HE) and CLAHE outputs of contrast-enhanced lung CT images obtained from the preprocessing of the PECS system. It shows the NT, CLE and PLE sample images fed into the preprocessing phase. Figure 5 shows the HE and CLAHE texture descriptor patterns obtained after preprocessing of the sample PSE image given in Figure 4(c).
3.2 Feature Extraction Module
In the second module of PECS, a new texture descriptor, Laws Local Binary Pattern (L2BP), is designed as a combination of the Laws texture map [1], [2] and the Local Binary Pattern (LBP) [3], [4]. Feature extraction is the process of extracting higher-level information, such as colour, shape, and texture, from an image.
3.2.1 LBP Features
LBP, a textural descriptor, is used in this study to sort COPD CT images into several classes [5]. LBP was first introduced in 1994 and has since proven to be a powerful feature extraction technique for texture classification. When LBP is coupled with the Histogram of Oriented Gradients (HOG) descriptor, detection performance on CT image datasets improves considerably. LBP was originally developed to assess local image contrast before being adopted as a statistical and structural texture descriptor.
Applying the LBP approach to the image yields a histogram that serves as the feature vector. The LBP feature vector is generated as follows:
- Divide the image under examination into cells, e.g., 16x16 pixels per cell.
- Compare each pixel in a cell to each of its eight neighbouring pixels (top-left, middle-left, bottom-left, top-right, etc.), following the pixels around the circle in a clockwise or counter-clockwise order.
- Write "0" where the value of the centre pixel is larger than the neighbour's value; otherwise write "1". This yields an eight-digit binary number.
- Compute, over each cell, the histogram of the frequency of each "number" occurring (i.e., each combination of which pixels are smaller and which are greater than the centre).
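The steps above can be sketched for a single 3x3 neighbourhood in pure numpy (the patch values and the clockwise read order starting at the top-left are illustrative choices):

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour LBP code for a 3x3 patch: a neighbour contributes 1
    when its value is >= the centre pixel, 0 otherwise, read clockwise
    starting at the top-left."""
    center = patch[1, 1]
    # clockwise order around the centre, starting at top-left
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2],
                  patch[1, 2], patch[2, 2], patch[2, 1],
                  patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return int("".join(map(str, bits)), 2)

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
code = lbp_code(patch)   # bits 1 0 0 0 1 1 1 1 -> 0b10001111 = 143
```

Sliding this over every pixel of a cell and histogramming the resulting 8-bit codes (256 bins) gives the cell's LBP feature vector.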
The feature vectors can then be processed by a NN or an ensemble learning network to classify images. In this paper, texture analysis of lung CT images is performed with LBP feature extraction to diagnose emphysema and its subclasses using two-stage NN classifiers.
Figure 6 shows the neighbourhood pixel size used to create the patterns that aid to determine the vector features.
LBP defines textures through textons, histograms of texture elements; the texture at each pixel is specified by its local structure. Binary information is extracted from the intensity differences between neighbouring pixels: a pixel's intensity level serves as the threshold value for the pixels nearby. Reading the resulting values in particular directions to represent distinct spots, curved edges, etc. is referred to as binary coding.
LBP has proven to be an excellent texture descriptor with a low computational cost. Because of this, it has attracted considerable attention from researchers, and several modifications have been proposed over time. The original LBP operator creates binary codes or patterns using an 8-neighbourhood, i.e., the eight pixels around the centre pixel. The original LBP's sensitivity to rotation isn't always desirable in a texture descriptor; monotonic intensity changes, on the other hand, have no effect on it.
When eqn. (1) is applied to the entire image, an LBP image with a distinct pattern is generated, as shown in Figure 7.
These images are extracted for each colour channel, ignoring the border pixels. This phase yields the LBP texture features.
3.2.2 LAWS texture map approach
Laws' approach is a feature extraction technique based on local texture energy measures. Five vectors are defined to extract the microstructural components of the image: Ripple, Wave, Spot, Edge and Level. The Laws vectors are shown in Table 1. From these vectors, 25 kernels are derived and convolved with the CT image to extract textures.
Table 1
Laws vectors

| Micro Structure | Vector             |
|-----------------|--------------------|
| Ripple (R)      | [ 1, -4, 6, -4, 1] |
| Wave (W)        | [-1, 2, 0, -2, 1]  |
| Spot (S)        | [-1, 0, 2, 0, -1]  |
| Edge (E)        | [-1, -2, 0, 2, 1]  |
| Level (L)       | [ 1, 4, 6, 4, 1]   |
Laws created five labelled vectors from which two-dimensional convolution kernels may be constructed. The resulting Laws kernels for extracting textures are shown in Table 2.
Table 2
Laws kernels for extracting textures

|            | Ripple (R) | Wave (W) | Spot (S) | Edge (E) | Level (L) |
|------------|------------|----------|----------|----------|-----------|
| Ripple (R) | RR         | RW       | RS       | RE       | RL        |
| Wave (W)   | WR         | WW       | WS       | WE       | WL        |
| Spot (S)   | SR         | SW       | SS       | SE       | SL        |
| Edge (E)   | ER         | EW       | ES       | EE       | EL        |
| Level (L)  | LR         | LW       | LS       | LE       | LL        |
When these masks are convolved with a textured image, they extract specific structural elements. Texture is an important aspect of human vision. Regardless of window size, the texture energy conversion takes approximately the same amount of time.
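The 25 kernels of Table 2 are the outer products of the five 1-D vectors of Table 1, and can be generated with a few lines of numpy:

```python
import numpy as np

# The five 1-D Laws vectors from Table 1.
vectors = {
    "R": np.array([ 1, -4, 6, -4,  1]),   # Ripple
    "W": np.array([-1,  2, 0, -2,  1]),   # Wave
    "S": np.array([-1,  0, 2,  0, -1]),   # Spot
    "E": np.array([-1, -2, 0,  2,  1]),   # Edge
    "L": np.array([ 1,  4, 6,  4,  1]),   # Level
}

# The 25 two-dimensional 5x5 kernels of Table 2 as outer products.
kernels = {a + b: np.outer(va, vb)
           for a, va in vectors.items()
           for b, vb in vectors.items()}
```

Each 5x5 kernel is then convolved with the CT image to produce one texture measure image per kernel.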
Using the Laws kernels of Table 2, 25 new images with different texture measures are generated from one CT image. Instead of extracting a single LBP from the raw CT image, LBP is extracted from each Laws texture map, so that 25 Laws LBPs (L2BP) are extracted from the raw CT image, as shown in Figure 8 and Figure 9. When the textured regions of an image are known to be large, very large (highly reliable) texture energy windows can be used; however, the number of buffered filtered image rows also grows, as depicted in Figure 8.
Normalization adjusts the source image's brightness and contrast so that textures differing only in luminance or contrast cannot be distinguished. It is accomplished by subtracting the local average brightness from each texture energy value and dividing by the local average contrast. The Laws technique uses filter masks to extract secondary features from the image's intrinsic microstructure, which subsequently yields the L2BP feature extraction pattern for the PSE pattern descriptor of Figure 5. The extracted L2BP pattern features are used for the emphysema subclass classification.
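A minimal numpy sketch of this local normalization (the 15-pixel window and random test texture are illustrative choices, not values from the paper): subtracting the local mean and dividing by the local standard deviation makes a copy of a texture that differs only in brightness and contrast normalize to the same values.

```python
import numpy as np

def local_normalize(img, win=15):
    """Subtract the local mean and divide by the local standard deviation
    over a win x win window, removing brightness and contrast differences."""
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            block = padded[i:i + win, j:j + win]   # window centred on (i, j)
            out[i, j] = (img[i, j] - block.mean()) / (block.std() + 1e-8)
    return out

rng = np.random.default_rng(0)
tex = rng.random((32, 32))
base = local_normalize(tex)
same = local_normalize(3.0 * tex + 10.0)   # brighter, higher-contrast copy
# base and same agree (up to floating-point error)
```

The explicit double loop keeps the idea visible; in practice a box filter would compute the local mean and variance far more efficiently.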
3.3 Classification Module
The third stage of PECS classifies the CT image as NT, CLE, PSE or PLE using NN classifiers. Because direct multi-class classification would lead to a larger optimization problem and high computational cost, it is not employed; instead, two separate NN classifiers, NN-1 and NN-2, are designed. The former classifies the input CT image as either NT or abnormal (CLE & PLE), and the latter classifies the abnormal CT images as either CLE or PLE.
3.3.1 Neural Network Classifier:
A neural network (NN) is a network or circuit of neurons made up of artificial neurons or nodes, now commonly termed an Artificial Neural Network (ANN) [17]. A NN is a machine learning or deep learning [6] algorithm, used here with image processing to solve AI problems. In an ANN, the connections between biological neurons are represented as weights between nodes: a positive weight represents an excitatory link, whereas a negative weight represents an inhibitory link. All inputs are weighted before being summed together; this operation is a linear combination. NNs have been used to solve a wide range of problems. An ANN, or Simulated Neural Network (SNN), is an interconnected set of artificial nodes that process data using a mathematical or computational framework.
An ANN is an adaptive system that, in most cases, changes its structure in response to the data, external or internal, flowing through the network. ANNs are efficient nonlinear statistical data analysis and decision-making tools. They can be used to model complicated input-output relationships and to examine patterns in images. In this module, a NN approach is employed for image classification [19]. A deep learning model consists of multiple hidden layers with nonlinear relationships between the input and output layers, whereas a traditional simple NN architecture normally has one hidden layer, as shown in Figure 10.
NNs may be applied in a variety of domains: function approximation (regression analysis), including time series prediction and modelling; image processing; signal processing; filtering; clustering; speech, handwriting, pattern, sequence and text recognition; medical diagnosis; and more.
In the proposed PECS model, the two-stage NN classifiers (NN-1 and NN-2) are used. First, the NN-1 classifier classifies the images as either normal tissue (NT) or abnormal tissue (CLE & PLE/PSE); then the NN-2 classifier classifies the abnormal CT images as CLE, PLE or PSE.
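The cascade logic can be sketched as follows; the two decision rules (`toy_nn1`, `toy_nn2`) operating on a single scalar "feature" are hypothetical stand-ins for the trained networks, which would consume the L2BP feature vectors:

```python
LABELS_STAGE1 = ("NT", "ABNORMAL")
LABELS_STAGE2 = ("CLE", "PLE", "PSE")

def classify(features, nn1, nn2):
    """Two-stage cascade: NN-1 separates normal tissue from abnormal,
    and NN-2 is consulted only for the abnormal cases."""
    if nn1(features) == "NT":
        return "NT"
    return nn2(features)

# Toy stand-ins for the trained networks (illustrative thresholds only).
def toy_nn1(x):
    return "NT" if x < 0.5 else "ABNORMAL"

def toy_nn2(x):
    return "CLE" if x < 0.7 else ("PLE" if x < 0.85 else "PSE")

labels = [classify(x, toy_nn1, toy_nn2) for x in (0.2, 0.6, 0.9)]
# labels -> ['NT', 'CLE', 'PSE']
```

Splitting the four-class problem this way lets each network solve a smaller binary or three-way task, which is the motivation given above for avoiding direct multi-class classification.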