A Hybrid Methodology for Flower Image Segmentation and Recognition with an Extended Deep Convolution Neural Network (CNN)

Abstract -- Deep learning with convolutional neural networks (CNNs) is a branch of machine learning that teaches computer systems to do what comes naturally to humans: it learns by example and experience. It is a heuristic-based approach for computationally exhaustive problems that cannot be solved in polynomial time, such as NP-hard problems. The purpose of this research is to develop a hybrid methodology for the detection and segmentation of flower images that utilizes an extension of the deep CNN. Plant, leaf, and flower image detection are among the most challenging problems because of the wide variety of classes, which differ in amount of texture, color distinctiveness, shape distinctiveness, and size. The proposed methodology is implemented in Matlab with the Deep Learning Toolbox, and the flower image dataset is taken from Kaggle with five classes: daisy, dandelion, rose, tulip, and sunflower. The methodology takes input flower images from the dataset and converts them from the RGB (red, green, blue) color model to the L*a*b color model, which reduces the effort of image segmentation. Flower image segmentation is then performed with the Canny edge detection algorithm, which provides better results. The implemented extended deep learning convolution neural network can accurately recognize varieties of flower images, improving accuracy by up to +1.89% over the state of the art.


I. INTRODUCTION
Ayurveda motivates plant, leaf, and flower image segmentation and recognition. Image segmentation refers to separating an image into multiple regions [1]. The objective of segmentation is to simplify and refine the representation of an image into something more expressive and easier to analyze. Image segmentation is normally used to trace boundaries (lines, curves, etc.) and objects in an image. Edge detection is a segmentation process that identifies the edges in an image; there are various edge detection algorithms, such as Canny edge detection and watersheds.
Flower image segmentation and recognition are difficult tasks because of the broad assortment of flower image types, which differ in amount of texture, color distinctiveness, shape distinctiveness, and size. Various categories of flower images are similar in shape or color, as shown in Figure 1, because of inter-class similarity.

Figure 1: Visually similar flower classes: tulip, rose, daisy, and sunflower.
The deep convolution neural network is used as an image processing tool to solve various problems [2] such as object classification, object detection, face recognition, and semantic sub-splitting of an image. In semantic image segmentation, the convolution neural network takes an image as input; after processing it, the network outputs a sub-splitting of the original image in which each class is identified with a numerical value and a color.
Problem definition: In Ayurveda, various types of flowers such as rose, bhringraj, and hibiscus are used for making medicines, so Ayurveda motivates identifying and recognizing flower images. But identifying and recognizing an image is a very difficult task because different species of flowers may look identical in shape as well as in color. It is very challenging to maximize the accuracy and minimize the error rate of image segmentation and detection for RGB flower images. Moreover, RGB images can be considered a five-dimensional problem: three dimensions are used for red, green, and blue, and two dimensions are used for the luminosity layer and the chromaticity layers, which is addressed by the L*a*b color space. It is also a computationally time-consuming problem.
To solve this problem, the proposed system utilizes an optimized deep convolution neural network for identifying and segmenting flower images together with L*a*b conversion. It is a fast and accurate image segmentation method. In summary, the contributions of the proposed research methodology are as follows.
1) Developed and implemented a hybrid method that reads N different colored flower images from the data sets.
2) Every color image has five attributes: the first three for the colors red, green, and blue, and two for the geometry (luminosity layer and chromaticity layers). To provide better results and minimize the effort of image segmentation, every image is converted to the L*a*b color model [2] [3].
3) The resultant uniform-size flower images are then segmented using the Canny edge detection algorithm, which improves the experimental results and the identification of various objects in an image [4].
4) The segmented flower images are grouped into different clusters. Within a cluster, each pixel is similar according to properties such as color, texture, and intensity.

5) Designed an extended deep convolution neural network with one hidden layer over the resultant image datasets for the prediction of flower images. The deep convolution neural network is structured in computational layers alternating between convolution layers and max-pooling layers.
6) Finally, predicted various flower images with the proposed hybrid methodology on the flower image datasets.
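Step 4 above groups segmented pixels by similarity. The sketch below is a minimal k-means over pixel feature vectors, written in Python for illustration (the paper's implementation is in Matlab); the synthetic dark/bright pixel clusters are hypothetical.

```python
import numpy as np

def kmeans_pixels(pixels, k=2, iters=10):
    """Minimal k-means over pixel feature vectors (e.g. colour values):
    a simplified sketch of grouping segmented pixels into clusters whose
    members are similar in colour, texture, or intensity."""
    # Deterministic initialization: k evenly spaced pixels as centres.
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)]
    for _ in range(iters):
        # Assign each pixel to its nearest centre.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels.
        centers = np.array([pixels[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two well-separated synthetic colour clusters (dark and bright pixels).
pixels = np.vstack([np.full((10, 3), 0.1), np.full((10, 3), 0.9)])
labels, centers = kmeans_pixels(pixels)
print(len(set(labels.tolist())))   # 2
```

The dark and bright pixels end up in two separate clusters, mirroring how pixels of similar color are grouped in step 4.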
In this paper a novel extended hybrid deep learning convolution neural network is developed and implemented for flower image detection and segmentation. In the proposed algorithm an RGB image is first converted into an L*a*b image to reduce the effort of image segmentation and provide effective results. Then an extended deep CNN with one hidden layer is trained for the detection of flower images. In the end, the various flower images are recognized by the extended deep CNN and the results are calculated. New, better flower image segmentation schemes are thus derived which overcome the problems of conventional systems by maximizing learning accuracy up to 98%, i.e., an increase of +1.89% over the state of the art, and which should support larger flower image segmentation problems.
The remaining sections of the paper are structured as follows: the literature review is highlighted in Section 2. The proposed methodology is illustrated in Section 3. Section 4 shows the experimental setup, and Section 5 gives the results analysis. In the end, the proposed work is concluded in Section 6 based on the experimental results.

II. LITERATURE REVIEW
This section reviews the state of the art in the area of flower image segmentation. Hazem et al. proposed a flower image segmentation, classification, and detection method using deep learning [6]. The approach was tested on three well-known datasets and the outcomes were compared with other methods; its accuracy is up to 97%. The approach localizes a minimum bounding box around each image and then uses binary classification in a convolution neural network for the detection of flower images.
Rashmi et al. proposed a new convolution neural network architecture implementing MobileNet for the classification of flower images, reducing the size and latency of the original images. The application was small, efficient, and optimized the results [7]. The experiment was performed on the TensorFlow platform, and the results show the system minimized the time and space required for flower image classification.
A flower classification approach proposed by Xiaoling et al. used the Inception-v3 model on the TensorFlow platform [8]. This flower image classification technique utilized transfer learning. The accuracy of flower image classification was up to 95% on the Oxford 17 dataset and 94% on the 120 flower dataset.
A new convolution neural network approach proposed by I. Gogul et al. trained on small datasets and utilized general-purpose computation resources [9]. The approach split each flower image into three different parts and then extracted the features of the flower images using the OverFeat CNN on the training datasets, applying machine learning techniques to train the neural network. The proposed system achieved a Rank-1 accuracy of 82.32% and a Rank-5 accuracy of 97.5% using machine learning classification on the FLOWERS28 dataset [10]. Other researchers proposed a flower recognition approach with Support Vector Machine classification [11]; this approach used natural flower images for image segmentation, and the learning accuracy of the system was up to 85.93%.

Image Segmentation
Image segmentation improves the performance of object recognition and identification. It is a technique of partitioning an image into segments that are similar according to sets of predefined criteria. Segmentation faces an inherent tension between semantics and location [12]. Region-based classification is a standard technique for semantic segmentation that can easily be applied to flower image segmentation [13]. Flower image segmentation detects the edges of a flower image and groups its pixels; pixels or spots in a group are similar according to properties such as color, texture, and intensity.
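As a rough illustration of edge-based segmentation, the Python sketch below thresholds the gradient magnitude of a synthetic image. This is a simplified stand-in for the Canny detector used in the paper, which additionally applies Gaussian smoothing, non-maximum suppression, and hysteresis thresholding.

```python
import numpy as np

def simple_edges(image, threshold=0.25):
    """Gradient-magnitude edge map: a simplified stand-in for Canny,
    which additionally applies Gaussian smoothing, non-maximum
    suppression, and hysteresis thresholding."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

# Synthetic image: a bright square on a dark background.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0

edges = simple_edges(image)
print(edges.shape, edges.dtype)   # (64, 64) bool
```

The boolean map is True only along the square's boundary, where the intensity changes sharply.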

L*a*b Color Space
The L*a*b color model is a layered design in which the red-green axis is denoted by the chromaticity layer 'a*', the blue-yellow axis is denoted by the chromaticity layer 'b*', and luminosity is denoted by the 'L*' layer. A color image has five attributes: the first three for the colors red, green, and blue, and two for the geometry (luminosity layer and chromaticity layers). To provide better results and minimize the effort of image segmentation, every image is converted to the L*a*b color model. The conversion between the RGB color model and the L*a*b color model is not a straightforward process.
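The conversion goes through the intermediate XYZ color space. The single-pixel Python sketch below (not taken from the paper) uses the standard sRGB matrix and D65 white point to show the two-stage process:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert one sRGB pixel (values in [0, 1]) to CIE L*a*b*.
    Two-stage conversion: sRGB -> linear RGB -> XYZ (D65) -> L*a*b*."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma.
    linear = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # Linear RGB -> XYZ (sRGB primaries, D65 white point).
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ linear
    # Normalize by the D65 reference white.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    # XYZ -> L*a*b* nonlinearity.
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16 / 116)
    L = 116 * f[1] - 16          # luminosity layer L*
    a = 500 * (f[0] - f[1])      # red-green chromaticity a*
    b = 200 * (f[1] - f[2])      # blue-yellow chromaticity b*
    return L, a, b

L, a, b = rgb_to_lab([1.0, 1.0, 1.0])   # pure white
print(round(L), round(a), round(b))     # 100 0 0
```

Pure white maps to L* = 100 with near-zero chromaticity, as expected for an achromatic pixel.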

Artificial Deep Learning Convolution Neural Network
The convolution neural network is inspired by the biological neural network [14]. An artificial deep learning-based convolution neural network is a mathematical model composed of a weight function, an input function, and a transfer function [15]. Figure 2 represents the architecture of an artificial deep convolution neural network, where R is the size of the input and of the corresponding output.
Here f represents the mathematical model, a the activation function, w the weight function, b the bias value, and p the input value. The authors of [16] presented an improved technique to automatically identify cerebral microbleeds from magnetic resonance volumes, leveraging a 3D convolution neural network. The proposed system utilized a 3D convolution neural network for the recognition of cerebral microbleeds in magnetic resonance images. The authors compared the experimental results with a previous method based on a 2D convolution neural network; their method had better recognition accuracy than the other methods. The method was used for image recognition and sub-splitting tasks, an advanced application of a 3D convolution neural network to volumetric medical datasets.
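With the notation above (w weights, b bias, p input, a activation), a single neuron can be sketched as follows; the weights and input values are hypothetical illustrations:

```python
import numpy as np

def neuron(p, w, b, activation=np.tanh):
    """A single artificial neuron in the notation of Figure 2:
    a = activation(w . p + b), with weights w, bias b, and input p."""
    return activation(np.dot(w, p) + b)

# Hypothetical weights and input chosen for illustration.
a = neuron(p=np.array([0.5, -1.0]), w=np.array([2.0, 1.0]), b=0.0)
print(float(a))   # 0.0, since tanh(2.0*0.5 + 1.0*(-1.0) + 0.0) = tanh(0) = 0
```

A full network composes many such neurons layer by layer, with the weights learned during training.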
Jonathan et al. implemented fully convolutional networks for the semantic segmentation problem and improved the architecture with multi-resolution layers [17]. Fully convolutional networks take input of arbitrary size and produce output of the corresponding size. The authors adapted AlexNet, VGG Net, and GoogLeNet into fully convolutional networks to generate efficient and precise semantic segmentations of images. This combination dramatically improved the state of the art while simultaneously simplifying and speeding up learning and inference.
Qing et al. proposed a customized convolution neural network architecture to categorize HRCT lung image patches of ILD patterns [18]. The system used a convolution neural network with a single layer that learns the ILD patterns from training samples efficiently and produced efficient classification results. The experimental results show that the proposed approach can extract discriminative features automatically and achieves the target performance. The experiments also showed that the limited size of the training data and blurry structures complicated adapting the convolution neural network to ILD image classification.
Deep convolution neural networks have produced efficient results in the field of image recognition and categorization with large datasets [19] [20]. Several researchers are working to achieve efficient classification results or to recognize objects accurately without considering the computational complexity [21] [22] [23]. Processing large datasets with convolution neural networks additionally requires hardware such as graphics processing units (e.g., Titan-class processors).

Classification
Classification of flower images is a challenging task due to intra-class variability under different lighting conditions. The proposed flower image segmentation method uses a binary classification technique to detect flower images. Extended deep convolution neural networks [5] are used for classification problems where the output for an image is a single class label [24]. However, in many computer vision problems, particularly in biomedical image processing, the desired output should include localization [25] [26].
This is similar to flower image classification, where varieties of flower images have similarities across groups. Satoshi et al. implemented color classification of objects utilizing three heterogeneous color co-occurrence features: CoHD, CoHOG, and CoHED [27]. That method achieved a high classification rate thanks to its high-dimensional and highly discriminative co-occurrence features.
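When the output for an image is a single class label among several, the final layer typically turns class scores into probabilities with Soft-Max and is trained with a cross-entropy loss. The sketch below uses hypothetical scores for the five flower classes:

```python
import numpy as np

def softmax(z):
    """Soft-Max over class scores (numerically stabilized)."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(probs, true_class):
    """Cross-entropy loss for mutually exclusive classes: -log p(true)."""
    return -np.log(probs[true_class])

# Hypothetical scores for the five classes
# (daisy, dandelion, rose, sunflower, tulip).
scores = np.array([2.0, 0.5, 0.1, 0.1, 0.3])
probs = softmax(scores)
print(int(np.argmax(probs)))   # 0 -- predicted class: daisy
```

The probabilities sum to one, and the loss shrinks as the probability assigned to the true class grows.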

III. PROPOSED METHODOLOGY
This section presents a hybrid methodology for identifying and splitting up flower images using the L*a*b color space with an extended deep convolution neural network. Artificial neural networks are improved through a deep learning approach: deep learning adds supplementary layers that support higher levels of abstraction and increase the object detection rate. Figure 3 describes the proposed hybrid methodology for flower image segmentation and recognition with the L*a*b color space and an extended deep convolution neural network. The proposed algorithm works as follows.
Step 1: N different flower images are read from the datasets, in which each image has five attributes: three attributes of color (red, green, and blue) and two attributes of geometry (luminosity layer and chromaticity layers). The conversion from RGB to the L*a*b color space [3] improves the performance of image segmentation and produces excellent results.
Step 2: The resultant uniform-size flower images are segmented using the Canny edge detection approach, which improves the experimental results and the identification of various objects in images. Flower image segmentation detects the edges of flower images and groups their pixels; pixels or spots in a group are similar according to properties such as color, texture, and intensity. Figure 4 shows the results of the conversion from RGB to the L*a*b color space with Canny edge detection segmentation, obtained during the experiment.
Step 3: An extended deep convolution neural network is then trained on the flower image datasets. There are various ways to modify a deep convolution neural network, such as altering the input image type, the sliding stride, the filter size, and the filter types. The proposed methodology modifies the input layer: it reads the L*a*b image instead of the RGB image for learning.
1) The first layer, the image input layer, holds the processed pixel values of the image, in which the image is represented by its height, width, and color dimensions, and normalizes the input image.
2) The second layer, the convolution layer, computes the outputs of neurons that are connected to restricted regions of the input. Each node computes a scalar multiplication between its weights and the input values of the region it is connected to.
Figure 5 represents the convolution layer: it takes an input array of pixels with kernels of different sizes and generates an output array of pixels. A fully connected layer of the convolution neural network is described by equation 2:

z = q(Wx + b)    (2)

Here, x represents the input vector, z the output vector, W the weight matrix, b the bias vector, and q the activation function. Figure 6 shows the working of the convolution layer, which is the dot product of two matrices: the image matrix and the filter (kernel) matrix with different weight values [28]. For example, a convolved feature value equal to 4 is calculated as the dot product of the filter matrix and the image matrix, 1*1+0*0+1*0+0*1+1*1+0*0+1*1+0*1+1*1 = 4, and the same calculation is repeated for all regions. The convolution layer corresponds to the convolution of two signals, represented by equation 3:

(f * g)[x, y] = Σ_n1 Σ_n2 f[n1, n2] g[x − n1, y − n2]    (3)

3) The threshold operation is performed by the third layer, the ReLU layer, in which any weight value less than zero is set to zero. The ReLU layer does not change the size of the volume; it applies the activation function f(x) = max(0, x) with a threshold at zero (equation 4). ReLU provides the nonlinearity in the convolution neural network.
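The worked dot product above can be checked with a small sketch, written in Python rather than the paper's Matlab; the 5×5 binary image and 3×3 kernel below are hypothetical illustrative values whose first patch also evaluates to 4:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over every
    position where it fits entirely inside the image and take the
    element-wise dot product (the per-channel operation of a
    convolution layer, before the bias is added)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """ReLU threshold: every value below zero is set to zero."""
    return np.maximum(0, x)

# Illustrative 5x5 binary image and 3x3 kernel (hypothetical values).
image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]])
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])

feature_map = conv2d_valid(image, kernel)
print(feature_map[0, 0])   # 4.0 -- the dot product of the first 3x3 patch
```

Applying `relu` to the feature map would then zero out any negative responses, which is exactly the threshold operation of the third layer.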

Dimensions of output volume = (W − F + 2P) / S + 1    (5)
The dimensions of the output volume are given by equation 5, in which W represents the input volume, F the size of the filter, P the amount of padding, and S the stride. Figure 7 shows the ReLU layer used by the proposed method.
6) The fifth layer is a fully-connected layer, which takes the neurons of the max-pooling layer as fully connected inputs. It is the fully connected layer of the convolution neural network, with an activation function such as Soft-Max for the various classification approaches; it produces an output volume of dimension [1*1*N], where N represents the number of classes for classification.
7) The multi-class classification problem is solved by computing the cross-entropy loss function over mutually exclusive classes in the classification layer.
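Equation 5, (W − F + 2P)/S + 1, can be sketched as a small helper. The 64×64 input and 3×3 filter below match sizes used later in the experiments, while the padding and stride values are assumptions for illustration:

```python
def conv_output_size(W, F, P, S):
    """Spatial size of a convolution layer's output, per equation 5:
    (W - F + 2P) / S + 1, where W is the input width, F the filter
    size, P the zero padding, and S the stride."""
    size = (W - F + 2 * P) / S + 1
    assert size.is_integer(), "hyperparameters do not tile the input evenly"
    return int(size)

# A 64x64 input with a 3x3 filter, one pixel of padding, and stride 1
# (assumed values) keeps its spatial size:
print(conv_output_size(64, 3, 1, 1))   # 64
```

With stride 2 and no padding the output shrinks, e.g. `conv_output_size(7, 3, 0, 2)` gives 3.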
Step 5: Different varieties of flower images are classified from datasets.
Step 6: A graph is plotted between the number of iterations and accuracy, and similarly between the number of iterations and error rate.
The proposed method is evaluated through simulation experiments. The objective of the testing is to improve training accuracy and minimize the error rate for different flower images using the extended deep learning convolution neural network-based algorithm [29].

IV. EXPERIMENT SETUP
The proposed method is simulated and the results are analyzed on five publicly available flower families taken from Kaggle: Daisy, Dandelion, Rose, Sunflower, and Tulip. The proposed extended hybrid methodology is evaluated on colored flower images from Kaggle's flower image datasets with a size of 512*512 pixels [30]. In this experiment the system uses 80% of the images for training and 20% for testing. Figure 9 shows some flower images from the different flower families in Kaggle's datasets.
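The 80/20 train/test split can be sketched as follows; the file names below are hypothetical and the shuffling seed is an assumption:

```python
import random

def train_test_split(paths, train_fraction=0.8, seed=0):
    """Shuffle image paths and split them 80% / 20% into training and
    testing sets, as in the experiment setup (file names hypothetical)."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_fraction)
    return paths[:cut], paths[cut:]

images = [f"daisy_{i:03d}.jpg" for i in range(100)]
train, test = train_test_split(images)
print(len(train), len(test))   # 80 20
```

Shuffling before the split avoids any ordering bias in the dataset (e.g. images grouped by class).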

Evaluation Metrics
A number of parameters are used to evaluate the accuracy of flower image classification, segmentation, and detection in the proposed method.
Accuracy is measured as the ratio of the count of correctly labeled images in the test flower image dataset, according to the classifier, to the total count of images in the test set:

Accuracy = (number of correctly classified test images) / (total number of test images)

The error rate is defined as one minus the accuracy, so maximizing the accuracy minimizes the error rate:

Error rate = 1 − Accuracy
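Both metrics are straightforward to compute; the predicted and true labels below are hypothetical examples over the five classes:

```python
def accuracy(predicted, actual):
    """Fraction of test images whose predicted class label matches
    the true label."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical labels for 10 test images across the five classes.
actual    = ["daisy", "rose", "tulip", "rose", "sunflower",
             "daisy", "dandelion", "tulip", "rose", "daisy"]
predicted = ["daisy", "rose", "tulip", "rose", "sunflower",
             "daisy", "dandelion", "rose", "rose", "daisy"]

acc = accuracy(predicted, actual)
print(acc)                 # 0.9
print(round(1 - acc, 2))   # 0.1  (error rate = 1 - accuracy)
```

With one of ten labels wrong, the accuracy is 0.9 and the error rate is 0.1, matching the definition above.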

Pseudocode
Algorithm DCNNFCD: Deep learning convolution neural network for flower image classification and detection
Input: NOI: number of flower images; im: input RGB color flower images
Output: Trained neural network for flower image classification and recognition
1:  for block := 1 to NOI
2:      Read flower image
3:      Resize flower image
4:      Convert the RGB image to L*a*b space
5:      Apply the Canny edge detection algorithm to the image for texture features
6:      Combine the a*b* images with the edge image
7:  Repeat steps 2 to 6 for each image and collect the images into one array
8:  Apply the CNN to the image array with the defined classes
9:  Train the CNN with number of iterations = 20 and initial learning rate = 0.0001
10: Prepare the test image datasets similarly and test them using the trained CNN
11: Calculate and display the accuracy, computation time, and error rate
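The preprocessing steps of the algorithm (resize, L*a*b conversion, edge detection, channel combination, batching) can be sketched end to end. The sketch is Python rather than the paper's Matlab, the colour conversion and edge detector are simplified stand-ins for the full L*a*b conversion and Canny, and the input images are synthetic:

```python
import numpy as np

def preprocess(rgb_image, size=64):
    """Sketch of the per-image steps: resize, approximate L*a*b
    channels, detect edges, and stack the a*, b*, and edge channels.
    The colour conversion and edge detector are simplified stand-ins."""
    # Nearest-neighbour resize to size x size.
    h, w, _ = rgb_image.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    img = rgb_image[rows][:, cols].astype(float)
    # Crude chromaticity stand-ins for a* (red-green) and b* (yellow-blue).
    a_star = img[..., 0] - img[..., 1]
    b_star = (img[..., 0] + img[..., 1]) / 2 - img[..., 2]
    # Gradient-magnitude edge map on the luminosity.
    lum = img.mean(axis=2)
    gy, gx = np.gradient(lum)
    edges = (np.hypot(gx, gy) > 0.25).astype(float)
    return np.stack([a_star, b_star, edges], axis=-1)

# Collect four synthetic images into one array, as in step 7.
batch = np.stack([preprocess(np.random.rand(128, 128, 3)) for _ in range(4)])
print(batch.shape)   # (4, 64, 64, 3)
```

The resulting array of 64×64×3 images is what would be fed to the CNN for training in steps 8-9.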

Implementation Details
The proposed system was evaluated in two sections, Experiment_1 and Experiment_2, with different hardware configurations and different numbers of processed images. Experiment_1 used a minimum hardware configuration and only 100+ images, whereas Experiment_2 used additional GPU hardware with 32 GB RAM and 1000+ images.

Experiment_1
In this experiment, 5 to 20 different images of each of the five flower families (Daisy, Dandelion, Rose, Sunflower, and Tulip) were used. In total, Experiment_1 used 100 images from the database.

Implementation Details for Experiment 1
The flower image segmentation and classification utilize the convolution neural network, in which all images are first resized to 256×256×3 to provide a uniform and normalized set of images to pass through the network. This achieves efficient computational time complexity in the convolution and max-pool layers. The size of all kernels in the first five convolutional blocks of the flower classification neural network is 3×3. Table 2 lists the simulation parameters used by the proposed extended hybrid algorithm.

Experiment_2
In Experiment_2, 165 different images of each type of flower were used, so in total 825 images were processed from the database.

Implementation Details for Experiment 2
The implementation for Experiment_2 was done in Matlab 2019a using the Deep Learning Toolbox, and the results were calculated on an Intel(R) Xeon(R) W-2133 CPU @ 3.60 GHz with 32.0 GB RAM running Windows 10 Pro for Workstations (64-bit operating system, x64-based processor). The simulation parameters used by Experiment_2 were the same as in Experiment_1, as shown in Table 2. The flower image segmentation and classification utilize the extended convolution neural network, in which all images are first resized to 64×64×3 to provide a uniform and normalized set of images to pass through the network.
NOI (number of images of each class) = 165; training was done on a single GPU, with input data normalization initialized.

Results Analysis of Experiment 1
Experiments were conducted on flower image datasets downloaded from Kaggle. In Table 3 the first column shows the NOI (the number of images of each type of flower; five types of flower are used by the proposed algorithm for simulation), the second column shows the epoch, and the following columns show the time elapsed, loss, accuracy, and learning rate, respectively. The training was done on a single CPU, with flower image normalization initialized to 256*256*3. The proposed algorithm was run with different numbers of flower images, and the maximum epoch was 20.
The results show that when NOI = 4 and the epoch equals 20, the time required for training is 137.57 s and the accuracy is 100%, with a minimum batch loss of 0.0002 and a learning rate of 1.00e-04. Figure 10 gives the average computation time in seconds taken by the proposed algorithm for training against NOI (number of images). But when NOI = 20 (40 images in total) at epoch 20, the training time is 266.59 s and the accuracy decreases to 95.00% with a loss of 0.1353. Figure 11 shows the graph of NOI (number of images) against the average accuracy of the proposed algorithm.
Figure 11: NOI (number of images) versus the average accuracy of the proposed algorithm.
The proposed algorithm achieved an average accuracy of 95.84% and an average loss rate of 0.1837. Figure 12 graphically shows the relation between NOI (number of images) and the average mini-batch loss of the proposed algorithm.

The percentage of wrongly detected flowers is still high and needs further improvement. The proposed deep learning-based convolution neural network [32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47][48] provides the best flower classification results on all datasets. Fully convolutional networks are a rich class of models of which modern classification convolution neural networks are a special case. Figure 13 (a, b, c, d) shows detected flower images obtained during the experiment.

Results Analysis for Experiment_2
The training results are shown in Table 5. Its columns show the NOI (the number of images of each type of flower; five types of flower images are used by the proposed algorithm for simulation), the epoch, the time elapsed, the loss, the accuracy, and the learning rate, respectively. The training was done on a single GPU, with flower image normalization initialized to 64*64*3.
The proposed algorithm was run with different numbers of flower images, and the maximum epoch was 20. The results show that when NOI = 165 and the epoch equals 20, the time required for training is 00.55 s and the accuracy is up to 98%, with a minimum batch loss of 0.4728 and a learning rate of 1.00e-04. Figure 14 gives the training progress graph of accuracy and loss against the number of iterations when the proposed algorithm runs on the GPU with NOI = 165 of each class. Table 6 shows the flower image detection time taken by the proposed algorithm. When the proposed algorithm is executed on a high-computation setup, it has approximately 98% accuracy; compared with Experiment_1, the time required to detect flower images is lower on the high-computation setup. The experimental outcomes show that:
- The accuracy of the deep CNN depends on the weight functions of the different layers; if the weight values change, the learning accuracy of the proposed method is affected.
- The learning accuracy of deep convolution neural networks also depends on the learning rate, the batch loss value, and the types of images used for training.
- The accuracy is also affected by the hardware configuration used for the implementation.
The proposed deep learning-based convolution neural network provides the best flower classification results on all the datasets. There are various convolution neural network architectures, such as LeNet, AlexNet, GoogLeNet, and VGGNet, used to solve deep learning computer vision problems. As a future enhancement, the proposed system will be compared with pretrained networks such as LeNet, AlexNet, and GoogLeNet. The proposed system is applicable to image classification, object recognition, content-based image retrieval, self-driving cars, speech recognition, etc.

Declarations
Ethics approval and consent to participate Not applicable.

Consent to publish
Not applicable.