Script Identification in Handwritten Documents for Gurumukhi-Latin Script using Transfer Learning with Deep and Shallow Classifiers

Script identification at character level in handwritten documents is a challenging task for Gurumukhi and Latin scripts due to the presence of slightly similar, quite similar or at times confusing character pairs. Hence, a single feature set or only traditional feature sets and classifiers are inadequate for processing such handwritten documents. With the evolution of deep learning, the importance of traditional feature extraction approaches has been somewhat neglected, which this paper takes into account. This paper investigates machine learning and deep learning ensemble approaches at the feature extraction and classification levels for script identification. The approach here is: (i) combining traditional and deep learning based features; (ii) evaluating various ensemble approaches using individual and combined feature sets to perform script identification; (iii) evaluating pre-trained deep networks using transfer learning for script identification; (iv) finding the best combination of feature set and classifiers for script identification. Three kinds of traditional features, namely Gabor filters, Gray Level Co-occurrence Matrix (GLCM) and Histogram of Oriented Gradients (HOG), are employed. For deep learning, pretrained deep networks like VGG19, ResNet50 and LeNet5 have been used as feature extractors. These individual and combined features are trained using classifiers such as Support Vector Machines (SVM), K-Nearest Neighbor (KNN) and Random Forest (RF). Further, many ensemble approaches like Voting, Boosting and Bagging are evaluated for script classification. Exhaustive experimental work resulted in a highest accuracy of 98.82%, obtained with features extracted from ResNet50 using transfer learning and a bagging based ensemble classifier, which is higher than previously reported work.


Introduction
There are a large number of writing systems and scripts in the world, with wide variability in terms of character sets, writing styles, shapes of characters etc. This makes the task of developing a single recognizer for all scripts and writing styles difficult (perhaps close to impossible). In a multilingual country like India, many documents contain text written in more than one language or script, which makes it obligatory to design Optical Character Recognition (OCR) systems for multilingual character recognition. Hence, the solution proposed by many researchers is to integrate a script identification module with character recognition, selecting the OCR from a pool of OCRs corresponding to the identified script.
Automatic OCR has three essential sub-processes: text localization, character segmentation and recognition. Multilingual OCR additionally has a script identification sub-process to select a particular OCR for character recognition. Due to ambiguity in the structure and style of leading languages, script identification in multilingual inscriptions has become a burning area of research for the pattern recognition community [1]. Moreover, script identification facilitates many applications such as document recognition, searching online libraries of image documents and image document sorting.
Script identification methods can be broadly classified into two categories: local and global methods. Local methods work on smaller components of a document like characters, words and text lines, while global methods consider larger parts of a document like pages, paragraphs etc.
Global methods require the document part to be normalized and free of noise to get correct results. On the contrary, local methods can deal with images having low quality and noise [13]. Script identification is usually performed at page, text line, word or character level. Many researchers have attempted the first three levels, but script identification at character level is still a thought-provoking issue and has not been dealt with much [3].
Script recognition in handwritten documents is more challenging than in printed documents due to the high ambiguity caused by resemblance between characters of different scripts in handwritten data. To capture such variations, different kinds of features have been handcrafted and extracted under explicit feature based approaches [5]. With handcrafted features, the accuracy of results is biased towards the approaches applied for feature extraction and the pattern type. It is a two step process: (i) identifying the characteristic regions of an image and (ii) finding a descriptor to distinguish each region from another.
Besides handcrafted features, where human experts are involved in designing a feature descriptor, non handcrafted features consider only the data to extract the features. But non handcrafted features need a larger training set, resulting in higher training time. This can be mitigated using supervised or semi-supervised techniques with parallel computing [6]. To resolve the training time issue, another form of deep learning called transfer learning can be used. In transfer learning, the parameters of pretrained deep networks are reused for new data instead of training the whole network from scratch. The requirement of labeled, diverse and high quality training data to optimize the hyperparameters of deep architectures is still an open issue for regional languages. To address this issue, researchers have proposed data augmentation techniques.
In pattern recognition problems, it is still undecidable which feature set and classifier will be best for a particular dataset. Classifier combination techniques are less explored in the script recognition area [9]. Classifier combination can be performed at the feature level, called early integration, or at the classifier (decision) level, called late integration [21]. At the feature level, different kinds of features can be concatenated to form a new feature vector and fed into a single classifier. The new feature set can hold more information, at the cost of increased dimensionality resulting in longer training time.
At the recognition level, individual classifiers designed from each feature set are combined at the decision level, which is the most popular approach [9]. Many strategies exist for combining at the decision level, such as majority voting, bagging, boosting, Borda count, product, max and sum rules, Dempster-Shafer theory, Bayesian methods and the ROVER framework [11,12,21]. The fusion of different kinds of features and classifiers may result in better accuracy and more reliable and robust results, as patterns misclassified by one classifier can be recovered by a different classifier [10].
The proposed research considers script identification for bilingual documents containing Gurumukhi and Latin scripts. Gurumukhi script is the base for one of India's 22 official languages, i.e. Punjabi. The official documents of the Punjab state government contain two languages: English and Punjabi. The Punjabi language contains 35 characters and 10 numerals, many of which resemble various characters and numerals of the English language. The English language has 26 capital and 26 small letters along with 10 numerals, resulting in a character set of 62.

Motivation and Challenges
To process multilingual handwritten documents, some of the challenges for script identification are as follows:
1. In multilingual text, similarities and dissimilarities are present between classes as well as within classes. Hence, results from different feature sets and classifiers vary with the pattern to be recognized.
2. Due to the prevalence of similar character shapes between the Punjabi and English languages, script identification at character level is a challenging problem for these scripts.
3. In handwritten data, a single character can take various shapes depending upon the age, mood and gender of the writer. Many forms of noise are present, like ruling lines, foreground pixels, skew and slant.
4. Efficient feature extraction to capture various patterns is one of the challenges for the multilingual character recognition research community. Each classifier has its own pros and cons for a particular pattern. To access the full potential, we need to combine the complementary nature of different classifiers and features. Finding an appropriate classifier combination approach is an open issue.
5. The lack of benchmark datasets for multilingual character recognition, specifically for Gurumukhi and Latin scripts, is one of the hurdles in script identification.
6. The use of deep networks for script identification in Gurumukhi script is difficult due to the lack of high quality, diverse and large training data.
The influence of the colonial past and of globalization has made English the second official language after regional languages in India. All official work is conducted using two languages: one is English and the other is the state or regional language, like Punjabi in Punjab state. Many public domains such as academia, health and administration have made it mandatory to use the Punjabi language along with English for documentation purposes. This is the main motivation behind this research; some others are as follows:
1. The popularity of digital devices like scanners, mobile phones, cameras and digital pads gave birth to the digitization of paper documents. Official documents contain machine printed as well as handwritten data in English and Punjabi, as shown in Fig. 1. To convert such multilingual documents into machine editable form, script recognition is one of the most important sub-processes.
2. Script identification at page, paragraph, text line and word level has been the major focus of past work. Less work has been reported for script recognition in handwritten text. Moreover, there are some words containing characters from different scripts (Fig. 1).
5. A substantial amount of work has been done for script identification in printed text and in scene and video text extraction for non-Indic scripts as compared to Indic scripts.
6. Some researchers have tried to combine the complementary natures of different classifiers, resulting in high accuracy [9]. The main motivations for choosing ensemble classifiers over individual classifiers are statistical, representational and computational. The statistical reason arises when the training data is too small in comparison to the hypothesis space; each learning algorithm then produces similar results for every hypothesis generated. Here, an ensemble approach can use the average of each individual classifier's output to avoid wrong classification. Another case is when a large amount of data is available for training: algorithms like decision trees and neural networks may get stuck in local optima. Ensemble learning overcomes this by performing many local searches simultaneously from different initial points. The representational issue arises when the hypothesis space is unable to represent the true function. An ensemble classifier uses a weighted sum of hypotheses drawn from the original space to better represent the true function.
7. Script identification has many applications, such as note taking in classrooms, form processing and filling, document image analysis etc.
8. To save the time needed for training deep networks, the use of already available pretrained models is highly desirable.
9. The main motivation behind script identification at character level is twofold: (a) official documents contain many words with characters from different scripts, as shown in Fig. 1; (b) feature extraction at character level is easier and less time consuming compared to feature extraction at word level.

Contributions
To overcome the various challenges for script identification in Gurumukhi and Latin scripts for handwritten text, we have found combinations of features and classifiers based on transfer learning and traditional machine learning approaches. The major contributions of this research are as follows:
1. A script identification module for multilingual handwritten text recognition has been designed for Gurumukhi-Latin script. It uses pre-segmented handwritten images of characters and numerals of the Gurumukhi and Latin scripts.
2. It provides a script identification module for many document image analysis applications like extracting information from magazines, articles and image searching etc.
3. In this work, script identification has been performed at the lowest level, i.e. character level.
6. To evaluate the proposed approach, datasets of handwritten images of characters and numerals with large variations are used.

Research Objective
The objectives of this research are given below:
1. To design a script identification module to process handwritten text written in multilingual forms like postal, railway and passport application forms, question papers etc., which use English in conjunction with Gurumukhi script.
2. To visualize the effect of combination of traditional and deep learning features for script identification in Gurumukhi and English script.
3. To obtain an optimal combination of features and classifiers corresponding to our dataset for script identification.
4. To resolve the ambiguity in handwritten character recognition due to resemblance between characters of different scripts.
The organization of the rest of the paper is as follows. Section 2 reviews the latest work done for script recognition. Section 3 gives the proposed methodology, while Section 4 gives the experimental results. Finally, Section 5 concludes the work and gives future directions.

Related Work
In this section we review the work done for script identification for Indic and non-Indic scripts using traditional and deep learning models.

For Indic and non Indic scripts
In the past, texture analysis based approaches using traditional feature sets were the most used methods. One reported approach uses plot, re-plot and re-trace (PPTRPRT) to segment bilingual offline handwritten documents, achieving an optimal character segmentation rate; it extracts text regions from the document and segments them into text lines. Singh et al. 2018 [9] used one shape based and two texture based features, namely Elliptical, HOG and Modified log-Gabor filter transform features, to perform script identification of 12 major Indian scripts. A heterogeneous classifier was created by feeding different feature sets to the same classifier with fine tuning of parameters. Further, a combination strategy was applied at the feature level as well as at the decision level for classification; a similar strategy was followed by Mukhopadheya et al.
With the emergence of deep learning, many researchers have tried these models for script identification and found promising results. Sarkhel et al. 2017 [5] used a new deep learning based approach for OCR of many Indic scripts, employing a multi-column, multi-scale Convolutional Neural Network (CNN) architecture in which each column independently extracts features and feeds an SVM classifier for recognition. The final result from all the columns is obtained using various decision combination schemes like simple voting, weighted voting and majority voting. Bhunia et al. [4] used an attention mechanism with a CNN-LSTM (Long Short Term Memory) network for script identification in natural scene images. It was the first attempt to use an attention mechanism for script identification. Results proved that the attention mechanism has the ability to extract the most relevant features of an image in a network. Local features were identified using the attention mechanism by calculating weights of image patches with higher significance. Further, global features were extracted from the last cell state of the LSTM.
To decide which features hold high weights, an attention mechanism using dynamic weighting was applied to the local and global features. In other reported work, transfer learning models like VGG-16, ResNet, AlexNet, GoogleNet and VGG-19 have been evaluated.

For Gurumukhi script
For the Gurumukhi script, the first attempt at script identification was made by Rani et al. [30] for a printed dataset containing English and Gurumukhi data. Gabor and gradient based feature extraction approaches were applied with a multi-class SVM classifier. They considered both characters and numerals, resulting in a 4-class classification problem. The dataset consists of 19,448 characters in multiple fonts and sizes, and accuracies of around 98.9% for Gabor and 99.45% for gradient based features were reported. Kumar et al. [31] presented a multilingual script recognition system for English, Hindi and Gurumukhi scripts using pre-segmented handwritten characters.
Various features like zoning, diagonal and horizontal peak extent etc. were used with K-NN, linear SVM and MLP classifiers. The highest script recognition accuracy reported was 92.89% for the character dataset. A dataset of 4920 samples having 4 different classes (English upper and lower case characters, Hindi and Gurumukhi characters) was used for experimentation. Pandey et al. [29] explored script identification for Gurumukhi, Hindi and Urdu scripts on handwritten data using Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) features. A script recognition accuracy of 82.70% was reported for a dataset of 961 handwritten samples.
In our formulation, S_5 = English numerals. The dataset of character and numeral images, C_new, consists of two parts: the real and the augmented dataset, C_r and C_s respectively. On this dataset C_new we define S as the set of labels, and for an image C_i associated with label S_j in S we define the classification function. Hence, the problem can be formulated as follows: given a set C_new of pre-segmented handwritten character images of any script or language s_j in S, find the most efficient and accurate classification function which maps the images to the most appropriate label. Figure 2 shows the block diagram of the proposed approach. The in-depth details of the features extracted for this work are given in the following.

Gabor feature
Gabor features use Gabor filters to differentiate between textured and non-textured regions of an image. These are sinusoidal functions which use various orientations and frequencies to capture the features of an image. The filters generated for different orientations and frequencies capture the edges of an image, as shown in Fig. 4(a). These filters respond strongly to the orientations along which the image structure runs. Many researchers have tried Gabor filters for script identification with different orientations and frequencies, with good response [41,26,3]. The even and odd symmetric filters used in our work are given by Eq. 2 and Eq. 3.
where i, j are the spatial coordinates, f is the frequency of the sinusoidal component of the Gabor filter, σ represents the spread of the Gaussian envelope along the principal axis and θ is the angle of orientation.
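An even-symmetric Gabor filter along these lines can be sketched with NumPy; the kernel size, frequency f and σ below are illustrative choices, not the exact parameters used in the experiments:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel_even(size=15, f=0.25, sigma=3.0, theta=0.0):
    """Even-symmetric Gabor kernel: Gaussian envelope modulated by a cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the filter orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * f * xr)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and standard deviation of each filter response form the descriptor."""
    feats = []
    for t in thetas:
        resp = convolve(img.astype(float), gabor_kernel_even(theta=t), mode="reflect")
        feats += [resp.mean(), resp.std()]
    return np.array(feats)
```

Applied to a character image, a bank over four orientations yields an 8-dimensional descriptor (mean and standard deviation per orientation); in practice more frequencies and orientations are used.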

HOG feature
Histogram of Oriented Gradients captures the key characteristics of an image by converting the pixel based representation to a gradient based one. These features are widely adopted due to their ability to encode and match the strong gradients in characters. Moreover, they are robust to illumination changes.
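A HOG descriptor for a character crop can be computed with scikit-image's `hog`; the cell, block and bin settings below are a common configuration, not necessarily the exact one used in the experiments:

```python
import numpy as np
from skimage.feature import hog

# A 32x32 character crop (random here, a stand-in for a real glyph image)
char = np.random.rand(32, 32)

# 9 orientation bins, 8x8-pixel cells, 2x2-cell blocks with L2-Hys normalisation.
# 32/8 = 4 cells per side -> 3x3 block positions x (2*2*9) values per block = 324.
descriptor = hog(char, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2), block_norm="L2-Hys")
```

The block-wise normalisation is what gives HOG its robustness to illumination changes.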

GLCM feature
Grey level co-occurrence matrix based features use the spatial relationship of pixels to characterize the texture of an image, considering the degree of correlation between neighboring pixels.
Other texture features provide a statistical view of an image but fail to capture the spatial relationship between its pixels. The graycomatrix function used for GLCM creates a GLCM matrix by counting the co-occurrences of a pixel with some intensity value against another pixel with a given intensity value in some specific spatial relationship. The element (i, j) of a GLCM matrix is the number of instances in the input image with intensity value i at the pixel of interest and intensity value j at the neighboring pixel. To reduce the computational complexity of this matrix, the graycomatrix function performs scaling to reduce the number of intensity values in the grayscale image.
The offset from the pixel of interest to the other pixel in the spatial relationship is the major parameter of the GLCM feature vector; by default it is one, i.e. adjacent pixels. As correlation usually occurs at small distances, the value of d is kept small. In a GLCM, the linear distance between pixels is used in combination with the orientation θ between them. A GLCM with a single offset is not sufficient to capture the texture of an image, so here we consider offsets varying in distance and direction, resulting in 12 GLCMs from 3 distances and 4 orientations (Fig. 6(a)). The parameter details for each handcrafted feature used in the experiments are given in Table 2.

Deep learning or non handcrafted features
Handcrafted features are human designed and need human expertise; hence some useful information may be lost due to bias. To extract high level features of an image like edges and corners, we need deeper layers of filters. Recently, deep CNNs have been found to outperform other methods as feature extractors for many pattern recognition problems. A CNN is an end-to-end learnable architecture which extracts features directly from the raw image.

Combining deep learning and traditional features
Recent advances in deep learning have proved that it outperforms traditional features as a feature extractor in many areas. However, a CNN gives less significance to the discriminative local patches of an image, which are crucial elements in script identification for scripts with subtle differences. Secondly, it struggles with images having arbitrary aspect ratios [14].
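Early integration of deep and handcrafted descriptors amounts to per-sample concatenation before a single classifier. A minimal sketch, with random arrays standing in for real descriptors (the dimensionalities and the 5-class labels are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Stand-ins for per-image descriptors: deep (e.g. from a pretrained CNN) and HOG
deep_feats = rng.normal(size=(n, 128))
hog_feats = rng.normal(size=(n, 81))
labels = rng.integers(0, 5, size=n)  # 5 script/numeral classes, as in this work

# Early integration: concatenate features per sample, feed one classifier
combined = np.hstack([deep_feats, hog_feats])
clf = SVC(kernel="linear").fit(combined, labels)
```

The concatenated vector carries the complementary information of both extractors, at the cost of higher dimensionality and training time, as noted above.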

Transfer learning
Training efficient deep networks requires bulk training data, while collecting large training sets is highly time consuming. Further, training such models on such large data takes many hours or days.
Hence, transfer learning is a way of reusing models pre-trained on a large dataset for one task on a new dataset for some other task, both tasks belonging to the same domain. The formal definition of transfer learning is: a model M has been efficiently trained on dataset D_m for a source task; another model N has a dataset D_n which is smaller than D_m, resulting in inefficient training of model N; part of model M is then reused by N to perform its task efficiently. In our research we use several pretrained models to train our script identification module for bilingual datasets.

LeNet5 feature
LeNet5 is a special kind of deep CNN architecture designed for handwritten and machine printed character recognition.

VGG-19
The VGG network was created by the Visual Geometry Group in 2014 after the success of its predecessor AlexNet, with some modifications to the latter. It is considered the simplest network, as all the 3×3 convolutional layers are stacked on one another, with two fully connected layers of 4096 nodes stacked on top of the convolutional layers. It takes images of size 224×224 in RGB. To provide non-linearity in the model for better classification, ReLU activation has been adopted, and max pooling with stride 2 is applied to reduce the volume size.

ResNet50
ResNets are residual networks which allow training of extremely deep networks, up to 152 layers.
The difference between ResNet and other networks is the use of skip connections between the convolutional layers, relying on residual mapping instead of plain stacked learning. Skip connections manage the vanishing gradient problem as the depth of the network increases. ResNet50 has 48 convolutional layers with 1 max pooling and 1 average pooling layer.

Voting
The voting based ensemble is the simplest one; it takes the output predictions (votes) of each individual classifier, and the final prediction is the one which receives the highest number of votes. The two main variants are majority voting (the final prediction is the one which has more than half of the votes) and weighted voting (a weight is assigned to one or more of the best individual classifiers, so their votes count multiple times).
Let us take four classifiers, where C_1 predicts class 0, while C_2, C_3 and C_4 predict class 1.
Hence the majority voting based ensemble outputs class 1. In our work, we use majority voting with SVM, random forest and K-NN as base classifiers for each feature set.
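The setup above maps directly onto scikit-learn's `VotingClassifier` with hard (majority) voting; the synthetic data here stands in for any of the feature sets:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           random_state=0)

# Hard (majority) voting over the three base classifiers used in this work
ensemble = VotingClassifier(
    estimators=[("svm", SVC()),
                ("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    voting="hard")
ensemble.fit(X, y)
preds = ensemble.predict(X[:5])
```

Setting `voting="soft"` with calibrated probabilities would instead average class probabilities, which is one way to realise weighted voting.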

Manipulating training samples
In this approach, various classifiers are generated from different subsets of the training samples. It is most effective for unstable learning algorithms, whose outputs change markedly with small changes in the training data.

iii. Cross validation
Another ensemble approach creates subsets of the training data. For example, we create 5 overlapping training subsets by leaving out a different disjoint subset from each training set. This procedure is called K-fold cross validation, where K training sets are designed with a different held-out subset in each fold. An ensemble designed in this way is called a cross-validated committee [34].
To evaluate this approach in our work, we use 3-fold cross validation in each experiment.
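The 3-fold evaluation can be sketched with scikit-learn's `cross_val_score`; the digits dataset and the plain SVC stand in for our feature sets and tuned classifiers:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# cv=3: three models, each trained with a different disjoint fold held out,
# as in the cross-validated-committee construction described above
scores = cross_val_score(SVC(), X, y, cv=3)
```

Averaging the three fold scores gives the cross-validation accuracy reported per feature set.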

ii. Boosting
AdaBoost [33] generates multiple training sets with initial weights assigned to each training example. In each iteration i, the learning algorithm applied to the weighted training set produces a hypothesis; the weights of misclassified examples are then increased so that subsequent hypotheses focus on them, and the final decision is a weighted vote over all hypotheses.
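This reweighting loop is what scikit-learn's `AdaBoostClassifier` implements; the depth-1 trees and estimator count below are illustrative, not the exact settings used in the experiments:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)

# AdaBoost over depth-1 decision trees (stumps): after each round, the weights
# of misclassified training examples are increased so the next tree focuses on
# them; the final prediction is a weighted vote over all 50 stumps.
booster = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                             n_estimators=50, random_state=0)
booster.fit(X, y)
```

The base learner is passed positionally because its keyword name changed across scikit-learn versions (`base_estimator` became `estimator`).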

Manipulating Input features
It is an open issue in machine learning to identify the best combination of a particular feature set and classifier for a given dataset. Hence, researchers have found an alternative called ensemble learning. It combines the complementary nature of different classifiers and of the features extracted from the dataset to boost pattern recognition performance. Various subsets of features are used individually and in combination to create the ensembles. Our proposed script identification approach has three traditional and one deep learning feature extraction methods.
By manipulating these input features we have designed 15 different feature sets, which are classified using various classifiers.
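The 15 feature sets follow from taking every non-empty subset of the four extraction methods (2^4 - 1 = 15), which can be enumerated directly:

```python
from itertools import combinations

methods = ["Gabor", "GLCM", "HOG", "deep"]  # 3 traditional + 1 deep extractor

# Every non-empty subset of the four extraction methods: 2**4 - 1 = 15 sets
feature_sets = [subset
                for r in range(1, len(methods) + 1)
                for subset in combinations(methods, r)]
```

Each subset is then realised by concatenating the corresponding descriptors and classifying with each base classifier.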

Experimental results and Discussion
In this research we have explored and implemented various shallow and deep learning models for script identification to find the best combination of classifiers and features. To extract deep features, many pre-trained models have been used with the help of transfer learning. To train and implement the deep networks, Keras, a deep learning library with TensorFlow as backend, has been used. A GPU based system has been used to fine tune the pretrained deep networks. All experiments were run in Jupyter notebooks using scikit-learn and Keras for the machine learning and deep learning operations respectively. A system with an Intel Xeon(R) processor, an NVE7 graphics card and 8 GB RAM was used for the experimental work.
Many works have been reported for Gurumukhi and English script, but this is the first work combining various deep learning features with traditional features using transfer learning models for script identification in Gurumukhi and English text. To explore further, many ensemble approaches have been used to evaluate the in-depth effect of traditional and deep learning features. In this section, along with recognition accuracies for each feature and classifier set, a confusion matrix for each feature set has been designed and feature visualization has been performed to compare the different feature sets.

Experimental Dataset
There is no benchmark multilingual dataset available for Gurumukhi and English pre-segmented handwritten characters. Hence, for English numerals and characters we are using the Extended-MNIST (EMNIST) dataset.

Training models
To fine tune the pre-trained models, we have used data augmentation on the training and testing datasets. Two parameters of the SVM, the regularization parameter C and the kernel parameter gamma, are optimized with the help of GridSearchCV, with the exponents of gamma and C drawn from the ranges (-3, 0, 4) and (-1, 1, 3) respectively. This yields the optimal values of C and gamma for each feature set. For K-NN we again applied GridSearchCV for the optimal value of K in the range (1, 25). For random forest, the number of estimators used is 1000. The complete details of the optimized parameters are given in Tables 4 and 5.
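The SVM tuning step can be sketched as follows; the digits dataset stands in for our feature sets, and the log-spaced grids mirror the exponent ranges quoted above:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Log-spaced candidate values: gamma over 4 points in [1e-3, 1], C over
# 3 points in [1e-1, 1e1]; the exact per-feature-set grids may differ.
param_grid = {"C": np.logspace(-1, 1, 3), "gamma": np.logspace(-3, 0, 4)}
search = GridSearchCV(SVC(), param_grid, cv=3)  # 3-fold CV per candidate
search.fit(X, y)
best = search.best_params_
```

`GridSearchCV` refits the SVM with the best (C, gamma) pair on the full training set, which is the model then evaluated per feature set.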

Results
We have conducted exhaustive experiments to find the most significant feature and classifier combinations for script identification on our dataset. The results are summarized in Figures 9 and 10 using confusion matrices.

Experiment 3: Combining various features for script identification
To analyze the effect of combining different kinds of features, we have designed 11 combinations using deep learning and traditional features. In this experiment we design multiple classifiers from each feature set with three base classifiers, SVM, RF and K-NN, resulting in 33 different classifiers.

Voting based ensemble approach
To combine the complementary nature of the different classifiers designed above in a multiple classifier system, we evaluate a voting based ensemble approach. Table 8 presents the cross validation accuracies.

Bagging and Boosting
To experiment further with other ensemble approaches, we considered bagging and boosting. In the bagging based ensemble experiment, three base classifiers, linear SVM, K-NN and random forest, have been used for each feature set. Table 9 presents the accuracies reported for the various feature sets. Parameter optimization was another major challenge; GridSearchCV helped in finding the optimal parameters, but it was a time consuming task. Moreover, from the feature visualization (Fig. 11) we found that features extracted from deep networks are capable enough of differentiating the classes without traditional classifiers, while for traditional feature sets the same was not true (Fig. 12): traditional feature sets further need machine learning classifiers to classify the objects.
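The bagging setup with K-NN as base learner, the pairing that performs best in Table 9, can be sketched as follows (the data and estimator count are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, random_state=0)

# Bagging with K-NN as base learner: each of the 10 K-NN models is trained on
# a bootstrap resample of the training set; predictions are combined by voting.
bag = BaggingClassifier(KNeighborsClassifier(n_neighbors=5),
                        n_estimators=10, random_state=0)
bag.fit(X, y)
```

The base learner is passed positionally to stay compatible across scikit-learn versions, where the keyword changed from `base_estimator` to `estimator`.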

Experiment 6: Comparative analysis with previous work
In this section we perform a comparative analysis of the performance of various models against the proposed one. Tables 6 to 10 give the complete analysis of the results obtained using different feature sets with different classification techniques. The proposed approaches are compared in Table 11 with existing state-of-the-art approaches for script identification.
The t-SNE visualization approach models pairwise similarities and dissimilarities between objects using probability distributions; to find the similarity in the data, a distance such as the Euclidean distance is used. Fig. 11 shows the features learned using the various pre-trained models. With VGG19 and ResNet50 features, a clear separation between the various classes can be seen, which reflects the high accuracy achieved by these models during classification. On the other hand, with LeNet5 we see that the data of some classes overlaps with other classes, indicating misclassification: classes 3 and 4 are clearly separated from classes 0, 1 and 2, while classes 0, 1 and 2 mix with each other.

Discussion
We have evaluated script identification in Gurumukhi and English script for numeral as well as character datasets with 5 different classes. Many handcrafted and deep learning features have been used to find the best match with efficient accuracy, and we propose the best combination of feature sets and classifiers for script identification. We observed that deep features extracted from pretrained models performed much better than traditional features for classification. The use of transfer learning saved a lot of time for feature extraction with deep models: the pretrained models were trained on the ImageNet dataset and further fine tuned on our handwritten character dataset. Another high point of deep features is that parameters are learned during training of the network, unlike with handcrafted features. Real data is non-linear in nature, and deep networks are best at generalizing such non-linear data due to the use of activation functions, which is not possible with handcrafted features. Further, the results in the form of confusion matrices as well as feature visualization graphs show that misclassification mostly occurs among upper case and among lower case English characters, while the discrimination between English and Gurumukhi is satisfactory.
The tabulated results reveal that boosting based ensemble methods perform worse than the other approaches. For our datasets, AdaBoost reported lower accuracy than the voting and bagging based classifiers, even with the weakest base classifier, i.e. a decision tree. AdaBoost reports an improvement of 20.80% and 26.57% with random forest as base classifier and a combination of traditional and deep learning features (LeNet and HOG), over decision tree and SVM as base classifiers respectively.
For the bagging based ensemble approach, K-NN as base classifier performed well for almost every feature set compared to the other base classifiers, SVM and random forest (Table 9). K-NN as base classifier performed 7.99% better than SVM and 1.06% better than random forest. The combination of deep learning and traditional features (HOG) gave the highest results with K-NN. With SVM and random forest as base classifiers, bagging works well with combinations of traditional feature extraction methods (HOG, GLCM, Gabor, and GLCM, HOG respectively).

Sukhandeep Kaur et al.
For the multi-classifier system, the voting based ensemble with SVM, random forest and K-NN as base classifiers outperformed all other ensemble classifiers in this work. The combination of traditional feature extraction approaches, i.e. Gabor and HOG, with the voting based ensemble reported an improvement of 1.3%, 1.46% and 2.79% over the individual classifiers SVM, random forest and K-NN respectively.

Conclusion and Future directions
In this paper we have performed script recognition at character level for bilingual handwritten documents containing Gurumukhi and Latin scripts.

Conflict of interest
The authors declare that they have no conflict of interest.
Informed Consent
Informed consent was obtained from all individual participants included in the study.

Authorship Contributions
All authors contributed to the study conception, data collection and design of the research. The first draft of the manuscript was written by SK, and all other authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.