Paratuberculosis, also known as Johne's disease, is a systemic, chronic, progressive, and often serious disease of ruminants. The disease is caused by the acid-fast organism Mycobacterium avium subsp. paratuberculosis (Brown et al., 2007; Castellanos et al., 2012; Windsor, 2014). Clinical signs generally include watery diarrhea and weight loss, but the disease may be insidious in small ruminants. Affected animals characteristically exhibit progressive weight loss, soft feces, and exercise intolerance (Windsor, 2014). The effects of subclinical disease are less well defined but may include decreased milk production; subclinical infections also cause large economic losses in the livestock industry (Juste and Casal, 1993; Hutchinson, 1996).
Deep learning, a subset of machine learning, is an artificial intelligence method that uses multilayered artificial neural networks (ANNs) in fields such as object detection, speech recognition, and natural language processing (NLP). Unlike traditional machine learning methods that rely on hand-coded rules, deep learning methods learn automatically from representations of data in images, videos, sounds, and texts. Because they are flexible, they can also learn from raw image or text data, and their predictive accuracy can increase with the size of the dataset. However, deep learning carries out the learning process through examples (Kaya, 2019).
Deep learning models are black-box models. In studies using such models, clinicians cannot determine the models' limitations and cannot answer the question of how they achieve good diagnostic results. It has therefore become necessary to use additional methods that support the interpretability of traditional machine learning models or make deep learning models explainable. To produce 'visual explanations' for decisions made by Convolutional Neural Network (CNN) based models, a technique that makes them more transparent and explainable is essential.
While deep learning models provide superior performance, the inability to decompose them into individual, intuitive components makes them difficult to interpret. As a result, when today's intelligent systems fail, they often fail without warning or explanation, leaving the user looking at inconsistent output and wondering why the system did what it did. This is why interpretability is important. To build trust in intelligent systems and move toward their meaningful integration into daily life, we need 'transparent' models capable of explaining what they predict and why. Generally speaking, this transparency and explainability is useful at every stage of artificial intelligence (AI) development. Intuitive, graphical indicators of how a CNN makes decisions help users gain confidence in the model.
Class activation maps (CAMs) are a technique for obtaining the discriminative image regions used to identify a particular class in an image. In other words, they make it possible to see which parts of the image a trained model focuses on when classifying it. Compared with other CAM variants, the discriminative regions highlighted by the Grad-CAM procedure tend to be focused on the relevant parts of the image rather than on the background. The basic idea behind Grad-CAM is to exploit the spatial information preserved through the convolutional layers to understand which parts of an input image are important for a classification decision.
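The computation behind Grad-CAM can be sketched in a few lines: the gradients of the target class score with respect to a convolutional layer's feature maps are global-average-pooled into per-channel weights, and the weighted sum of the feature maps is passed through a ReLU. The sketch below is a minimal, framework-free illustration in NumPy; in practice the `feature_maps` and `gradients` arrays would be extracted from a forward and backward pass of the trained CNN, which is omitted here.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap for one target class.

    feature_maps: (K, H, W) activations A^k of the chosen conv layer.
    gradients:    (K, H, W) gradients d(score_c)/dA^k for class c.
    Both arrays are assumed to come from a trained CNN; plain arrays
    are used here so the computation itself is explicit.
    """
    # alpha_k: global-average-pool the gradients over the spatial dims
    weights = gradients.mean(axis=(1, 2))               # shape (K,)
    # weighted combination of feature maps, then ReLU
    cam = np.tensordot(weights, feature_maps, axes=1)   # shape (H, W)
    cam = np.maximum(cam, 0.0)
    # normalise to [0, 1] so it can be overlaid on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Illustrative call with random stand-in activations and gradients
rng = np.random.default_rng(0)
heatmap = grad_cam(rng.random((4, 7, 7)), rng.standard_normal((4, 7, 7)))
```

The resulting low-resolution heatmap is then upsampled to the input image size and overlaid on the histopathology image to visualize the regions driving the model's decision.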
A further set of methods was developed based on convolutional neural networks or on combinations of CNNs with handcrafted features. Because training CNN models is often complex and requires a large training set, initial studies (Kashif et al., 2016; Xing et al., 2016; Romo-Bucheli et al., 2016; Wang et al., 2016) focused on integrating CNNs with biologically interpretable handcrafted features, and these models showed strong performance on the touching-core segmentation problem.
Deep learning (DL) methods have also been studied for normalizing histology images; color normalization is an important research area in histopathological image analysis. Janowczyk et al. (2016) presented a method for stain normalization of H&E-stained histopathology images based on deep sparse autoencoders.
Sethi et al. (2016) emphasized the importance of color normalization for CNN-based tissue classification in H&E-stained images. Proposing a hybrid method based on persistent homology, Qaiser et al. (2019) were able to capture the degree of spatial connectivity between touching cores, which is quite difficult to obtain with CNN models (Sabour et al., 2017).
For a problem that a machine is asked to solve, it is sufficient to provide a model that enables the machine to find a solution by evaluating examples rather than following rule sets. The machine is given a simple list of instructions and is expected to carry out the learning process by correcting its errors. Model selection is decisive in solving the problem: a model chosen to suit the problem contributes more to its solution (Buduma, 2015).
The literature review revealed no similar study using deep learning to diagnose paratuberculosis, and no auxiliary diagnostic tool based on digital imaging has been developed for this disease. Hence, to the best of our knowledge, the present study is the first in this field. We evaluated many deep learning models for this study and present those with the highest performance.
The aim of this study is to present an approach to the histopathological diagnosis of paratuberculosis in sheep using image classification and explainable artificial intelligence with the Grad-CAM model.