Diabetic Retinopathy (DR) is a leading cause of blindness among adults and often presents no symptoms in its early stages. Early detection is therefore key to preventing vision loss. Computer-Aided Diagnosis using Convolutional Neural Networks (CNNs) has recently gained momentum for DR detection, as it can significantly reduce cost while making diagnosis more accessible. Although these models achieve high accuracy, confidence in their diagnoses remains elusive. In our opinion, the way to make a model's diagnosis reliable is to strengthen trust in its predictions. In this work, we present a methodology to improve confidence in DR diagnosis using state-of-the-art deep learning techniques by extending the pipeline with model uncertainty and explainability: alongside the model's prediction, we produce uncertainty scores and explainability maps to improve users' confidence in the diagnosis. Further, we attempt to generalise CNN models by exploring ensembling techniques such as bagging to improve the performance of stand-alone models on external datasets, i.e., datasets other than the one used to train the model.
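The abstract does not specify how the uncertainty scores or the bagged ensemble are computed. A common choice for the former is Monte Carlo dropout (Gal and Ghahramani, 2016), and bagging is typically realised by averaging the predictions of models trained on bootstrap resamples of the data. The sketch below is a minimal, hypothetical illustration of those two ideas only, not this paper's implementation; all names (`SmallCNN`, `mc_dropout_predict`, `bagged_predict`) and hyperparameters are placeholders.

```python
# Hypothetical sketch: (1) uncertainty via Monte Carlo dropout,
# (2) inference-time averaging for a bagged ensemble.
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """Toy stand-in for a DR classifier; any CNN with dropout works."""

    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(p=0.5), nn.Linear(16, n_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))


def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Keep dropout active at inference and average several stochastic
    forward passes. Returns mean class probabilities and the predictive
    entropy, one common uncertainty score."""
    model.train()  # enables dropout; for real models, also freeze BatchNorm stats
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


def bagged_predict(models, x: torch.Tensor):
    """Bagging at inference: average the probabilities of models that were
    each trained on a bootstrap resample of the training set."""
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(m.eval()(x), dim=-1) for m in models]
        )
    return probs.mean(dim=0)


if __name__ == "__main__":
    x = torch.randn(4, 3, 64, 64)  # batch of dummy fundus-image tensors
    model = SmallCNN()
    mean_probs, entropy = mc_dropout_predict(model, x)
    print("MC-dropout entropy per image:", entropy)

    ensemble = [SmallCNN() for _ in range(3)]  # stand-ins for bagged models
    print("Bagged prediction:", bagged_predict(ensemble, x).argmax(dim=-1))
```

A high predictive entropy would flag an image whose diagnosis should be deferred to a clinician, which is one way the uncertainty score mentioned in the abstract could support trust in the pipeline.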