Although Convolutional Neural Networks (CNNs) outperform classical models in a wide range of Machine Vision applications, their restricted interpretability and their lack of comprehensibility in reasoning raise concerns about security, reliability, and safety. Consequently, there is a growing need for research that improves their explainability and addresses these limitations. In this paper, we propose a concept-based method, called Concept-Aware Explainability (CAE), to provide verbal explanations for the predictions of pre-trained CNN models. A new measure, called the detection score mean, is introduced to quantify the relationship between the filters of the model and a set of pre-defined concepts. Based on the detection score mean values, we define sorted lists of Concept-Aware Filters (CAF) and Filter-Activating Concepts (FAC). These lists are used to generate explainability reports, in which we can explain, analyze, and compare models in terms of the concepts embedded in the input image. The proposed explainability method is compared to state-of-the-art methods in explaining ResNet18 and VGG16 models, pre-trained on the ImageNet and Places365-Standard datasets. Two popular metrics, namely the number of unique detectors and the number of detecting filters, are used for quantitative comparison. Superior performance is observed for the proposed CAE when compared to the Network Dissection (NetDis) [1] and Net2Vec [2] methods.
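To make the CAF and FAC constructions concrete, the following is a minimal sketch, not the paper's implementation: it assumes a precomputed matrix of detection score means of shape (num_filters, num_concepts), and the helper names `caf_list`, `fac_list`, and the `top_k` parameter are hypothetical illustrations of how filters and concepts could be ranked against each other.

```python
# Minimal sketch (hypothetical, not the paper's code): given a precomputed matrix
# of detection score means with shape (num_filters, num_concepts), build sorted
# Concept-Aware Filter (CAF) and Filter-Activating Concept (FAC) lists.
import numpy as np

def caf_list(detection_scores: np.ndarray, concept_idx: int, top_k: int = 5):
    """Filters ranked by how strongly they detect a given concept (hypothetical helper)."""
    scores = detection_scores[:, concept_idx]
    order = np.argsort(scores)[::-1][:top_k]   # highest detection score mean first
    return [(int(f), float(scores[f])) for f in order]

def fac_list(detection_scores: np.ndarray, filter_idx: int, top_k: int = 5):
    """Concepts ranked by how strongly they activate a given filter (hypothetical helper)."""
    scores = detection_scores[filter_idx, :]
    order = np.argsort(scores)[::-1][:top_k]
    return [(int(c), float(scores[c])) for c in order]

# Toy usage with random values standing in for real detection score means.
rng = np.random.default_rng(0)
scores = rng.random((512, 100))                # e.g. 512 filters, 100 concepts
print(caf_list(scores, concept_idx=3))         # filters most aware of concept 3
print(fac_list(scores, filter_idx=42))         # concepts most activating filter 42
```

In this sketch, an explainability report would simply be the collection of such ranked lists for the concepts found in a given image; the actual definition of the detection score mean and the report format follow the method described in the paper.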