To address the uncertainty and complexity of object decision making and the differing reliabilities of decision makers, an object decision making method based on deep learning theory is proposed. However, traditional deep learning approaches optimize their parameters in an "end-to-end" fashion, back-propagating errors over large amounts of annotated data. Such learning behaves as a "black box" and offers weak explainability. Explainability means that an algorithm can give a clear summary of a particular task and connect it to defined principles in the human world. This paper proposes an explainable attention model consisting of a channel attention module and a spatial attention module. The proposed module derives attention maps along the channel and spatial dimensions respectively, and the input features are then selectively learned according to their importance. For channel attention, a higher weight indicates a stronger correlation and therefore demands more attention. Spatial attention captures the most informative regions of the local feature map, complementing channel attention. We evaluate the proposed module on ImageNet-1K and CIFAR-100. Experimental results show that our method outperforms state-of-the-art approaches in both accuracy and robustness.
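The channel-then-spatial attention pipeline described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact design: the shared-MLP reduction ratio, the use of both average and max pooling, and the two-weight mixing that stands in for the usual convolution in spatial attention are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Rescale each channel by its importance.
    x: feature map of shape (C, H, W).
    w1, w2: shared MLP weights of shape (C, C//r) and (C//r, C) -- illustrative."""
    avg = x.mean(axis=(1, 2))          # global average pooling -> (C,)
    mx = x.max(axis=(1, 2))            # global max pooling -> (C,)
    # shared two-layer MLP (ReLU hidden layer) applied to both descriptors
    att = sigmoid(np.maximum(avg @ w1, 0) @ w2 + np.maximum(mx @ w1, 0) @ w2)
    return x * att[:, None, None]      # higher weight -> channel kept stronger

def spatial_attention(x, w):
    """Rescale each spatial position; complements channel attention.
    w: 2-vector mixing the channel-wise avg and max maps (stand-in for a conv)."""
    avg = x.mean(axis=0)               # (H, W) average over channels
    mx = x.max(axis=0)                 # (H, W) max over channels
    att = sigmoid(w[0] * avg + w[1] * mx)
    return x * att[None, :, :]         # emphasize informative locations

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2               # hypothetical sizes; r is the reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
y = spatial_attention(channel_attention(x, w1, w2), np.array([1.0, 1.0]))
print(y.shape)  # refined features keep the input shape
```

Because both attention maps pass through a sigmoid, every weight lies in (0, 1), so the module only rescales features and preserves the input's shape, which is what lets it be dropped into an existing backbone.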