This paper introduces a novel method for satellite colour-imagery retrieval based on an Adaptive Gaussian Markov Random Field (AGMRF) model combined with a Bayes-driven Deep Convolutional Neural Network (AGMRF-BDCNN). The input imagery is decomposed into structure, microstructure, and texture components; AGMRF-driven features and statistical features are extracted from these components and formulated as the feature vector of the query imagery. Cosine-direction and Bhattacharyya distance measures are then used to match the query feature vector against those stored in the feature-vector database. Reference images whose features match the query are marked, indexed, and retrieved. Three benchmark datasets, SceneSat, PatternNet, and UC Merced, have been used to validate the proposed AGMRF-BDCNN method. On SceneSat, the method achieves an ANMRR of 0.2319 and a mAP of 0.7156; on UC Merced, an ANMRR of 0.2316 and a mAP of 0.7816; and on PatternNet, an ANMRR of 0.2405 and a mAP of 0.6979. The obtained results are comparable to state-of-the-art methods.
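The matching step relies on two standard measures, cosine similarity and the Bhattacharyya distance. A minimal sketch of both on plain Python lists is given below; the example vectors are illustrative and not taken from the paper's datasets.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors:
    # dot(a, b) / (||a|| * ||b||).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def bhattacharyya_distance(p, q):
    # Treat the vectors as discrete distributions: normalise each,
    # then compute D_B = -ln( sum_i sqrt(p_i * q_i) ).
    sp, sq = sum(p), sum(q)
    bc = sum(math.sqrt((x / sp) * (y / sq)) for x, y in zip(p, q))
    return -math.log(bc)

# Hypothetical query and reference feature vectors (illustrative only).
query = [0.2, 0.5, 0.1, 0.2]
reference = [0.25, 0.45, 0.1, 0.2]

print(cosine_similarity(query, reference))      # near 1.0 for similar vectors
print(bhattacharyya_distance(query, reference)) # near 0.0 for similar vectors
```

In a retrieval setting, both measures would be computed between the query vector and every stored feature vector, and database entries scoring above (cosine) or below (Bhattacharyya) a chosen threshold would be marked for retrieval.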