In recent years, attention mechanisms have contributed substantially to the representational ability of deep convolutional neural networks, particularly through attention modules embedded within them. Based on simple mathematical principles, this paper proposes an operation unit called Map-and-Acquisition (MA) to compute attention over the feature vectors of a convolutional neural network. Inspired by the convolution operation, MA extends the convolution operator to propagate spatial or channel features from the local receptive field to the global space or channel, and it operates on only one dimension of the image feature, avoiding the problems caused by coupling the spatial and channel dimensions. A weighting-and-summing operator adaptively adjusts the feature response in the channel or spatial domain to better extract high-level image features. MA is conceptually simple and effective: attention can be computed easily in both the channel and spatial domains, the module can be flexibly inserted into the backbone of a general-purpose deep convolutional neural network, and it adds very few parameters. We verify our method on image classification, object detection, and instance segmentation using the ImageNet-1K and COCO 2017 datasets, and we also evaluate it on few-shot image classification on the mini-Imagenet dataset. Experimental results show that our method significantly improves performance on these vision tasks.
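The abstract does not give the exact formulation of MA, but the described pattern (pool one dimension, mix local channel context convolution-style, then rescale responses by weighting and summing) can be illustrated with a minimal, hypothetical channel-attention sketch. The function name `channel_attention`, the fixed averaging kernel, and the sigmoid gating below are assumptions for illustration only, not the authors' actual MA operator.

```python
import numpy as np

def channel_attention(x, kernel_size=3):
    """Hypothetical sketch of a single-dimension attention unit:
    pool spatially to one descriptor per channel, run a 1-D
    convolution across channels (local receptive field to global
    channel context), squash to (0, 1) weights, and rescale the
    feature map. x: feature map of shape (C, H, W)."""
    c, h, w = x.shape
    # Global average pooling: one scalar descriptor per channel.
    desc = x.mean(axis=(1, 2))                       # shape (C,)
    # 1-D convolution over the channel dimension with "same" padding.
    k = np.ones(kernel_size) / kernel_size           # illustrative fixed kernel
    pad = kernel_size // 2
    padded = np.pad(desc, pad, mode="edge")
    mixed = np.array([padded[i:i + kernel_size] @ k for i in range(c)])
    # Sigmoid gives per-channel attention weights in (0, 1).
    weights = 1.0 / (1.0 + np.exp(-mixed))           # shape (C,)
    # Weighting step: adaptively rescale each channel's response.
    return x * weights[:, None, None]
```

A spatial variant would pool over channels instead and produce an (H, W) weight map; in either case the module preserves the feature-map shape, which is what allows it to be dropped into an existing backbone.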