Image quality assessment (IQA) models typically require a substantial training dataset, which complicates learning and hinders generalizability. Under the differential mean opinion score (DMOS) evaluation standard, we propose an IQA model that employs convolutional neural networks (CNNs) for feature extraction, focusing on the intricate relationship between an image's overall content and its distortion information. The model uses a U-Net to extract distortion details, followed by CNN-based feature extraction and attention detection on both the distorted and undistorted information. This process yields local quality scores and attention weights, which are combined into an overall image quality score by weighted summation. By prioritizing the extraction of distortion information, the model reduces interference from redundant data and thus lowers training complexity. To further streamline training, we introduce the \emph{prior distortion scale} (PDS) as an estimate of the number of U-Net iteration steps, eliminating the need for additional parameters. Evaluations on five datasets, covering both synthetic and real distortions, show that the proposed model outperforms state-of-the-art IQA algorithms across multiple metrics, and the comprehensive experimental results validate the efficacy of our approach.
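The pooling step described above, combining local quality scores and attention weights by weighted summation, can be sketched as follows. This is a minimal illustration only: the softmax normalization of attention logits, the array shapes, and the toy scores are assumptions for the example, not the paper's exact formulation.

```python
import numpy as np

def weighted_quality_score(local_scores, attention_logits):
    """Combine per-region quality scores into one overall image score.

    local_scores     : (N,) predicted quality of each local region
    attention_logits : (N,) unnormalized attention value for each region
    """
    # Normalize attention logits into weights summing to 1 (softmax),
    # assumed here as the normalization scheme.
    exp = np.exp(attention_logits - attention_logits.max())
    weights = exp / exp.sum()
    # Overall quality is the attention-weighted sum of local scores.
    return float(np.dot(weights, local_scores))

# Toy example: three local regions with equal attention logits,
# so the result reduces to the plain mean of the local scores.
scores = np.array([0.8, 0.5, 0.2])
logits = np.zeros(3)
print(weighted_quality_score(scores, logits))  # -> 0.5
```

With uniform attention the weighted sum degenerates to an average; in the actual model the attention weights would emphasize regions whose distortion dominates perceived quality.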