In recent years, convolutional neural networks (CNNs) have advanced rapidly, leading to numerous lightweight image super-resolution techniques tailored for deployment on edge devices. This paper examines the information distillation mechanism and the vast-receptive-field attention mechanism used in lightweight super-resolution, and introduces a new network structure, the vast-receptive-field feature distillation network (VFDN), which effectively improves inference speed and reduces GPU memory consumption. The receptive field of the attention block is expanded, and large dense convolution kernels are replaced with depth-wise separable convolutions. In addition, we modify the reconstruction block to obtain better reconstruction quality and introduce a Fourier transform-based loss function that emphasizes the frequency-domain information of the input image. Experiments show that the designed VFDN achieves results comparable to RFDN with only 307K parameters (55.81\% of RFDN), which is advantageous for deployment on edge devices.
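As a rough illustration of the frequency-domain objective mentioned above, the following PyTorch sketch penalizes the L1 distance between the 2D FFTs of the super-resolved and ground-truth images; the class name \texttt{FourierLoss} and the loss weight are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class FourierLoss(nn.Module):
    """Sketch of a Fourier transform-based loss: compares the 2D FFTs of the
    super-resolved (sr) and ground-truth (hr) images so that errors in
    high-frequency detail are penalized explicitly. Illustrative only; the
    weighting and exact formulation are assumptions."""

    def __init__(self, weight: float = 0.1):
        super().__init__()
        self.weight = weight

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        # Map both images (N, C, H, W) to the frequency domain per channel.
        sr_freq = torch.fft.fft2(sr, norm="ortho")
        hr_freq = torch.fft.fft2(hr, norm="ortho")
        # L1 penalty on the complex spectra (abs of a complex tensor is its magnitude).
        return self.weight * torch.mean(torch.abs(sr_freq - hr_freq))


# Hypothetical usage: combine with a standard pixel-wise L1 loss during training.
pixel_loss = nn.L1Loss()
freq_loss = FourierLoss(weight=0.1)
sr = torch.rand(1, 3, 128, 128)
hr = torch.rand(1, 3, 128, 128)
total_loss = pixel_loss(sr, hr) + freq_loss(sr, hr)
```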