In image reconstruction and deblurring, processing images using their frequency-domain features has become popular. In medical image segmentation, however, relatively few studies exploit frequency-domain features. Motivated by this gap, this paper proposes a wavelet transform-based image enhancement module that separates the high-frequency and low-frequency components of the original image and uses them to enhance the input features, thereby improving the model's segmentation accuracy. A single-branch network such as U-Net mixes high-frequency and low-frequency information during learning, which causes some useful information to be discarded. To avoid this, this paper designs a novel two-branch network named YNet, which employs two encoders to learn the high-frequency and low-frequency information of the image, respectively. To fuse the features learned by the two encoders effectively, this paper introduces an attention-based frequency feature fusion module; this fusion strategy alleviates the information loss caused by traditional feature fusion. The fused features are then passed to a shared decoder. Experimental results show that the proposed method achieves excellent segmentation accuracy on the Kvasir-SEG, CVC-ClinicDB, ISIC2018, and DSB2018 datasets, as measured by the mDice, mIoU, mPrecision, and mRecall metrics. These results are significant for the medical image segmentation field with respect to the effective integration of deep learning and traditional machine learning.
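To illustrate the frequency separation the abstract describes, the following is a minimal sketch (not the paper's implementation) of a single-level 2D Haar wavelet decomposition, which splits an image into one low-frequency approximation band and three high-frequency detail bands. The function name and the unnormalized averaging convention are illustrative assumptions.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar wavelet transform (averaging variant).

    Splits an image into a low-frequency approximation band (LL) and
    three high-frequency detail bands (LH, HL, HH). Assumes the image
    has even height and width.
    """
    a = img[0::2, 0::2].astype(float)  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 4.0  # low frequency: local average
    lh = (a - b + c - d) / 4.0  # high frequency: horizontal detail
    hl = (a + b - c - d) / 4.0  # high frequency: vertical detail
    hh = (a - b - c + d) / 4.0  # high frequency: diagonal detail
    return ll, (lh, hl, hh)

# A constant image has no high-frequency content: all detail bands are zero
# and the approximation band reproduces the constant value.
flat = np.full((4, 4), 7.0)
ll, (lh, hl, hh) = haar_dwt2(flat)
```

In a two-branch design like the one described, the LL band would feed the low-frequency encoder while the detail bands feed the high-frequency encoder.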