Convolutional neural network models, as well as the training samples they require, have grown in size in recent years. Mixed Sample Data Augmentation has been proposed to further improve model performance and has yielded good results: it allows the network to generalize more effectively and raises the model's baseline performance. The mixed sample strategies proposed so far can be broadly classified into interpolation-based and masking-based. However, interpolation-based strategies distort the data distribution, while masking-based strategies can obscure too much information. Although Mixed Sample Data Augmentation has been proven to be a viable technique for boosting the baseline performance, generalization ability, and robustness of deep convolutional models, there is still room for improvement in terms of local image consistency and data distribution. In this research, we present LMix, a new Mixed Sample Data Augmentation method that uses random masking to increase the number of image masks while preserving the data distribution, and high-frequency filtering to sharpen the images and emphasize recognition regions. Our experiments on the CIFAR-10, CIFAR-100, Fashion-MNIST, SVHN, and Tiny-ImageNet datasets show that LMix improves the generalization ability of state-of-the-art neural network architectures and enhances robustness to adversarial examples.
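The two ingredients described above, random-mask mixing of sample pairs and high-frequency sharpening, can be illustrated with a minimal sketch. This is a hypothetical NumPy implementation, not the paper's exact LMix procedure: the function name `lmix_sketch`, the per-pixel Bernoulli mask, and the unsharp-masking step (used here as a stand-in for the high-frequency filter) are all assumptions for illustration.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur via edge padding and shifted-copy averaging."""
    pad = k // 2
    p = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def lmix_sketch(x1, y1, x2, y2, mask_ratio=0.5, sharpen_amount=1.0, rng=None):
    """Hypothetical sketch of a masking-based mixed-sample augmentation:
    mix two images (H, W, C) in [0, 1] with a random per-pixel binary mask,
    then emphasize high frequencies with an unsharp-masking step.
    Labels are mixed by the realized mask fraction lam."""
    rng = np.random.default_rng() if rng is None else rng
    # Random binary mask; lam is the actual fraction of pixels kept from x1.
    mask = (rng.random(x1.shape[:2]) < mask_ratio).astype(float)[..., None]
    mixed = mask * x1 + (1.0 - mask) * x2
    lam = float(mask.mean())
    # High-frequency emphasis: add back the difference from a blurred copy.
    sharpened = np.clip(mixed + sharpen_amount * (mixed - box_blur(mixed)), 0.0, 1.0)
    mixed_label = lam * y1 + (1.0 - lam) * y2
    return sharpened, mixed_label
```

A training loop would draw pairs from a batch, call `lmix_sketch` on each pair, and train on the sharpened image with the soft mixed label, as in other mixed-sample schemes.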