A good image editing model should learn mappings between styles across different domains, generate images of high quality and diversity, and scale well to new domains. In addition, given the importance of multi-device deployment, especially deployment on lightweight devices, lightweight optimization of the model is an essential task. Based on these requirements, OMGD-StarGAN, a new approach that optimizes and improves an existing base model, is proposed by combining a PatchGAN discriminator with the DynamicD dynamic training strategy, a ResNet-style generator, and modulated convolution. Extensive comparative experiments show that the proposed model reduces computational cost while improving the quality and diversity of the generated images.
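As a minimal sketch of the modulated convolution mentioned above (following the formulation popularized by StyleGAN2; the function name, shapes, and NumPy implementation here are illustrative assumptions, not the paper's actual code):

```python
import numpy as np

def modulated_conv2d(x, weight, style, demodulate=True, eps=1e-8):
    """Sketch of modulated convolution (StyleGAN2-style); illustrative only.

    x:      input feature map, shape (C_in, H, W)
    weight: convolution kernels, shape (C_out, C_in, K, K)
    style:  per-input-channel scales from a style vector, shape (C_in,)
    """
    # Modulate: scale each input channel of every kernel by the style.
    w = weight * style[None, :, None, None]
    if demodulate:
        # Demodulate: normalize each output kernel to unit L2 norm,
        # which keeps activation magnitudes roughly constant.
        norm = np.sqrt((w ** 2).sum(axis=(1, 2, 3), keepdims=True) + eps)
        w = w / norm
    # Plain "valid" cross-correlation with the modulated kernels.
    c_out, c_in, k, _ = w.shape
    h_out, w_out = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, h_out, w_out))
    for i in range(h_out):
        for j in range(w_out):
            patch = x[:, i:i + k, j:j + k]          # (C_in, K, K)
            out[:, i, j] = (w * patch).sum(axis=(1, 2, 3))
    return out
```

Folding the style scales into the kernel weights, rather than normalizing activations, is the design choice that lets the style modulate the convolution without a separate normalization layer.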