Existing methods for annotating aerial imagery captured by unmanned aerial vehicles (UAVs) often perform poorly in multi-resolution scenarios, such as imagery acquired at different scales, under adverse weather, or in low light, where conventional annotation techniques are ineffective. To address this limitation, this paper proposes a data annotation approach designed specifically for multi-resolution UAV aerial imagery. Rather than relying on standard image enhancement alone, the approach incorporates multiple resolutions to maintain consistent, high-quality annotation across varied UAV imaging conditions. A new StyleGAN-based method is introduced that combines enhancement, generation, and discrimination networks to control image enhancement, and adds a regularization term to the objective to guide the generative network toward convergence. Experimental results show a significant improvement in annotation accuracy and robustness over existing models, particularly in complex multi-resolution environments.
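To make the role of the regularization term concrete, the sketch below shows one way a generator objective can be augmented with such a term in a StyleGAN-style training loop. This is a minimal PyTorch-style illustration under stated assumptions, not the paper's actual formulation: the non-saturating adversarial loss, the L1 consistency regularizer, the weight `lam`, and the function names are hypothetical placeholders.

```python
# Minimal sketch of a regularized generator objective (illustrative only).
# Assumptions: PyTorch, a non-saturating adversarial loss as in StyleGAN-style
# training, and an L1 consistency term standing in for the paper's
# regularization term; `generator` and `discriminator` are user-supplied modules.
import torch
import torch.nn.functional as F

def generator_loss(generator, discriminator, low_res_batch, reference_batch, lam=10.0):
    """Adversarial loss plus a regularization term that penalizes deviation
    of the enhanced output from a reference image, nudging the generative
    network toward stable convergence."""
    enhanced = generator(low_res_batch)        # enhancement / generation step
    adv_logits = discriminator(enhanced)       # discrimination step
    # Non-saturating adversarial loss on the generated (enhanced) images.
    adv_loss = F.softplus(-adv_logits).mean()
    # Hypothetical regularization term: L1 consistency with a reference image.
    reg_term = F.l1_loss(enhanced, reference_batch)
    return adv_loss + lam * reg_term
```

In this sketch the weight `lam` balances adversarial realism against fidelity to the reference; the paper's actual regularizer and weighting may differ.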