Event extraction is a core task in natural language processing (NLP), converting unstructured text into structured representations that benefit downstream applications such as automated question answering and information retrieval. However, conventional event extraction methods face challenges from small datasets, imbalanced sample distributions, the cost of annotating large corpora, and the risk of data-quality degradation during augmentation. To address these issues, this study introduces a self-data augmentation strategy in which a single large language model (LLM) performs data augmentation and event extraction concurrently. By dynamically assessing and refining the quality of generated samples, the approach limits the introduction of noisy data and thereby improves model performance. Consistent gains in precision, recall, and F1 score across model configurations demonstrate the effectiveness of the strategy on small and imbalanced datasets. In addition, the Logical Thoughts for Self-Data Augmentation (LoTSA) component ensures the quality of the augmented data, yielding more accurate and reliable extraction results.
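The generate-then-filter loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `call_llm`, the prompt formats, and the 0.7 quality threshold are all assumptions, and the LLM is mocked so the example is self-contained and runnable.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call (mocked here)."""
    if prompt.startswith("AUGMENT: "):
        # Pretend the model produced a paraphrase of the seed sample.
        return prompt.removeprefix("AUGMENT: ") + " (paraphrased)"
    if prompt.startswith("SCORE: "):
        # Pretend the model rated sample quality on [0, 1];
        # this mock simply scores longer samples higher.
        text = prompt.removeprefix("SCORE: ")
        return f"{min(len(text) / 50.0, 1.0):.2f}"
    raise ValueError("unknown prompt type")


def self_augment(seed_samples: list[str], threshold: float = 0.7) -> list[tuple[str, float]]:
    """Generate one augmented copy per seed with the same LLM that will be
    used for extraction, keep it only if the model's own quality score
    clears the threshold (filtering out noisy generations)."""
    kept = []
    for seed in seed_samples:
        candidate = call_llm(f"AUGMENT: {seed}")
        score = float(call_llm(f"SCORE: {candidate}"))
        if score >= threshold:
            kept.append((candidate, score))
    return kept


seeds = [
    "The company announced a merger with its rival on Monday.",
    "Fire.",  # too little content; the mock scorer rejects it
]
augmented = self_augment(seeds)
```

In a real system the scoring prompt would ask the LLM to judge whether the generated sample preserves the original event type and arguments, so the same model both expands the dataset and guards its quality.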