In many healthcare applications, classification datasets are highly imbalanced because target events such as disease onset occur rarely. The SMOTE (Synthetic Minority Over-sampling Technique) algorithm is an effective resampling method for imbalanced data classification that oversamples the minority class with synthetic samples. However, samples generated by SMOTE may be ambiguous, of low quality, and not separable from the majority class. To enhance the quality of generated samples, we propose a novel self-inspected adaptive SMOTE (SASMOTE) model that leverages an adaptive nearest-neighborhood selection algorithm to identify the “visible” nearest neighbors, which are used to generate samples that fall precisely within the minority class. To further improve sample quality, the proposed SASMOTE model introduces an uncertainty elimination via self-inspection approach, which filters out synthetic samples that have high uncertainty or are inseparable from the majority class. The effectiveness of the proposed algorithm is demonstrated through two real-world healthcare case studies, risk gene discovery and fatal congenital heart disease prediction, and compared with existing SMOTE-based algorithms. By generating higher-quality synthetic samples, the proposed algorithm achieves better prediction performance (in terms of F1 score) than the other methods, which is promising for enhancing the usability of machine learning models on highly imbalanced healthcare data.
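For concreteness, the two ideas behind the abstract can be sketched in code. The first function is the classic SMOTE interpolation step (a new sample is placed on the line segment between a minority point and one of its k nearest minority neighbors); the second is an illustrative neighbor-vote filter in the spirit of the self-inspection step, which discards synthetic points whose nearest real neighbors are mostly majority-class. The function names and the voting rule are assumptions for illustration only, not the paper's actual SASMOTE procedure.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Classic SMOTE: interpolate between a minority sample and one of
    its k nearest minority-class neighbors (k must be < len(X_min))."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    nn = np.argsort(d, axis=1)[:, :k]    # k nearest neighbors per point
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)              # random minority seed point
        j = nn[i, rng.integers(k)]       # random neighbor of that point
        lam = rng.random()               # interpolation weight in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

def filter_by_neighbors(X_syn, X_min, X_maj, k=5):
    """Illustrative self-inspection filter (assumed rule, not the
    paper's): keep a synthetic sample only if more than half of its
    k nearest *real* neighbors belong to the minority class."""
    X_real = np.vstack([X_min, X_maj])
    labels = np.array([1] * len(X_min) + [0] * len(X_maj))
    d = np.linalg.norm(X_syn[:, None, :] - X_real[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]
    keep = labels[nn].mean(axis=1) > 0.5
    return X_syn[keep]
```

Because each synthetic point is a convex combination of two minority points, it always lies inside the minority class's per-feature bounding box; the filter then removes points that nevertheless end up surrounded by majority-class data (e.g. in a region where the classes overlap).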