Class imbalance is widely recognized as a significant factor that degrades a machine learning classifier's performance. One technique for mitigating this problem is to balance the data distribution with sampling-based approaches, in which synthetic samples are generated according to the class distribution. However, this process is sensitive to noise in the data, which blurs the boundary between the majority and minority classes and shifts the learned decision boundary away from the ideal one. In this work, we propose a framework with two primary objectives: first, to address class distribution imbalance by synthetically oversampling the minority class; and second, to devise an efficient noise reduction technique that improves the class-balancing algorithm. The proposed framework focuses on removing noisy instances from the majority class, thereby providing more accurate information to the subsequent synthetic data generator. Experimental results show that our framework improves the prediction accuracy of eight classifiers by between 7.78% and 67.45% on the eleven datasets tested.
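
To make the "clean the majority class, then oversample the minority class" idea concrete, the sketch below shows a generic version of such a pipeline. It is only an illustration under assumed tooling: it uses the imbalanced-learn library with Edited Nearest Neighbours as a stand-in noise filter and SMOTE as the synthetic data generator, neither of which is necessarily the specific technique proposed in this work.

```python
# A minimal sketch of a clean-then-oversample pipeline, assuming the
# imbalanced-learn library. ENN and SMOTE here are illustrative stand-ins,
# not the exact noise-reduction or generation method proposed in the paper.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import EditedNearestNeighbours
from imblearn.over_sampling import SMOTE

# Imbalanced toy data: ~95% majority, ~5% minority, with some label noise.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           n_informative=4, flip_y=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipeline = Pipeline([
    # Step 1: remove ambiguous (noisy) majority-class samples near the boundary.
    ("clean", EditedNearestNeighbours(sampling_strategy="majority")),
    # Step 2: synthetically oversample the minority class on the cleaned data.
    ("oversample", SMOTE(random_state=0)),
    # Step 3: train any downstream classifier on the rebalanced data.
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_tr, y_tr)
print("balanced accuracy:", balanced_accuracy_score(y_te, pipeline.predict(X_te)))
```

Applying the noise filter before oversampling keeps mislabeled or borderline majority points from being treated as reliable neighbors by the generator, which is the ordering the proposed framework relies on.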