Although deep neural networks (DNNs) are applied in various fields owing to their remarkable performance, recent studies have reported that DNN models are vulnerable to backdoor attacks. These attacks generate backdoored images by adding to the original training images a backdoor trigger that activates the attack. However, most previous attack methods produce triggers that are noticeable, unnatural to the human eye, and easily detected by certain defense methods. In this study, we propose an image synthesis-based backdoor attack approach that resolves these limitations: we set a conditional facial region, such as the hair, eyes, or mouth, as the trigger, and modify that region using an image synthesis technique that replaces the corresponding region of the original image with that of a target image. Consequently, we achieved an attack success rate of up to 88.37% with synthesized backdoored images injected into 20% of the training dataset, while maintaining the model's accuracy on clean images. Moreover, we analyzed the advantages of the proposed approach through image transformations, visualization of the activation regions of DNN models, and human tests. The proposed method can be implemented in both label-flipping and clean-label attack scenarios, and can thus be utilized as an attack approach that threatens the security of face classification tasks.
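To make the poisoning pipeline concrete, the following is a minimal Python sketch of region-swap poisoning under the label-flipping scenario described above. It is an illustration under assumptions, not the paper's implementation: all names (`synthesize_backdoored_image`, `poison_dataset`, `region_mask`, `target_label`, etc.) are hypothetical, and the naive masked pixel paste merely stands in for the image synthesis technique, whose details are not specified in this abstract.

```python
import numpy as np

def synthesize_backdoored_image(original, target, region_mask):
    """Replace a facial region (e.g., hair, eyes, or mouth) of `original`
    with the corresponding region of `target`, per a binary HxW mask.

    NOTE: this naive pixel paste is an assumed stand-in for the paper's
    image synthesis technique; a learned synthesis model would produce a
    more natural blend at the region boundary.
    """
    mask = region_mask[..., None].astype(original.dtype)  # HxW -> HxWx1
    return original * (1 - mask) + target * mask

def poison_dataset(images, labels, target_image, region_mask,
                   target_label, poison_rate=0.2, seed=0):
    """Inject backdoored images into `poison_rate` of the training set.

    Label-flipping scenario: each poisoned sample is relabeled to the
    attacker's target class (a clean-label variant would keep labels).
    """
    rng = np.random.default_rng(seed)
    num_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=num_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = synthesize_backdoored_image(images[i], target_image,
                                                region_mask)
        labels[i] = target_label  # flip label to the attacker's target
    return images, labels
```

Training a face classifier on the returned dataset would, under this sketch, associate the swapped facial region with `target_label`, so that any test image carrying that region is misclassified while clean images remain unaffected.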