Speech enhancement for hearing aids is an active research domain in the field of artificial intelligence. Recent developments in multi-channel speech enhancement include augmented reality in hearing aids. Alongside this ongoing progress in the evolution of hearing aids, the author proposes the use of Quantum Convolutional Neural Networks (QCNNs) for source separation in hearing aids. Compared to speech enhancement for smartphones and other devices, speech enhancement for hearing aids is challenging because of limited signal-processing resources and the very slow evolution of the electrical components and electronic circuits involved. Hearing aids are currently in their third generation of evolution, in which they connect to a personal assistant on the cloud that answers the wearer's questions, monitors the individual's health, and streams music and phone calls. However, there has been no significant improvement in speech enhancement for hearing aids, either from the electrical and electronic components or from cloud/mobile-application-supported hearing aids. The author therefore worked on speech enhancement for hearing aids using a Quantum Convolutional Neural Network (QCNN) and proposes it for cloud/mobile-app-supported hearing aids, with data collected with the consent of the hearing-impaired population. The author compares the performance of classic blind source separation with the QCNN, obtaining 98% accuracy in model training and roughly 75% accuracy in source separation from real-time audio. The author hopes that QCNNs may be used in the future of speech enhancement for hearing aids.
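The abstract does not detail the QCNN architecture. As a rough illustration of the quanvolution idea only, not of the author's actual model, the sketch below simulates a 4-qubit quanvolutional filter over 2x2 patches of a toy magnitude spectrogram in pure NumPy: pixel values are angle-encoded as RY rotations, passed through a fixed CNOT entangling ring and a rotation layer with (here random) weights, and each qubit's Z expectation becomes one output channel. All names and circuit choices are assumptions for illustration.

```python
import numpy as np

# --- minimal 4-qubit state-vector simulator (assumed circuit, for illustration) ---
I2 = np.eye(2)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def single_qubit_op(gate, qubit, n=4):
    # Tensor the 2x2 gate into the full 2^n-dim operator (qubit 0 = most significant)
    out = gate if qubit == 0 else I2
    for q in range(1, n):
        out = np.kron(out, gate if q == qubit else I2)
    return out

def cnot(control, target, n=4):
    # Permutation matrix flipping `target` when `control` is 1
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1.0
    return U

def quanv_filter(patch, weights):
    """Apply a 4-qubit quanvolutional filter to a 2x2 patch in [0, 1].
    Returns one Z-expectation value per qubit (4 output channels)."""
    n = 4
    state = np.zeros(2 ** n)
    state[0] = 1.0
    # angle-encode the four pixels
    for q, x in enumerate(np.asarray(patch).ravel()):
        state = single_qubit_op(ry(np.pi * x), q) @ state
    # fixed entangling ring of CNOTs
    for q in range(n):
        state = cnot(q, (q + 1) % n) @ state
    # "trainable" rotation layer (random weights stand in for learned ones)
    for q, w in enumerate(weights):
        state = single_qubit_op(ry(w), q) @ state
    probs = state ** 2  # circuit is real-valued, so amplitudes squared
    expz = []
    for q in range(n):
        signs = np.array([1.0 if ((i >> (n - 1 - q)) & 1) == 0 else -1.0
                          for i in range(2 ** n)])
        expz.append(float(probs @ signs))
    return np.array(expz)

rng = np.random.default_rng(0)
weights = rng.uniform(0, 2 * np.pi, size=4)
spectrogram = rng.uniform(0, 1, size=(4, 4))  # toy magnitude spectrogram

# slide the filter over non-overlapping 2x2 patches -> 2x2x4 feature map
features = np.array([[quanv_filter(spectrogram[i:i + 2, j:j + 2], weights)
                      for j in (0, 2)] for i in (0, 2)])
print(features.shape)
```

In a full pipeline such a feature map would feed a classical network head that predicts a separation mask; here the example only shows the quantum filtering step.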