Electroencephalography (EEG) records the electrical activity generated by the brain through electrodes placed on the scalp. Imagined speech classification from EEG has emerged as an important research area in brain-computer interfaces (BCIs). Despite significant advances, accurately classifying imagined speech signals remains challenging due to their complex and non-stationary nature, and existing approaches often struggle with low signal-to-noise ratios and high inter-subject variability. To address these issues, we propose the imagined speech functional connectivity graph (ISFCG), a representation that captures the relationships between brain regions during imagined speech tasks. Features extracted from these graphs serve as inputs to several machine learning models, and the ISFCG provides a more robust representation of imagined speech signals than traditional methods. In addition, we propose a convolutional neural network (CNN) that learns features directly from the connectivity graphs, leading to improved classification accuracy. Experimental results on a benchmark dataset demonstrate the effectiveness of our method.
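To make the graph representation concrete, the sketch below shows one plausible way a functional connectivity graph and a simple feature vector could be derived from a single multichannel EEG trial. The choice of Pearson correlation as the connectivity measure, the trial dimensions, and the function names are illustrative assumptions, not the exact ISFCG construction used in this work.

```python
import numpy as np

def connectivity_graph(eeg_trial):
    """Build a functional connectivity graph (weighted adjacency matrix)
    from one EEG trial of shape (n_channels, n_samples).

    Pearson correlation between channel time series is used here purely as
    an illustrative connectivity measure; the ISFCG may rely on a different
    metric (e.g. coherence or phase locking value).
    """
    # Channel-by-channel correlation matrix; remove self-connections.
    adjacency = np.corrcoef(eeg_trial)
    np.fill_diagonal(adjacency, 0.0)
    return np.abs(adjacency)  # edge weight = connection strength

def graph_features(adjacency):
    """Simple graph-level features that could feed a classical classifier:
    per-node strength (weighted degree) plus the upper-triangular edges."""
    strength = adjacency.sum(axis=1)
    edges = adjacency[np.triu_indices_from(adjacency, k=1)]
    return np.concatenate([strength, edges])

# Hypothetical usage: one trial of 64-channel EEG, 2 s at 256 Hz.
rng = np.random.default_rng(0)
trial = rng.standard_normal((64, 512))
A = connectivity_graph(trial)   # 64 x 64 adjacency matrix (graph)
x = graph_features(A)           # feature vector for a machine learning model
print(A.shape, x.shape)
```

In the same spirit, the adjacency matrix itself can be treated as a single-channel image and passed to a CNN, which is one straightforward way to let a convolutional model learn features from the connectivity structure.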