With the rapid expansion of social media as a means of quick communication, real-time disaster information is now widely disseminated through these platforms. Determining which real-time, multi-modal disaster information can effectively support humanitarian aid has become a major challenge. In this paper, we propose a novel end-to-end model, named GCN-based Semi-supervised Multi-modal Domain Adaptation (GSMDA), which consists of three essential modules: a GCN-based feature extraction module, an attention-based fusion module, and an MMD domain adaptation module. The GCN-based feature extraction module integrates text and image representations through graph convolutional networks (GCNs), while the attention-based fusion module then merges these multi-modal representations using an attention mechanism. Finally, the MMD domain adaptation module alleviates the dependence of GSMDA on source-domain events by computing the maximum mean discrepancy (MMD) across domains. Our experimental results demonstrate that GSMDA outperforms current state-of-the-art models in both performance and stability.
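The cross-domain discrepancy term can be illustrated concretely. Below is a minimal NumPy sketch of the (biased) squared MMD estimate between source- and target-domain feature samples using an RBF kernel; the function names and the `gamma` bandwidth are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # Biased estimate of squared MMD between samples X (source) and Y (target):
    # mean k(x, x') + mean k(y, y') - 2 * mean k(x, y)
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())
```

Identical samples yield an MMD of zero, while samples drawn from shifted distributions yield a positive value; a domain adaptation module would typically add this quantity to the training loss so the feature extractor learns domain-invariant representations.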