Intent detection and slot filling are two closely related core tasks in spoken language understanding, and joint modeling of the two is an active research topic. However, most existing joint models adopt a unidirectional information flow: they consider only the influence of intent on slot information and ignore the flow from slots to intents. To address this issue, this paper proposes a graph-neural-network-based joint model for multi-intent detection and slot filling that models bidirectional information flow. BERT serves as a shared encoder to improve the quality of the extracted semantic features. An adaptive graph convolutional network is introduced to incorporate slot information into the intent detection task, enabling slot-to-intent information exchange. For slot filling, feature fusion and a graph attention mechanism are employed to aggregate intent information, realizing intent-to-slot information flow. Experimental results confirm the effectiveness of the bidirectional exchange: compared with the current GL-GIN model, the proposed model improves overall accuracy by 1.7% and 2.1% on two widely used datasets, respectively.
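As a rough illustration of the bidirectional exchange described above (not the paper's exact architecture), the intent-to-slot and slot-to-intent steps can both be viewed as graph-attention aggregation between two node sets, followed by feature fusion. The sketch below uses plain numpy; all dimensions, node counts, the GAT-style concat scoring, and the fusion-by-concatenation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_aggregate(H_q, H_k, W, a):
    """One graph-attention step: each query node attends over all
    neighbor nodes and aggregates their projected features,
    using GAT-style concat scoring e_ij = LeakyReLU(a^T [Wh_i || Wh_j])."""
    Wq, Wk = H_q @ W, H_k @ W                      # project both node sets
    # build all (query, neighbor) feature pairs, then score them
    pairs = np.concatenate(
        [np.repeat(Wq[:, None, :], Wk.shape[0], axis=1),
         np.repeat(Wk[None, :, :], Wq.shape[0], axis=0)],
        axis=-1,
    )
    alpha = softmax(leaky_relu(pairs @ a), axis=1)  # attention over neighbors
    return alpha @ Wk                               # aggregated neighbor info

# Toy setup: 6 token (slot) nodes and 2 predicted-intent nodes,
# each with d-dimensional features from a shared encoder (e.g. BERT).
d, dh = 8, 8
tokens  = rng.normal(size=(6, d))   # slot-level token representations
intents = rng.normal(size=(2, d))   # multi-intent label embeddings
W = rng.normal(size=(d, dh)) * 0.1
a = rng.normal(size=(2 * dh,)) * 0.1

# Intent -> slot: each token aggregates intent information.
slot_ctx = gat_aggregate(tokens, intents, W, a)     # (6, dh)
# Slot -> intent: each intent node aggregates slot information.
intent_ctx = gat_aggregate(intents, tokens, W, a)   # (2, dh)

# Simple feature fusion: concatenate the projected node features with
# the aggregated context before the task-specific decoders.
slot_features   = np.concatenate([tokens @ W,  slot_ctx],   axis=-1)
intent_features = np.concatenate([intents @ W, intent_ctx], axis=-1)
print(slot_features.shape, intent_features.shape)  # (6, 16) (2, 16)
```

In a trained model, `W` and `a` would be learned parameters, the paper's adaptive GCN would replace the fixed fully connected graph on the slot-to-intent side, and the fused features would feed the intent and slot decoders.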