In recent years, federated learning-based intrusion detection has attracted attention because it preserves data privacy while improving the detection capability of local models. However, most existing methods in this domain are tailored to homogeneous models. Owing to factors such as hardware disparities and differing business requirements, local models are often heterogeneous, which significantly restricts the development and application of federated learning for intrusion detection. To address the challenge of model-heterogeneous federated learning for intrusion detection, this paper proposes a novel framework called zero-data knowledge distillation of federated learning for intrusion detection (ZDKD-FLID). The framework not only effectively handles model heterogeneity but also operates without relying on a public dataset. First, on the node side, the prediction model and the local model learn from each other through knowledge distillation. Second, on the server side, prediction models are selected and aggregated into a global prediction model. In addition, a generator optimized with particle swarm optimization is employed for generative adversarial learning, enabling the generation of samples. Finally, each local model is trained on samples that carry knowledge from the other heterogeneous models, effectively improving the accuracy of intrusion detection. To validate its efficacy, ZDKD-FLID is compared with state-of-the-art algorithms on the CICIDS-2017 and UNSW-NB15 datasets, where it demonstrates superior performance to all the other considered algorithms.
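The node-side knowledge distillation step can be illustrated with the standard softened-softmax distillation objective (Hinton et al.); the sketch below is a minimal NumPy illustration, not the paper's exact loss, and all function names and temperature values here are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T^2 as in standard knowledge distillation. Illustrative
    sketch only; ZDKD-FLID's exact objective may differ."""
    p = softmax(teacher_logits, T)   # soft targets from the teacher model
    q = softmax(student_logits, T)   # student's softened predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl) * T * T)

# A node would minimize this loss to transfer knowledge between its
# local model and the shared prediction model on local data.
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.5, 0.8, -0.5]])
loss = distillation_loss(student, teacher)
```

The loss is zero when the student reproduces the teacher's distribution exactly and grows as the softened predictions diverge.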
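The server-side generator is tuned with particle swarm optimization (PSO), a population-based search in which each particle moves under the pull of its own best position and the swarm's best. A minimal generic PSO sketch on a toy objective is shown below; in ZDKD-FLID the optimized variable would be generator parameters, and every name, coefficient, and bound here is an illustrative assumption rather than the paper's configuration.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + cognitive (personal best) + social (global best) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# Toy objective: a sphere centered at 1, standing in for a generator loss.
best, val = pso_minimize(lambda p: np.sum((p - 1.0) ** 2), dim=3)
```

PSO is gradient-free, which makes it a natural fit when the generator's objective (e.g., an adversarial score from heterogeneous client models) is not conveniently differentiable end to end.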