Aiming at the requirements of high accuracy, light weight, and real-time performance in panoptic driving perception systems, this paper proposes an efficient multi-task network (YOLOMH). The network uses a shared encoder and three independent decoder heads to simultaneously complete the three major panoptic driving perception tasks: traffic object detection, drivable area segmentation, and lane line segmentation. These capabilities stem from our design of the YOLOMH network structure: first, we design an appropriate information input structure based on the differing information requirements of the tasks; second, we propose a Hybrid Deep Atrous Spatial Pyramid Pooling (HDASPP) module to efficiently perform feature fusion in the neck network; finally, effective techniques such as an anchor-free detection head and Depthwise Separable Convolution (DSC) are introduced, making the network both lightweight and efficient. Experimental results show that our model achieves competitive accuracy and speed on the challenging BDD100K dataset; in particular, its inference speed on an NVIDIA Tesla V100 reaches 107 frames per second (FPS), far exceeding the 49 FPS of the YOLOP network under the same experimental settings. Visualization results further show that YOLOMH completes the panoptic driving perception tasks excellently, which is conducive to safe and reliable autonomous driving.
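
As a rough illustration of the shared-encoder, three-head layout and the lightweight components named above, the following is a minimal PyTorch sketch. The module names, channel sizes, and layer choices here are placeholder assumptions for exposition, not the paper's actual implementation; the `AtrousPyramid` stands in only loosely for the HDASPP module.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution (DSC): a per-channel depthwise conv
    followed by a 1x1 pointwise conv, cutting parameters and FLOPs versus
    a standard convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class AtrousPyramid(nn.Module):
    """ASPP-style fusion: parallel dilated 3x3 convs capture multi-scale
    context, then a 1x1 conv fuses the branches. Only analogous in spirit
    to HDASPP; the real module's structure is assumed, not reproduced."""
    def __init__(self, ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(ch * len(rates), ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class MultiTaskPerceptionNet(nn.Module):
    """Shared encoder feeding three independent decoder heads (detection,
    drivable area, lane line), so backbone features are computed once."""
    def __init__(self, num_det_outputs=85):
        super().__init__()
        # Shared encoder: a small conv stack standing in for backbone + neck.
        self.encoder = nn.Sequential(
            DepthwiseSeparableConv(3, 32), nn.ReLU(inplace=True),
            DepthwiseSeparableConv(32, 64), nn.ReLU(inplace=True),
            AtrousPyramid(64),
        )
        # Anchor-free detection head: predicts per-location box/class
        # outputs directly, with no predefined anchor boxes.
        self.det_head = nn.Conv2d(64, num_det_outputs, kernel_size=1)
        # Segmentation heads: per-pixel logits for the two mask tasks.
        self.drivable_head = nn.Conv2d(64, 2, kernel_size=1)
        self.lane_head = nn.Conv2d(64, 2, kernel_size=1)

    def forward(self, x):
        feats = self.encoder(x)  # features computed once, shared by all heads
        return (self.det_head(feats),
                self.drivable_head(feats),
                self.lane_head(feats))

# Usage: a single forward pass yields all three task outputs.
det, drivable, lane = MultiTaskPerceptionNet()(torch.randn(1, 3, 256, 256))
```

The design choice this sketch highlights is that the three decoder heads share one encoder pass, which is what makes a multi-task layout cheaper at inference time than running three separate single-task networks.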