Due to the scarcity of nighttime semantic segmentation datasets and the high demands placed on network models, semantic segmentation of nighttime scenes has progressed slowly. This paper proposes a new network model, ContourNet, which models images at multiple stages: local features rich in shallow appearance information, global features rich in deep semantic information, and intermediate-layer features. A multi-head decoder then fuses these different levels of features. Because texture and color information is barely perceptible in night images, this paper designs a separate contour network module dedicated to object contours; it runs in parallel with the semantic network module to achieve accurate prediction of object contours. The gains are most prominent for distant objects, small objects, and objects with highly continuous contours. Extensive experiments demonstrate that the proposed ContourNet significantly improves the nighttime semantic segmentation ability of existing models, and also improves the semantic segmentation accuracy of daytime images to a certain extent, showing good generalization ability. Specifically, adding the proposed contour module increases mIoU by 5.1% on the nighttime dataset Rebecca and by 2.5% on the daytime dataset CamVid.
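The two-branch design described above can be illustrated with a minimal structural sketch. This is a hypothetical toy reconstruction, not the paper's actual layers: a stand-in encoder produces local, intermediate, and global features; a multi-head decoder fuses all three levels into per-pixel class logits; and a parallel contour branch predicts a one-channel contour map from the same features. The function names (`encoder`, `multi_head_decoder`, `contour_branch`) and the 1x1-convolution stand-ins are assumptions made for illustration.

```python
# Toy sketch of a ContourNet-style two-branch architecture.
# All layer choices here are illustrative assumptions, not the paper's design.
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, out_ch):
    """Pointwise convolution stand-in: (C, H, W) -> (out_ch, H, W)."""
    w = rng.standard_normal((out_ch, x.shape[0])) * 0.1
    return np.einsum('oc,chw->ohw', w, x)

def upsample2x(x):
    """Nearest-neighbor 2x upsampling on (C, H, W)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def encoder(img):
    """Stand-in backbone yielding local / intermediate / global features."""
    local = conv1x1(img, 16)                    # (16, H,   W)   shallow appearance
    mid   = conv1x1(local[:, ::2, ::2], 32)     # (32, H/2, W/2) intermediate
    glob  = conv1x1(mid[:, ::2, ::2], 64)       # (64, H/4, W/4) deep semantic
    return local, mid, glob

def fuse(local, mid, glob):
    """Upsample coarser levels and concatenate all three along channels."""
    return np.concatenate(
        [local, upsample2x(mid), upsample2x(upsample2x(glob))], axis=0)

def multi_head_decoder(local, mid, glob, n_classes):
    """Semantic head: fused multi-level features -> per-pixel class logits."""
    return conv1x1(fuse(local, mid, glob), n_classes)

def contour_branch(local, mid, glob):
    """Parallel contour head: same features -> 1-channel contour map."""
    return conv1x1(fuse(local, mid, glob), 1)

img = rng.standard_normal((3, 32, 32))          # toy (C, H, W) input
feats = encoder(img)
sem = multi_head_decoder(*feats, n_classes=11)  # e.g. CamVid's 11 classes
contour = contour_branch(*feats)
print(sem.shape, contour.shape)                 # (11, 32, 32) (1, 32, 32)
```

The key structural point is that the contour branch consumes the same multi-level features but is trained as a separate head, so contour supervision can sharpen boundaries without being diluted by the dense semantic loss.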