Transformers hold a significant advantage over CNNs in modeling long-range dependencies, which has drawn increasing attention to their use in semantic segmentation. This work introduces LACTNet, a novel real-time semantic segmentation network that synergistically combines Transformer and CNN architectures. LACTNet pairs a lightweight Transformer, featuring a gated convolutional feedforward network, with CNNs so that each architecture compensates for the other's shortcomings. A Lightweight Average Feature Bottleneck (LAFB) module is designed to guide spatial detail information within the features, thereby enhancing segmentation accuracy. To address the loss of spatial features in the decoder, a long skip connection is employed through the designed Feature Fusion Enhancement Module (FFEM), which improves both the integrity of spatial features and the feature-interaction capability of the decoder. Experiments on the Cityscapes and CamVid datasets confirm that LACTNet attains mIoU scores of 74.8% and 71.8%, respectively, while maintaining real-time frame rates of 90 FPS and 126 FPS.
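The abstract does not detail the internals of the gated convolutional feedforward network, but the general idea behind such blocks (a GLU-style feedforward layer whose gate branch passes through a depthwise convolution to inject local spatial context) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not LACTNet's actual implementation; all function names, shapes, and the choice of sigmoid gating are assumptions for illustration only.

```python
import numpy as np

def depthwise_conv3x3(x, w):
    """Per-channel 3x3 convolution with zero padding.
    x: (C, H, W), w: (C, 3, 3)  -- shapes assumed for this sketch."""
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[:, i, j][:, None, None] * xp[:, i:i + H, j:j + W]
    return out

def gated_conv_ffn(x, w_val, w_gate, w_dw, w_out):
    """Hypothetical gated convolutional feedforward block (GLU-style).
    x: (C, H, W); w_val, w_gate: (hidden, C); w_dw: (hidden, 3, 3);
    w_out: (C, hidden)."""
    # Two pointwise (1x1-conv) projections: a value branch and a gate branch.
    v = np.einsum('oc,chw->ohw', w_val, x)
    g = np.einsum('oc,chw->ohw', w_gate, x)
    # Depthwise 3x3 conv on the gate branch adds local spatial context,
    # which a plain Transformer feedforward layer lacks.
    g = depthwise_conv3x3(g, w_dw)
    # Sigmoid gating: the gate modulates the value branch elementwise.
    h = v * (1.0 / (1.0 + np.exp(-g)))
    # Pointwise projection back to the input width, with a residual connection.
    return x + np.einsum('oc,chw->ohw', w_out, h)

# Small usage demo with assumed sizes.
rng = np.random.default_rng(0)
C, hidden, H, W = 8, 16, 4, 4
x = rng.standard_normal((C, H, W))
y = gated_conv_ffn(
    x,
    rng.standard_normal((hidden, C)) * 0.1,
    rng.standard_normal((hidden, C)) * 0.1,
    rng.standard_normal((hidden, 3, 3)) * 0.1,
    rng.standard_normal((C, hidden)) * 0.1,
)
```

The depthwise convolution keeps the block lightweight (its cost grows linearly in the channel count), which is consistent with the paper's emphasis on real-time inference.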