Detecting ships with high precision under low-light conditions is a critical challenge for maritime safety. In such conditions, ship boundaries become blurred and overall contrast diminishes, causing detection algorithms to produce missed and false detections. Many existing solutions apply deep learning networks for low-light enhancement, but these methods often lack adequate mechanisms to integrate shallow spatial and deep semantic information within the network. This paper proposes two improvement strategies for video ship detection. First, for the shallow layers of the backbone network, we design a block (SABlock) that enhances the model's attention to boundary information, allowing it to determine a ship's position and shape accurately. Second, for the deep layers of the backbone network, we design a block (DPBlock) that preserves detailed features by increasing contrast, introducing differences that retain deep-level details and enable the model to identify a ship's category more reliably. Building on these blocks, we present a shallow-deep hybrid feature fusion mechanism (S-D HFFM) that integrates shallow spatial and deep semantic information: SABlock and DPBlock are strategically embedded in the backbone network to form a feature extraction chain that hierarchically incorporates both types of information. We validate these strategies through experiments on YOLOv5s and YOLOv8s, demonstrating the feasibility of our approach. Experimental results show that S-D HFFM significantly improves the model's ability to recognize ship positions, shapes, and categories under low-light conditions.
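The abstract does not specify the internals of SABlock, DPBlock, or S-D HFFM, so the PyTorch sketch below is only one plausible reading of the two ideas: boundary attention via a fixed edge filter for the shallow block, and an unsharp-mask-style contrast residual for the deep block. The class names reuse the paper's terminology, but every layer choice, the fixed Laplacian kernel, the learnable gain, and the `SDHFFM` fusion wrapper are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of the two blocks named in the abstract. All design choices
# below are assumptions for illustration; the paper's real blocks may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SABlock(nn.Module):
    """Shallow-layer block: boundary-aware spatial attention (assumed design).

    A fixed depthwise Laplacian filter highlights edges; a 1x1 conv + sigmoid
    turns the edge response into a spatial map that reweights the features.
    """

    def __init__(self, channels: int):
        super().__init__()
        lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
        # One fixed (non-learned) Laplacian kernel per channel.
        self.register_buffer("lap", lap.expand(channels, 1, 3, 3).clone())
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        edges = F.conv2d(x, self.lap, padding=1, groups=x.shape[1])
        gate = torch.sigmoid(self.attn(edges))  # spatial attention map in (0, 1)
        return x * gate + x                     # residual keeps original features


class DPBlock(nn.Module):
    """Deep-layer block: detail preservation via local contrast (assumed design).

    Subtracting a locally averaged copy of the features isolates fine detail,
    which is re-added with a learnable per-channel gain to sharpen deep features.
    """

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size, stride=1, padding=kernel_size // 2)
        self.gain = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        detail = x - self.pool(x)      # high-frequency residual (local contrast)
        return x + self.gain * detail  # unsharp-mask-style enhancement


class SDHFFM(nn.Module):
    """Toy shallow-deep hybrid fusion: SABlock on shallow features, DPBlock on
    deep features, then concatenation after spatial alignment (assumed)."""

    def __init__(self, c_shallow: int, c_deep: int):
        super().__init__()
        self.sa = SABlock(c_shallow)
        self.dp = DPBlock(c_deep)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        s = self.sa(shallow)
        d = self.dp(deep)
        d = F.interpolate(d, size=s.shape[-2:], mode="nearest")
        return torch.cat([s, d], dim=1)


if __name__ == "__main__":
    # Shapes loosely mimic a YOLO backbone: shallow stride-8 and deep stride-32 maps.
    fuse = SDHFFM(c_shallow=64, c_deep=256)
    out = fuse(torch.randn(1, 64, 80, 80), torch.randn(1, 256, 20, 20))
    print(out.shape)  # torch.Size([1, 320, 80, 80])
```

Under these assumptions, the two blocks would be inserted at a shallow and a deep backbone stage respectively, and the fusion step supplies the detection head with both boundary-sensitive spatial features and contrast-sharpened semantic features, mirroring the chain described in the abstract.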