Small-object detection has long limited the progress of deep learning-based detection models. Downsampling in visual backbones causes a severe loss of the spatial information of small objects, making it much harder for models to localize them. Inspired by human visual perception, we observe that the main bottleneck in small-object detection is position regression rather than category discrimination. We therefore first introduce coordinate features that perform multi-scale spatial information perception and element-level, width- and height-independent coordinate encoding of image features, easing small-object detection for convolutional neural network models. Second, we design a lightweight one-stage detector for real-time small-object detection based on the coordinate feature scheme and a dense prediction architecture, and we propose lightweight cross-stage locally connected fusion attention modules, GSCSP-S and GSCSP-A, suited to feature maps of different scales. Finally, with only 7 million parameters, our detector achieves 32.3% mAP on the VisDrone2019 UAV detection benchmark, offering a superior accuracy-speed trade-off compared with current state-of-the-art one-stage real-time detectors of similar size.
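To make the coordinate-feature idea concrete, the following is a minimal PyTorch sketch of element-level, width- and height-independent coordinate encoding, assuming a CoordConv-style concatenation of normalized per-axis coordinate channels; the module name and exact formulation are illustrative and not the paper's implementation.

```python
import torch
import torch.nn as nn

class CoordEncoding(nn.Module):
    """Illustrative coordinate encoding: appends normalized, width- and
    height-independent x/y coordinate channels to a feature map.
    (Hypothetical sketch in the spirit of CoordConv, not the paper's module.)"""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        # Independent per-axis coordinates, normalized to [-1, 1].
        ys = torch.linspace(-1.0, 1.0, steps=h, device=x.device)
        xs = torch.linspace(-1.0, 1.0, steps=w, device=x.device)
        y_grid = ys.view(1, 1, h, 1).expand(b, 1, h, w)
        x_grid = xs.view(1, 1, 1, w).expand(b, 1, h, w)
        # Concatenate coordinate channels so subsequent convolutions can
        # exploit explicit positional information for localization.
        return torch.cat([x, x_grid, y_grid], dim=1)

# Usage: feat = CoordEncoding()(torch.randn(2, 64, 80, 80))  # -> (2, 66, 80, 80)
```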