With the advent of deep learning, scene text detection has gained significant prominence, with applications in diverse fields such as autonomous driving and scene text translation. Existing frameworks often encode text candidates independently using sequential deep convolutional backbones (e.g., ResNet, VGGNet). These methods struggle with several challenges, including the variability of text shapes, complex backgrounds, and difficulty retaining long-range contextual information. In contrast, our proposed Feature Enhancement Network (FENet) addresses arbitrarily oriented text detection by integrating attention mechanisms and Bi-directional Long Short-Term Memory (BLSTM). Our method has three key features: (i) recurrent strengthening of feature maps by combining an attention-based Convolutional Block Attention Module (CBAM) with a BLSTM-based sequence feature extraction module; (ii) modification of the bottleneck blocks in the ResNet backbone to support micro-batch training; and (iii) improved trainability through residual connections within the BLSTM-based sequence feature extraction module. We conducted extensive experiments on five public datasets: ICDAR 2013, ICDAR 2015, MSRA-TD500, SCUT-CTW1500, and ICDAR 2017-MLT. The results demonstrate the efficacy of the proposed method. Code will be released at https://github.com/lyy0117 to facilitate community research.
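To make the feature enhancement idea concrete, the following is a minimal PyTorch sketch of a CBAM-style attention block followed by a residual BLSTM that scans feature-map rows as sequences. The module names, channel sizes, and exact wiring are our own illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """CBAM-style attention: channel attention followed by spatial attention
    (after Woo et al., 2018). Reduction ratio 16 is an assumed default."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: shared MLP over avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over channel-pooled maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))            # (B, C)
        mx = self.mlp(x.amax(dim=(2, 3)))             # (B, C)
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)  # (B, 2, H, W)
        return x * torch.sigmoid(self.spatial(pooled))

class ResidualBLSTM(nn.Module):
    """Runs a BLSTM along the width of each feature-map row and adds the
    output back to the input, giving the residual sequence enhancement."""
    def __init__(self, channels: int):
        super().__init__()
        # Bidirectional halves the hidden size so the output dim matches C.
        self.blstm = nn.LSTM(channels, channels // 2,
                             batch_first=True, bidirectional=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Treat each of the H rows as a length-W sequence of C-dim features.
        seq = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        out, _ = self.blstm(seq)                      # (B*H, W, C)
        out = out.reshape(b, h, w, c).permute(0, 3, 1, 2)
        return x + out                                # residual connection

if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)                 # dummy backbone feature map
    enhanced = ResidualBLSTM(64)(CBAM(64)(feat))
    print(enhanced.shape)                             # torch.Size([2, 64, 32, 32])
```

The residual add keeps the enhanced features the same shape as the backbone output, so the block can be dropped between existing stages without altering downstream detection heads.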