Infrared Small Object Segmentation (ISOS) must isolate small, faint objects from infrared images, a task made difficult by their limited texture detail and tiny spatial footprint. Existing deep learning methods have shown promise, but they often implicitly assume that the network can map small objects onto deep semantic features. In practice, this mapping may not be learned accurately: excessive downsampling can discard the high-level semantic representations essential for identifying small objects in infrared images. To address this issue, this study introduces DEAR, a novel learning paradigm designed to reinforce the deep-level features of infrared small objects from three perspectives. Specifically, the Multi-Scale Nested (MSN) module merges shallow object features with deep semantic features through iterative interactions. The Central Difference Convolution (CDC) module enhances the semantic contrast between small objects and their backgrounds, minimizing information loss during downsampling. Additionally, the Orthogonal Dynamic Fusion (ODF) module strengthens the representation of deep object features through self-supervised learning. Experimental evaluations on the NUAA-SIRST and NUDT-SIRST datasets show that DEAR significantly outperforms 16 state-of-the-art methods for ISOS, demonstrating its effectiveness. The code is publicly available at \href{https://github.com/nieyihe888/DEFER}{https://github.com/nieyihe888/DEFER}.
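The Central Difference Convolution named above builds on a standard operator that augments a vanilla convolution with a central-difference term so that responses emphasize local intensity contrast around each pixel. As a point of reference only, the minimal PyTorch sketch below illustrates that generic operator rather than the DEAR implementation; the class name, the default \texttt{theta}, and the layer sizes are illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Module):
    # Generic central difference convolution: theta blends the vanilla
    # response with the response to centered differences x(p) - x(center).
    # Algebraically this reduces to the vanilla output minus theta times
    # the response to the spatially summed (1x1) kernel; theta = 0
    # recovers a plain Conv2d.
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=padding, bias=False)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)                  # vanilla convolution term
        if self.theta == 0:
            return out
        k_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        center = F.conv2d(x, k_sum)         # w_sum * x(center) at each pixel
        return out - self.theta * center

# Usage: a single-channel infrared patch mapped to 16 feature maps.
layer = CentralDifferenceConv2d(1, 16)
y = layer(torch.randn(2, 1, 64, 64))        # shape: (2, 16, 64, 64)
\end{verbatim}

With \texttt{theta} close to 1, uniform background regions produce near-zero responses while isolated bright pixels stand out, which matches the contrast-enhancing behaviour the abstract attributes to the CDC module.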