While reward-shaping approaches are beneficial for solving sparse-reward tasks, their successful application usually does not account for environment size, which remains challenging in large-scale environments where the state space is very large. For example, in tasks where the agent must reach a specified goal state, a sufficiently large environment makes it difficult for the agent to reach the goal and obtain a reward, so it accumulates many uninformative episode experiences, resulting in longer training times and degraded performance. We introduce a simple and effective model-free approach that learns to shape a distance-to-goal reward for failed rollouts in tasks that require reaching a goal. Our approach introduces a dynamic range around the goal, with an associated reward function defined according to that range to guide the agent in the correct goal direction. This approach is much less affected by an oversized state space in large-scale environments and can solve sparse-reward tasks quickly and efficiently. In our experiments, we demonstrate that our approach successfully solves goal-directed tasks of different sizes, whereas other currently popular reinforcement learning algorithms, which rely on naive distance-shaped rewards for failed rollouts, are more strongly influenced by the size of the environment and perform worse in larger-scale environments.
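As a rough illustration of the idea only (not the paper's exact algorithm), the sketch below shapes the reward of a failed rollout using a hypothetical dynamic range around the goal: states inside the range receive a distance-based bonus, states outside it keep the sparse environment reward, and the range is adjusted over training. The names `shaped_reward`, `update_range`, `range_radius`, and the specific bonus and schedule forms are illustrative assumptions.

```python
import numpy as np


def shaped_reward(state, goal, range_radius, env_reward):
    """Hedged sketch: add a distance-to-goal bonus only when the agent is
    inside a dynamic range around the goal; otherwise return the original
    sparse reward. The exact form of the bonus is an assumption."""
    dist = np.linalg.norm(np.asarray(state, dtype=float) - np.asarray(goal, dtype=float))
    if dist <= range_radius:
        # Bonus grows as the agent gets closer to the goal within the range.
        bonus = 1.0 - dist / range_radius
        return env_reward + bonus
    return env_reward  # Outside the range: unchanged sparse reward.


def update_range(range_radius, success_rate, min_radius=1.0,
                 shrink=0.95, grow=1.05):
    """Illustrative schedule for the dynamic range (an assumption):
    shrink it when the agent succeeds often, expand it when failures
    dominate, so the shaped signal stays informative as the environment
    gets larger."""
    if success_rate > 0.5:
        return max(min_radius, range_radius * shrink)
    return range_radius * grow
```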