Deep Q Network (DQN) plays a crucial role in path planning for autonomous mobile robots, but the traditional DQN algorithm converges slowly and is prone to falling into local optima in path planning tasks. To address these issues, this paper proposes an improved DQN algorithm for the path planning of autonomous mobile robots. First, the reward function is improved based on the heading-angle error and the distance to the goal, and a DHD (distance-heading angle-direction) reward function is designed by incorporating the motion direction, which improves the performance of the algorithm and helps it avoid local optima. Second, a weight-sampling learning strategy is designed to raise the utilization rate of training samples and accelerate the algorithm's convergence. Finally, comparative simulation experiments verify that the improved DQN algorithm outperforms both the traditional DQN and the DQN with prioritized experience replay.
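To make the DHD idea concrete, the sketch below shows a minimal reward function that combines the three components named above: progress in distance toward the goal, heading-angle error, and motion direction. The function name, weights, and thresholds are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
import numpy as np

def dhd_reward(robot_pos, goal_pos, heading, prev_dist,
               step_cost=0.01, goal_radius=0.2, collision=False):
    """Illustrative DHD-style reward (hypothetical weights, not the paper's).

    Combines distance progress, heading-angle error toward the goal,
    and motion direction into a single scalar reward.
    """
    if collision:
        return -1.0                      # penalize hitting an obstacle
    dist = np.linalg.norm(goal_pos - robot_pos)
    if dist < goal_radius:
        return 1.0                       # goal reached
    # Heading-angle error between the robot's heading and the goal bearing,
    # wrapped to [0, pi].
    goal_bearing = np.arctan2(goal_pos[1] - robot_pos[1],
                              goal_pos[0] - robot_pos[0])
    angle_err = np.abs(np.arctan2(np.sin(goal_bearing - heading),
                                  np.cos(goal_bearing - heading)))
    # Direction term: positive when the last move reduced the goal distance,
    # negative when it increased it.
    direction_term = prev_dist - dist
    return 0.5 * direction_term - 0.1 * (angle_err / np.pi) - step_cost
```

Shaping the reward with both the direction term and the heading-angle term gives the agent a dense learning signal at every step, rather than only a sparse reward on reaching the goal, which is what allows it to escape the flat-reward regions that cause local optima in the plain DQN setup.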