Physics-Informed Neural Networks (PINNs) have become increasingly popular for solving the partial differential equations (PDEs) of nonlinear systems. Owing to its low computational cost, gradient descent coupled with the weighted-objectives method is commonly used to optimize the loss functions in PINN training. However, the interaction mechanisms between the gradients of the individual loss terms are not fully understood, which leads to poor optimization performance, such as low efficiency and non-convergence. To address this, an adaptive gradient descent algorithm (AGDA) is proposed and validated on analytical PDEs and the Navier-Stokes equations. First, the interaction mechanisms of the loss-function gradients in PINN training with the traditional Adam optimizer are analyzed, and the main factors responsible for the optimizer's poor performance are identified. Based on these interaction mechanisms, a new AGDA optimizer is developed for PINN training in two steps: (1) balancing the magnitude differences among the loss-function gradients; (2) eliminating conflicts between the gradient directions. Then, three types of PDEs (elliptic, hyperbolic, and parabolic) and three two-dimensional incompressible Navier-Stokes problems are selected to validate the proposed algorithm. The AGDA optimizer is found to achieve higher training accuracy with fewer iterations, improving the training efficiency by a factor of 5 to 6 for the analytical PDEs and by a factor of 1.4 for the multi-parameter Navier-Stokes equations. The AGDA optimizer is also more robust, outperforming the traditional Adam optimizer on the parabolic heat-conduction equation and the unsteady Navier-Stokes equations.
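
To make the two steps concrete, the sketch below shows one plausible way to implement them on flattened per-loss gradient vectors. It is only an illustration under stated assumptions, not the paper's actual update rule: the abstract does not give the formulas, so magnitude balancing is realized here as rescaling to the mean gradient norm, and direction-conflict elimination as a PCGrad-style projection; the function name `agda_combine` and the toy gradients are hypothetical.

```python
# Illustrative sketch of the two AGDA steps (assumptions: norm rescaling for
# step 1, PCGrad-style projection for step 2); the paper's exact rule may differ.
import numpy as np

def agda_combine(grads, eps=1e-12):
    """Combine per-loss gradient vectors into one descent direction.

    grads: list of 1-D numpy arrays, one flattened gradient per loss term.
    """
    # Step 1: balance magnitude differences by rescaling every gradient
    # to the mean norm, so no single loss term dominates the update.
    norms = [np.linalg.norm(g) + eps for g in grads]
    mean_norm = np.mean(norms)
    balanced = [g * (mean_norm / n) for g, n in zip(grads, norms)]

    # Step 2: eliminate direction conflicts. For each pair of gradients with
    # a negative inner product, project away the conflicting component.
    adjusted = [g.copy() for g in balanced]
    for i in range(len(adjusted)):
        for j in range(len(balanced)):
            if i == j:
                continue
            dot = adjusted[i] @ balanced[j]
            if dot < 0.0:  # gradients point in conflicting directions
                adjusted[i] -= dot / (balanced[j] @ balanced[j] + eps) * balanced[j]

    # Sum the balanced, de-conflicted gradients into a single update direction.
    return np.sum(adjusted, axis=0)

# Toy example: two gradients with very different scales and a direction conflict,
# standing in for a PDE-residual loss term and a boundary/data loss term.
g_pde = np.array([10.0, -2.0])
g_data = np.array([-0.1, 0.3])
update = agda_combine([g_pde, g_data])
print(update)  # balanced, conflict-free combined gradient
```

The combined direction would then replace the raw summed gradient inside an Adam-style update; under these assumptions, step 1 prevents the largest loss term from dominating, and step 2 guarantees the final direction does not increase any individual loss to first order.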