To meet the demand for massive connectivity in Internet-of-Vehicles (IoV) communications, non-orthogonal multiple access (NOMA) is employed in local wireless networks. In NOMA, power-domain multiplexing at the transmitter and successive interference cancellation at the receiver increase system capacity, and user grouping and power allocation are the two key issues that determine the performance gain. Various optimization methods have been proposed to find the optimal resource allocation, but they are limited by computational complexity. Recently, deep reinforcement learning (DRL) has been applied to the resource allocation problem. In a DRL network, experience replay is used to reduce the correlation between samples; however, uniform sampling ignores the importance of individual samples. Unlike conventional methods, this paper proposes a joint algorithm that combines a prioritized DQN for user grouping with a DDPG network for power allocation to maximize the sum rate of the NOMA system. At the user-grouping stage, a prioritized sampling method based on the temporal-difference error (TD-error) is proposed to overcome the drawbacks of random sampling: the TD-error represents the priority of a sample, and the DQN draws samples according to their priorities. In addition, a sum tree is used to store the priorities and speed up the search. At the power-allocation stage, because a DQN cannot handle continuous actions and would require quantizing power into discrete levels, a DDPG network performs the power allocation for each user. Simulation results show that the proposed algorithm with prioritized sampling learns faster and yields a more stable training process.
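The sum-tree idea mentioned above can be illustrated with a minimal sketch (not the authors' implementation; the capacity is assumed to be a power of two, and the priority of each transition would be set to |TD-error| plus a small constant):

```python
import random

class SumTree:
    """Binary sum tree for prioritized sampling: leaves hold sample
    priorities, each internal node stores the sum of its children, so
    both priority updates and proportional sampling take O(log n)."""

    def __init__(self, capacity):
        # capacity assumed to be a power of two for this sketch
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity)  # 1-based array layout, root at index 1
        self.write = 0                      # next leaf slot to overwrite

    def add(self, priority):
        """Store a new priority, e.g. |TD-error| + eps, overwriting the oldest."""
        self.update(self.capacity + self.write, priority)
        self.write = (self.write + 1) % self.capacity

    def update(self, idx, priority):
        """Set a leaf's priority and propagate the change up to the root."""
        change = priority - self.tree[idx]
        while idx >= 1:
            self.tree[idx] += change
            idx //= 2

    def total(self):
        return self.tree[1]  # root holds the total priority mass

    def sample(self):
        """Draw a leaf index with probability proportional to its priority."""
        mass = random.uniform(0.0, self.total())
        idx = 1
        while idx < self.capacity:  # descend until a leaf is reached
            left = 2 * idx
            if mass <= self.tree[left]:
                idx = left
            else:
                mass -= self.tree[left]
                idx = left + 1
        return idx - self.capacity  # position of the sampled transition
```

A replay buffer built on this structure keeps the stored transitions in a parallel array indexed by the value `sample()` returns, and refreshes each drawn sample's priority with its new TD-error via `update()` after the learning step.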
Compared with the previous DQN algorithm, the proposed method improves the system sum rate by 2% and reaches 94% and 93% of the sum rates achieved by the exhaustive search algorithm and the optimal iterative power optimization algorithm, respectively, while reducing computational complexity by 43% and 64% relative to those two algorithms.