A Multi-round Conversational Recommendation (MRCR) system assists users in finding the items they need within the fewest dialogue rounds by inquiring about desired features or making tailored recommendations. Numerous models employ single-agent Reinforcement Learning (RL) to accomplish MRCR and improve recommendation accuracy. However, they overlook the diversity of conversational recommendations and primarily focus on popular features or items. This impairs the fair visibility of items and results in an unbalanced user experience. We propose a diversity-enhanced conversational recommendation model (DECREC), built on our proposed multi-agent RL framework. Three agents collaboratively determine the actions at each round of MRCR, and each agent autonomously explores and learns a distinct facet of the task. Compared to a single agent, their collaboration fosters the exploration of a wider range of actions, improving diversity. Furthermore, we introduce a dynamic experience replay method that balances long-tail and head data, ensuring each learning batch includes long-tail samples and keeping the model attentive to these less common but important data. Moreover, we integrate feature entropy into the feature value estimation process during training to encourage the model to explore a broader spectrum of features, thereby indirectly enhancing the diversity of recommendation results. Extensive experiments on four public datasets demonstrate that DECREC reduces bias in MRCR and achieves the best recommendation diversity and accuracy. Our code is available at https://github.com/wzhwzhwzh0921/DECREC.
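The two diversity mechanisms mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: the batch composition ratio (`tail_ratio`), the entropy weight, and all function names here are assumptions for illustration only.

```python
import math
import random
from collections import Counter

def sample_balanced_batch(head_pool, tail_pool, batch_size, tail_ratio=0.3):
    """Dynamic experience replay sketch: reserve a fixed fraction of each
    batch for long-tail transitions (tail_ratio is an assumed hyperparameter),
    so long-tail samples appear in every learning batch."""
    n_tail = max(1, int(batch_size * tail_ratio))
    n_head = batch_size - n_tail
    return random.sample(tail_pool, n_tail) + random.sample(head_pool, n_head)

def feature_entropy(items, feature):
    """Shannon entropy of a feature's value distribution over candidate items."""
    counts = Counter(item[feature] for item in items)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_augmented_value(base_value, entropy, weight=0.5):
    """Add an entropy bonus to a feature's estimated value, nudging the agent
    toward features that split the candidates more evenly (weight is assumed)."""
    return base_value + weight * entropy
```

For example, a feature whose values are evenly spread across the candidate set receives a higher entropy bonus than one dominated by a single popular value, which is how the bonus indirectly promotes diverse recommendations.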