Customer service chatbots have become integral to the efficient operation of many businesses, offering scalable solutions for handling large volumes of customer interactions. However, ensuring that these chatbots generate accurate, contextually appropriate, and coherent responses remains a significant challenge, particularly as the complexity of customer queries increases. This research presents an in-depth comparison of finetuning strategies and evaluation metrics for optimizing chatbot performance, demonstrating that Domain-Adaptive Pretraining (DAPT) delivers superior accuracy, robustness, and relevance in customer service scenarios. A comprehensive experimental analysis across three distinct large language models reveals that while DAPT excels at producing high-quality, resilient responses, parameter-efficient finetuning methods offer a resource-efficient alternative suited to environments with limited computational capacity. These findings have critical implications for the development and deployment of customer service chatbots, emphasizing the need to select finetuning strategies aligned with specific operational requirements.