Natural language processing has seen substantial progress with the development of highly sophisticated models capable of understanding and generating human-like text. A persistent challenge, however, is improving the accuracy of these models on domain-specific knowledge, in particular avoiding hallucinations: outputs that are plausible but factually incorrect. The dynamic domain knowledge injection mechanism introduced in this research represents a significant advance, allowing continuous integration and prioritisation of specialised information and thereby improving the model's performance and reliability. Dynamically adjusting the hidden weights of GPT-Neo according to domain relevance and accuracy yielded higher precision, recall, and F1-scores, and reduced hallucination rates across diverse domains including cybersecurity, medical information, financial data, and legal documents. A comprehensive evaluation framework, comprising benchmark creation and performance metrics, validated the effectiveness of the approach, demonstrating that dynamic domain knowledge injection can substantially enhance the utility of large language models in specialised fields. The results highlight the potential of this method, offering a robust pathway towards more accurate and contextually aware language models. Detailed analysis and ablation studies further elucidate the contribution of each component of the modification process, providing insights into the optimisation and future applications of this approach.
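To make the mechanism concrete, the sketch below illustrates one way "adjusting the hidden weights of GPT-Neo based on domain relevance" could be realised: relevance-weighted fine-tuning of a small subset of transformer blocks on domain snippets. The checkpoint name, the relevance_score heuristic, and the inject routine are illustrative assumptions for this example, not the implementation used in the study.

```python
# Hypothetical sketch of relevance-weighted knowledge injection for GPT-Neo.
# Assumption: injection is approximated by scaling the language-modelling loss of
# each knowledge snippet by a domain-relevance score and updating only the upper
# transformer blocks; the paper's exact procedure may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-neo-125M"  # small checkpoint, chosen for the sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Freeze everything, then unfreeze the last two transformer blocks ("hidden weights").
for param in model.parameters():
    param.requires_grad = False
for block in model.transformer.h[-2:]:
    for param in block.parameters():
        param.requires_grad = True

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)

def relevance_score(snippet: str, domain_terms: set[str]) -> float:
    """Placeholder relevance estimate: fraction of domain terms found in the snippet."""
    tokens = set(snippet.lower().split())
    return len(tokens & domain_terms) / max(len(domain_terms), 1)

def inject(snippets: list[str], domain_terms: set[str]) -> None:
    """One pass of relevance-weighted language-model updates on domain snippets."""
    model.train()
    for text in snippets:
        weight = relevance_score(text, domain_terms)
        if weight == 0.0:
            continue  # skip snippets judged irrelevant to the target domain
        batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        outputs = model(input_ids=batch["input_ids"], labels=batch["input_ids"])
        (weight * outputs.loss).backward()  # down-weight low-relevance knowledge
        optimizer.step()
        optimizer.zero_grad()

# Example usage with a made-up cybersecurity snippet and term set.
inject(
    snippets=["A buffer overflow vulnerability was reported in the example parser."],
    domain_terms={"buffer", "overflow", "vulnerability", "exploit"},
)
```

In this reading, the relevance score plays the prioritisation role described above, while restricting updates to a few upper blocks keeps the injection lightweight and continuous rather than requiring full retraining.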