The increasing integration of large language models into a wide range of domains has heightened the demand for transparency and interpretability in their decision-making processes. Traditional explainability methods often fall short when applied to sophisticated models such as Google Gemini and OpenAI ChatGPT, which require novel approaches to unravel their internal logic. This paper presents a comprehensive evaluation of model explainability using soft counterfactual analysis, a method that systematically generates minimally altered input scenarios to probe the consistency, sensitivity, and attribution mechanisms of these models. Empirical results indicate that while both models maintain a high degree of consistency and respond to subtle input variations, ensuring transparency remains challenging in tasks involving creative reasoning and numerical precision. Through detailed attribution mapping, the study further illuminates how individual input features contribute to the models' outputs, revealing potential biases and guiding future improvements. The findings demonstrate the value of integrating explainability into the design and deployment of language models and advocate for the continued development of methods that enhance model transparency, user trust, and ethical AI use.
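To make the evaluation procedure concrete, the sketch below illustrates one way soft counterfactual analysis can be operationalised: generate minimally perturbed versions of a prompt, query the model on each, and score how stable the outputs remain. The `query_model` stub and the substitution-based perturbations are illustrative assumptions for this sketch, not the study's actual implementation or any provider's API.

```python
from difflib import SequenceMatcher
from typing import Callable, List


def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to Gemini or ChatGPT.

    A real evaluation would invoke the provider's API; this stub simply
    echoes the prompt so the sketch runs without credentials.
    """
    return f"Answer to: {prompt}"


def soft_counterfactuals(prompt: str) -> List[str]:
    """Generate minimally altered variants of the original prompt.

    The synonym substitutions below are assumptions for illustration;
    the paper's perturbation strategy (paraphrase, reordering, etc.)
    may differ.
    """
    substitutions = [("quickly", "rapidly"), ("car", "vehicle"), ("happy", "glad")]
    variants = []
    for old, new in substitutions:
        if old in prompt:
            variants.append(prompt.replace(old, new, 1))
    return variants


def consistency_score(model: Callable[[str], str], prompt: str) -> float:
    """Average lexical similarity between the baseline output and the
    outputs produced for each soft counterfactual of the prompt."""
    baseline = model(prompt)
    variants = soft_counterfactuals(prompt)
    if not variants:
        return 1.0
    sims = [SequenceMatcher(None, baseline, model(v)).ratio() for v in variants]
    return sum(sims) / len(sims)


if __name__ == "__main__":
    prompt = "Explain why the car stopped so quickly on the wet road."
    print(f"Consistency: {consistency_score(query_model, prompt):.2f}")
```

A score near 1.0 indicates outputs that are stable under minimal input changes (high consistency), while lower scores flag sensitivity that attribution mapping can then help localise to specific input features.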