Big Data has emerged as an asset for business practices and marketing strategies due to its data analysis capabilities. Reliable data is critically important for effective decision making: unreliable data leads to wrong decisions and, ultimately, to reputational damage, customer churn, and financial loss. Therefore, testing Big Data is vital to ensure data reliability. Defining test strategies and setting up a testing environment are complex and challenging tasks. In this research, we focus on data reliability, since a wrong decision based on unreliable data can have serious negative consequences. The proposed Big Data testing framework is validated via an exploratory case study in the telecommunication sector. A three-layer architecture for Big Data testing is used, comprising the Ingestion, Preparation, and Access layers. Faults are injected into the ingestion and processing of data files in the Hadoop framework and technology stack to verify that the framework detects them. The results show that the testing framework detects errors in all three layers. The framework effectively detects data faults in Big Data: 76.92% of the total failed test cases identify errors in the presentation layer. 48.72% of the test cases are developed to check data accuracy, and after executing these test cases 31.58% of the errors are detected. The highest proportion of detected errors relates to completeness: 66.67% of the errors injected for completeness are detected. These results show the effectiveness of the framework in identifying accuracy and completeness errors.
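
To make the fault-injection idea concrete, the following is a minimal, illustrative Python sketch of injecting accuracy and completeness faults into a data file before ingestion and then checking that simple validation rules flag them. The column names (e.g., msisdn, duration_sec), fault rates, and validation rules are assumptions chosen for illustration; they do not reproduce the paper's actual test suite, Hadoop setup, or reported detection percentages.

```python
import random

# Illustrative fault injection for a telecom-style record set (assumed schema).
FIELDS = ["msisdn", "call_start", "duration_sec", "cell_id"]

def inject_faults(rows, accuracy_rate=0.05, completeness_rate=0.05, seed=42):
    """Corrupt a fraction of rows; return (faulty_rows, injected_fault_types by row index)."""
    rng = random.Random(seed)
    injected, faulty = {}, []
    for i, row in enumerate(rows):
        row = dict(row)
        r = rng.random()
        if r < accuracy_rate:
            row["duration_sec"] = "-1"   # accuracy fault: invalid value
            injected[i] = "accuracy"
        elif r < accuracy_rate + completeness_rate:
            row["msisdn"] = ""           # completeness fault: missing mandatory value
            injected[i] = "completeness"
        faulty.append(row)
    return faulty, injected

def validate(rows):
    """Ingestion-layer style checks: mandatory fields present, durations non-negative."""
    detected = {}
    for i, row in enumerate(rows):
        if not row["msisdn"]:
            detected[i] = "completeness"
        elif int(row["duration_sec"]) < 0:
            detected[i] = "accuracy"
    return detected

if __name__ == "__main__":
    clean = [{"msisdn": f"92300{n:07d}", "call_start": "2021-01-01T00:00:00",
              "duration_sec": str(n % 300), "cell_id": f"C{n % 50}"} for n in range(1000)]
    faulty, injected = inject_faults(clean)
    detected = validate(faulty)
    for kind in ("accuracy", "completeness"):
        inj = sum(1 for v in injected.values() if v == kind)
        det = sum(1 for i, v in detected.items() if v == kind and injected.get(i) == kind)
        print(f"{kind}: injected={inj}, detected={det}")
```

In the study itself, the same principle is applied across the Ingestion, Preparation, and Access layers of the Hadoop-based stack, with the detection rates reported above measured per data-quality dimension.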