Big Data analytics, which involves storing, processing, and analysing large-scale datasets, has received tremendous attention in recent years. The advent of distributed computing frameworks such as Hadoop and Spark offers an efficient solution for analysing vast amounts of data. Owing to its application programming interface (API) and its performance, Spark has become very popular, even more popular than the MapReduce framework. Both frameworks expose more than 150 parameters, and the combination of these parameters has a huge impact on cluster performance. The default parameters allow system administrators to deploy their applications without much effort and to measure the performance of their specific cluster with factory-set values. However, an open question remains: can a different parameter selection improve cluster performance? In this regard, our study investigates the most influential parameters, such as input splits and shuffling, in order to compare the performance of Hadoop and Spark on a specific cluster implemented in our department. We tuned these parameters with a trial-and-error approach based on a large number of experiments. For the comparison and analysis, we selected two workloads: WordCount and TeraSort. Performance is evaluated using three metrics: execution time, throughput, and speedup. Our experimental results reveal that the performance of both systems depends heavily on the input data size and on correct parameter selection. The analysis shows, unsurprisingly, that Spark outperforms Hadoop, achieving up to 2x speedup on the WordCount workload and up to 14x on the TeraSort workload when the default parameters are replaced. Finally, we conclude that system performance depends on the choice among alternative parameter configurations, which in turn depends on the data size.
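To make the evaluation metrics concrete, the following minimal sketch computes throughput and speedup as defined above. The timing and data-size values are hypothetical placeholders, not the measurements reported in this study:

```python
# Illustrative metric definitions only; the numbers below are
# hypothetical placeholders, not this paper's experimental results.

def throughput(data_size_mb: float, exec_time_s: float) -> float:
    """Throughput as data processed per unit time (MB/s)."""
    return data_size_mb / exec_time_s

def speedup(t_default_s: float, t_tuned_s: float) -> float:
    """Speedup of a tuned configuration over the default configuration."""
    return t_default_s / t_tuned_s

# Hypothetical run: 10 GB input, default vs. tuned parameters.
data_mb = 10 * 1024
t_default, t_tuned = 840.0, 60.0  # seconds (placeholder values)

print(f"throughput (default): {throughput(data_mb, t_default):.1f} MB/s")
print(f"speedup from tuning:  {speedup(t_default, t_tuned):.1f}x")
```

With these placeholder timings, a tuned run finishing in 60 s against a default run of 840 s would correspond to a 14x speedup, the same ratio reported for TeraSort in the abstract.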