Unsupervised clustering analysis (CA) models are crucial in many machine learning tasks that aim to discover the structure of data and hidden patterns among samples. The performance of CA is affected by the overlapping issue, in which samples are similar to each other but do not belong to the same cluster. Such samples may arise as data grow over time, creating the need to redefine features, the distance matrix, centroids, or other parameters. Although recent CA models have achieved optimal solutions, the details of their parameters have been ignored owing to the “no-free-lunch” theorem, which raises questions about their reliability. This study conducts an in-depth investigation of the complex relationships among these issues, aiming to clarify the key problems associated with unsupervised models that may lead to inaccurate prediction of patterns, inappropriate partitioning, and false confidence in outcomes. We designed a novel experiment that produced over 271 million possible settings based on various models, equations, and procedures. The experimental results demonstrate that obtaining optimal parameters for optimal solutions is as challenging as finding a needle in a haystack. Finally, we formulated an open issue that challenges CA models, because our experiments raised doubts over the use of supervised information upfront to determine optimal solutions, which contradicts the very concept of CA.