Topic detection is the process of automatically identifying topics within text data. Performing this task manually is impractical for large-scale datasets because of its labor-intensive nature, so machine learning is needed for automated processing. A prevalent method in topic detection is clustering through Eigenspace-based Fuzzy C-Means (EFCM), using standard TFIDF as the text representation. However, TFIDF captures only the frequency of words and does not consider their semantics in the text. Bidirectional Encoder Representations from Transformers (BERT) is a pre-trained model that has learned representations of words and sentences, together with the underlying semantic relations between them, and has shown significant advantages as a text representation in many Natural Language Processing (NLP) tasks. This paper extends the EFCM model by using BERT instead of standard TFIDF as the text representation. We then apply TFIDF to each cluster (c-TFIDF) to generate the most frequent words as representations of the topics. Our simulations show that the BERT representation improves the topic coherence scores of the EFCM model for topic detection. When topic coherence is measured with TC-W2V, the BERT-based EFCM model scores better on two of the three datasets; with the CTC metric, it performs better on all three datasets.
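
To make the pipeline concrete, the following is a minimal sketch of the BERT-embedding, eigenspace fuzzy clustering, and c-TFIDF stages described above. It is not the paper's implementation: the sentence-transformers model name, the toy documents, the truncated-SVD eigenspace projection, all hyperparameters, and the simple per-cluster TFIDF weighting are illustrative assumptions.

```python
# Sketch of the BERT -> eigenspace fuzzy c-means -> c-TFIDF pipeline.
# Assumes the sentence-transformers, scikit-fuzzy, and scikit-learn packages;
# the model, data, and parameters below are illustrative, not the paper's.
import numpy as np
import skfuzzy as fuzz
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "The central bank raised interest rates again this quarter.",
    "Inflation pressure keeps monetary policy tight.",
    "The striker scored twice in the championship final.",
    "Fans celebrated the team's league title all night.",
]

# 1) BERT-based document embeddings (replacing the TFIDF representation).
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)

# 2) Eigenspace projection: truncated SVD keeps the leading eigen-directions.
reduced = TruncatedSVD(n_components=2, random_state=0).fit_transform(embeddings)

# 3) Fuzzy C-Means in the reduced space (skfuzzy expects features x samples).
n_topics = 2
cntr, u, *_ = fuzz.cluster.cmeans(
    reduced.T, c=n_topics, m=2.0, error=1e-5, maxiter=1000, seed=0
)
labels = u.argmax(axis=0)  # hard assignment = highest cluster membership

# 4) c-TFIDF: treat each cluster as one "document" and score its terms.
cluster_docs = [
    " ".join(d for d, l in zip(docs, labels) if l == k) for k in range(n_topics)
]
counts = CountVectorizer(stop_words="english").fit(cluster_docs)
tf = counts.transform(cluster_docs).toarray().astype(float)
tf = tf / np.maximum(tf.sum(axis=1, keepdims=True), 1)     # per-cluster term freq
idf = np.log(1 + tf.shape[0] / (1 + (tf > 0).sum(axis=0)))  # smoothed idf
ctfidf = tf * idf

# Top-scoring words per cluster serve as the topic representation.
terms = counts.get_feature_names_out()
for k in range(n_topics):
    top = ctfidf[k].argsort()[::-1][:3]
    print(f"topic {k}:", [terms[i] for i in top])
```

Run on the toy corpus above, the two clusters should separate the monetary-policy and football documents, and the c-TFIDF step surfaces the frequent words of each cluster as its topic words.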