Text classification is a central task in natural language processing. Despite significant progress in this field, several challenges persist, notably feature extraction from long, complex sentences and from sparse topic features. To address these issues, this paper proposes a text classification method that combines BERT and LDA (called BERT+LDA), focusing on effectively integrating the two into a new fusion method. The proposed fusion method begins by applying the LDA model to a document to assign a topic to each sentence, then iterates over each sentence to derive the probability of each word with respect to the sentence topic. The Word2Vec model then generates a vector for each word in a sentence, and these vectors are weighted by their corresponding word probabilities to form weighted LDA Word2Vec vectors. Finally, the weighted LDA Word2Vec vectors and the BERT vectors are fused for classification. Experimental results show that the generated weighted LDA Word2Vec vector effectively captures sentence semantics, and that the proposed fusion method combines the advantages of BERT semantic vectors and LDA topic vectors. Notably, the proposed method outperforms traditional fusion methods on short-text datasets.
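The weighting and fusion steps described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy topic probabilities, the 4-dimensional stand-in embeddings, and the concatenation-based fusion are all assumptions introduced for clarity.

```python
# Hypothetical sketch of the weighted LDA Word2Vec + BERT fusion.
# All names and values below are illustrative assumptions.
import numpy as np

# Assumed LDA output: probability of each word under the sentence's topic.
topic_word_probs = {"market": 0.30, "stock": 0.25, "rises": 0.10}

# Assumed Word2Vec embeddings (toy 4-dimensional vectors).
rng = np.random.default_rng(0)
word_vecs = {w: rng.normal(size=4) for w in topic_word_probs}

def weighted_lda_word2vec(words, probs, vecs):
    """Weight each word's Word2Vec vector by its LDA topic probability
    and average them into a sentence-level topic vector."""
    weighted = [probs[w] * vecs[w] for w in words if w in probs]
    return np.mean(weighted, axis=0)

sentence = ["market", "stock", "rises"]
lda_vec = weighted_lda_word2vec(sentence, topic_word_probs, word_vecs)

# Assumed BERT sentence embedding (toy 4-dim stand-in for a 768-dim vector).
bert_vec = rng.normal(size=4)

# One simple fusion strategy: concatenate the two representations
# before passing them to a downstream classifier.
fused = np.concatenate([bert_vec, lda_vec])
print(fused.shape)  # (8,)
```

Concatenation is only one possible fusion strategy; the point is that the topic-weighted vector and the BERT vector contribute complementary information to the classifier input.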