With the diversified development of network business types and the rapid growth of network resource deployment, traffic classification plays a crucial role in efficiently allocating network resources and effectively safeguarding network security. Traffic classification methods based on deep learning typically demand a large labeled dataset to achieve strong performance. Nevertheless, acquiring and annotating such a dataset requires a significant investment of time and human resources. Moreover, the constant innovation of internet technology accelerates changes in the network environment, so painstakingly collected samples risk becoming outdated. To tackle these challenges, this paper presents MTEFU, a multi-task learning algorithm based on deep learning models, which reduces the dependency on large numbers of labeled training samples. The algorithm establishes multiple classification tasks: duration length, bandwidth size, and business traffic category. The first two serve as source tasks and require no manually labeled training samples; the last acts as the target task and requires only a small labeled set. This strategy shares some parameters directly between tasks: the first few layers, which contain general information, are shared among the task networks, while different final layers are learned to handle the different outputs. We use CNN, SAE, GRU, and LSTM as multi-task learning classification models to conduct training, validation, and testing experiments on the QUIC dataset. We compared single-task learning and ensemble learning methods against our multi-task learning, and the results show that on the task of predicting network traffic types, the multi-task learning strategy with only 150 labeled samples can rival the 94.67% accuracy of single-task learning trained on the complete labeled dataset of 6,139 samples.
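The hard parameter sharing described above (shared early layers, separate task-specific final layers) can be sketched as follows. This is a minimal illustrative forward pass, not the paper's actual MTEFU implementation: all dimensions, the single shared layer, and the head names are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only (not from the paper).
N_FEATURES = 32    # flow features per sample
N_SHARED = 16      # units in the shared hidden layer
HEAD_SIZES = {
    "duration": 3,    # source task 1: duration-length classes
    "bandwidth": 3,   # source task 2: bandwidth-size classes
    "traffic": 5,     # target task: business traffic categories
}

# Shared trunk: the first layers hold general information and are
# reused by every task network (hard parameter sharing).
W_shared = rng.normal(size=(N_FEATURES, N_SHARED))

# Task-specific heads: each task learns its own final layer.
heads = {name: rng.normal(size=(N_SHARED, n)) for name, n in HEAD_SIZES.items()}

def forward(x, task):
    """Shared trunk followed by the requested task's own output layer."""
    h = np.maximum(x @ W_shared, 0.0)              # shared ReLU layer
    logits = h @ heads[task]                       # task-specific head
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)        # softmax probabilities

batch = rng.normal(size=(4, N_FEATURES))
probs = forward(batch, "traffic")
print(probs.shape)  # one probability row per sample, one column per class
```

In training, gradients from all three tasks would flow into `W_shared`, so the unlabeled source tasks (duration, bandwidth) shape the shared representation that the small-label target task then reuses.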