When surface and subsurface vessels move through the water, they emit sound from their propulsion engines as well as from the rotation of their propellers. One of the most effective approaches to underwater automatic target recognition (UATR) is to use deep learning to extract features from labeled acoustic datasets, such as those maintained by the world's naval forces, in a supervised fashion. In this article, to obtain reliable results with deep learning methods, we collected the raw acoustic signals received by hydrophones, together with the label of each class, into the relevant database, and applied the necessary pre-processing to render the signals stationary before passing them to the spectrogram stage. Next, the short-time Fourier transform (STFT) is used to obtain the spectrogram of the high-resonance components, which serves as the input to a modified MobileNet classifier for model training and evaluation. Simulation results in Python indicate that the proposed technique reaches a classification accuracy of 97.37% with a validation loss of less than 3%. The proposed model reduces complexity while achieving a good balance between classification accuracy and speed.
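The front end of the pipeline described above, converting a raw hydrophone signal into an STFT spectrogram suitable for a CNN classifier, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, window length, and the synthetic two-tone "propeller" signal are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.signal import stft

# Assumed sampling rate and a 1-second synthetic signal standing in for a
# raw hydrophone recording: a 300 Hz engine tone plus a 1200 Hz harmonic.
fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

# Short-time Fourier transform: a 512-sample Hann window with 75% overlap
# (both values are illustrative, not taken from the paper).
freqs, times, Z = stft(signal, fs=fs, nperseg=512, noverlap=384)

# Log-magnitude spectrogram, then min-max normalization to [0, 1] so the
# array can be treated as an image-like input for a CNN such as MobileNet.
spec_db = 20 * np.log10(np.abs(Z) + 1e-10)
spec_img = (spec_db - spec_db.min()) / (spec_db.max() - spec_db.min())

# The dominant frequency bin should sit near the 300 Hz engine tone.
dominant_hz = freqs[np.argmax(np.abs(Z).mean(axis=1))]
print(spec_img.shape, round(float(dominant_hz), 1))
```

In a real pipeline the normalized spectrogram would then be resized to the classifier's expected input resolution (224x224 for standard MobileNet variants) before training.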