Speaker disguise has become a common operation that poses a serious challenge to public security, making it important to verify the authenticity of speech. Most current research focuses on speech spoofing, in which an attacker imitates a target speaker to deceive state-of-the-art automatic speaker verification (ASV) systems by increasing their false acceptance rate. There is, however, another type of disguise, de-identification, which transforms a speech signal without imitating any target in order to increase the false rejection rate so that the speaker is not recognized; it has received far less attention. In this paper, we therefore investigate the de-identification model and propose a method to distinguish de-identified speech from genuine speech using a very deep densely connected convolutional network with 135 layers. Experimental results show that the proposed method outperforms reported state-of-the-art methods in average accuracy.