Deep neural networks (DNNs) have been shown to suffer from critical vulnerabilities under adversarial attacks. This phenomenon has stimulated the creation of attack and defense strategies similar to those adopted in cyberspace security. Because each strategy reacts to the other side's mechanisms, the algorithms on both sides form a closely reciprocating process in which the defense methods are particularly passive. Inspired by the dynamic defense approach proposed in cyberspace security to break this endless arms race, this paper defines the model order, network structure, and smoothing parameters as variable attributes of the ensemble and proposes a stochastic strategy that builds the ensemble model from heterogeneous and redundant member models. The proposed method introduces diversity and randomness into the deep neural network, breaking the fixed gradient correspondence between input and output. The unpredictability and diversity of the gradients prevent attackers from directly mounting white-box attacks, thereby addressing the extreme transferability and vulnerability of ensemble models under white-box attacks. Experimental comparisons of ASR-vs-distortion curves across different attack scenarios show that even an attacker with the strongest attack capability cannot easily exceed the attack success rate associated with the ensemble smoothed model, especially under untargeted attacks.
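To make the idea concrete, below is a minimal, hypothetical sketch (not the paper's actual implementation) of how an ensemble could re-sample its participating member models and its smoothing noise scale at every forward pass, so that the input-to-output gradient an attacker observes changes from query to query. The class name `StochasticEnsemble`, the `subset_size` and `sigma_range` parameters, and the PyTorch framing are illustrative assumptions.

```python
import random
import torch
import torch.nn as nn


class StochasticEnsemble(nn.Module):
    """Illustrative stochastic ensemble: each forward pass re-samples which
    heterogeneous member models participate and the Gaussian smoothing scale,
    so repeated queries on the same input yield different gradients."""

    def __init__(self, members, subset_size=3, sigma_range=(0.1, 0.5)):
        super().__init__()
        self.members = nn.ModuleList(members)   # heterogeneous, redundant models
        self.subset_size = subset_size          # how many members answer each query (assumed attribute)
        self.sigma_range = sigma_range          # range of the smoothing parameter (assumed attribute)

    def forward(self, x):
        # 1. Randomly pick a subset of member models (structural diversity).
        chosen = random.sample(list(self.members), k=self.subset_size)
        # 2. Randomly draw a smoothing scale and perturb the input (randomized smoothing).
        sigma = random.uniform(*self.sigma_range)
        x_noisy = x + sigma * torch.randn_like(x)
        # 3. Fuse the member predictions; the fused logits (and hence the gradient)
        #    differ across calls because the subset and sigma differ.
        logits = torch.stack([m(x_noisy) for m in chosen], dim=0)
        return logits.mean(dim=0)


if __name__ == "__main__":
    # Usage sketch: small linear models stand in for heterogeneous members.
    members = [nn.Sequential(nn.Flatten(), nn.Linear(784, 10)) for _ in range(5)]
    model = StochasticEnsemble(members, subset_size=3)
    x = torch.randn(4, 1, 28, 28)
    print(model(x).shape)  # repeated calls on the same x give different outputs
```

Under these assumptions, a white-box attacker querying the model twice with the same input receives gradients from different member subsets and noise draws, which is the unpredictability the abstract refers to.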