Adversarial examples have begun to receive widespread attention owing to their potential to damage even the most popular DNNs. They are crafted from original images by embedding carefully calculated perturbations. In some cases the perturbations are so slight that neither human eyes nor monitoring systems can easily notice them, and this imperceptibility gives such attacks greater concealment and destructive power. To investigate these hidden dangers in traffic-oriented DNN applications, we focus on imperceptible adversarial attacks against several traffic vision tasks, including traffic sign classification, lane detection, and street scene recognition. We propose a universal logits-map-based attack architecture against image semantic segmentation and design two targeted attack approaches on top of it. All the attack algorithms generate micro-noise adversarial examples by iterative gradient-descent optimization. Each achieves a 100% attack success rate with very low distortion: the mean MAE (Mean Absolute Error) of the perturbation noise for the traffic sign classifier attack is as low as 0.562, while the corresponding values for the two semantic-segmentation-based algorithms are only 1.574 and 1.503. We believe that our research on imperceptible adversarial attacks can serve as a substantial reference for the security of DNN applications.
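
The abstract does not include implementation details, so the following is only a rough illustrative sketch of the general idea it describes: iteratively perturbing an input by gradient descent until the model's prediction flips, then measuring the perturbation's MAE. The toy logistic-regression "classifier", its weights, the step size, and the loss are all hypothetical stand-ins, not the paper's actual networks or attack algorithms.

```python
import numpy as np

# Toy differentiable "classifier": logistic regression on a flattened image.
# Weights are arbitrary illustrative values, not a trained traffic-sign model.
rng = np.random.default_rng(0)
w = np.abs(rng.normal(size=64))   # positive weights so x0 starts in class 1
b = -5.0

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def grad_wrt_input(x):
    """Gradient of L = -log p(class 0) with respect to x.
    With z = x @ w + b and p = sigmoid(z), dL/dx = p * w."""
    return predict(x) * w

# Original flattened "image", classified as class 1 by construction.
x0 = np.full(64, 0.7)

# Iterative gradient-descent attack: take tiny steps that push the
# prediction toward class 0, clipping to the valid pixel range.
x = x0.copy()
step = 0.05
for _ in range(200):
    x = np.clip(x - step * grad_wrt_input(x), 0.0, 1.0)
    if predict(x) < 0.5:          # prediction flipped: attack succeeded
        break

noise = x - x0
mae = np.abs(noise).mean()        # same distortion metric the abstract reports
print(f"fooled: {predict(x) < 0.5}, perturbation MAE: {mae:.4f}")
```

Because each step follows the loss gradient with a small step size, the accumulated noise stays concentrated along the few directions the model is sensitive to, which is why such attacks can succeed while the average per-pixel change (the MAE) remains small.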