Decentralized federated learning is a distributed edge intelligence framework in which participants exchange parameter updates instead of training data to retrain or fine-tune deep learning models in a peer-to-peer manner. Given the diverse topologies of edge networks, the transmission delay of updates during model training is non-negligible for data-intensive intelligent applications at the edge, e.g., intelligent medical services and autonomous driving. To address this problem, we analyze the impact of delayed updates on decentralized federated learning and derive a theoretical bound under which these updates still allow the model to converge. In addition, we propose a novel delay-adaptive decentralized federated learning scheme (delay-adaptive-DFL) to enhance the cooperation strategy of edge computing devices. It relaxes the requirement that the model updates of edge devices be aggregated within a fixed time period. Within the theoretical bound on the update period, if the model parameters from a specific neighbor are not collected or updated in time, their latest available versions are reused so that aggregation can continue. We conduct experiments with four neural network models on a real-world testbed and demonstrate that delay-adaptive-DFL is more efficient than the latest baselines.
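
The following is a minimal, illustrative Python sketch of the stale-update reuse idea described above. All names (delay_adaptive_aggregate, neighbor_cache, max_staleness) are hypothetical, max_staleness stands in for the paper's theoretical bound on the update period, and uniform averaging is assumed in place of the paper's actual aggregation weights.

```python
import copy


def delay_adaptive_aggregate(local_params, neighbor_cache, fresh_updates, max_staleness):
    """Illustrative sketch of aggregation with stale-update reuse (not the paper's exact rule).

    local_params:   dict of this device's parameter tensors/arrays
    neighbor_cache: {neighbor_id: (params, staleness)} last received parameters per neighbor
    fresh_updates:  {neighbor_id: params} parameters that arrived in the current round
    max_staleness:  hypothetical stand-in for the theoretical bound on the update period
    """
    # Refresh the cache with updates that arrived in time for this round.
    for nid, params in fresh_updates.items():
        neighbor_cache[nid] = (copy.deepcopy(params), 0)

    # Collect usable parameter sets: a delayed neighbor's latest cached version is
    # reused as long as its staleness stays within the bound.
    usable = [local_params]
    for nid, (params, staleness) in neighbor_cache.items():
        if staleness <= max_staleness:
            usable.append(params)
        neighbor_cache[nid] = (params, staleness + 1)  # age every cached entry

    # Uniform averaging over local and usable neighbor parameters.
    n = len(usable)
    return {key: sum(p[key] for p in usable) / n for key in local_params}
```

In this sketch, each device keeps only the most recent copy of every neighbor's parameters together with a staleness counter, so a missed or delayed message never blocks the round; once a neighbor's cached copy exceeds the assumed bound, it is simply excluded until a fresh update arrives.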