Predictive maintenance (PdM) is an advanced technique for predicting the time to failure (TTF) of a system. PdM collects sensor data on a system's health, processes the information using data analytics, and then builds data-driven models that can forecast system failure. Deep neural networks are increasingly used as these data-driven models owing to their high predictive accuracy and efficiency. However, deep neural networks are often criticized as “black boxes”: their multi-layered, non-linear structure provides little insight into the underlying physics of the monitored system, and their predictions are neither transparent nor traceable. To address this issue, the layer-wise relevance propagation (LRP) technique is applied to analyze a long short-term memory (LSTM) recurrent neural network (RNN) model. The proposed method is demonstrated and validated in a bearing health monitoring study based on vibration data. The obtained LRP results provide insight into how the model “learns” from the input data and show how contribution/relevance to the neural network's classification is distributed over the input space. In addition, comparisons are made with gradient-based sensitivity analysis to show the power of LRP in interpreting RNN models. LRP is shown to have promising potential for interpreting deep neural network models and improving model accuracy and efficiency for PdM.
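To make the core idea of LRP concrete, the sketch below illustrates the standard LRP epsilon rule on a single dense layer: the relevance assigned to each output neuron is redistributed to the inputs in proportion to their contribution to the pre-activation. This is a generic, minimal illustration of the technique, not the paper's implementation; the function name `lrp_epsilon` and the toy dimensions are assumptions for demonstration only.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out to the layer inputs using the
    LRP epsilon rule (illustrative sketch, not the paper's code)."""
    z = a @ W + b                              # pre-activations of the layer
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon term stabilizes the division
    s = R_out / z                              # per-output relevance ratio
    return a * (s @ W.T)                       # input relevance: a_j * sum_k W_jk * s_k

# toy example: 3 inputs -> 2 outputs
rng = np.random.default_rng(0)
a = rng.random(3)            # input activations
W = rng.random((3, 2))       # layer weights
b = np.zeros(2)              # zero bias, so relevance is (approximately) conserved
R_out = np.array([1.0, 0.5]) # relevance arriving at the two outputs
R_in = lrp_epsilon(a, W, b, R_out)
```

With zero bias and a small epsilon, the total relevance is approximately conserved (`R_in.sum()` is close to `R_out.sum()`), which is the property that lets LRP trace a prediction back to a relevance distribution over the input space.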