Background subtraction is a crucial task in computer vision that involves segmenting video frames into foreground and background regions. While deep learning techniques have shown promise in this field, existing approaches typically rely on supervised learning and generalize poorly to unseen video data. Moreover, many of these methods are not suitable for real-time applications due to their offline or only partially online nature. This paper introduces ORGRU, an unsupervised, online, and robust deep learning-based framework for background subtraction. ORGRU utilizes a robust version of Gated Recurrent Units (GRUs) to simultaneously estimate and maintain the background model as the low-rank component while extracting the sparse component as the foreground in a fully online manner. The model is iteratively updated in real time with an unsupervised learning algorithm that uses only the current frame. To evaluate the effectiveness of the proposed approach, we conduct experiments on the LASIESTA dataset, a comprehensive, fully labeled change-detection dataset covering a wide range of background subtraction challenges. The experimental results provide both qualitative and quantitative assessments, demonstrating the robustness and superiority of the proposed approach compared to state-of-the-art methods.
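
To make the online low-rank/sparse idea concrete, the following is a minimal sketch, not the authors' ORGRU implementation: it assumes a standard PyTorch `GRUCell` maintains the background (low-rank) state, a soft-thresholding step produces the sparse foreground, and each incoming frame triggers a single unsupervised gradient update. The architecture, loss, and hyperparameters (`hidden_size`, `lam`, `lr`) are illustrative assumptions, and the paper's robust GRU formulation is not reproduced here.

```python
# Illustrative sketch only: online background/foreground split with a GRU-held
# background state. Not the paper's ORGRU architecture or robust loss.
import torch
import torch.nn as nn

class OnlineBackgroundGRU(nn.Module):
    """Keeps a background estimate of the scene in a recurrent hidden state."""
    def __init__(self, num_pixels, hidden_size=256):
        super().__init__()
        self.encoder = nn.Linear(num_pixels, hidden_size)
        self.cell = nn.GRUCell(hidden_size, hidden_size)   # recurrent background memory
        self.decoder = nn.Linear(hidden_size, num_pixels)  # reconstruct background frame

    def forward(self, frame, h):
        h = self.cell(self.encoder(frame), h)   # update background state from current frame
        background = self.decoder(h)            # low-rank-style background reconstruction
        return background, h

def process_stream(frames, num_pixels, hidden_size=256, lam=0.1, lr=1e-3):
    """Fully online loop: one unsupervised update per incoming frame (assumed shapes: (1, num_pixels), values in [0, 1])."""
    model = OnlineBackgroundGRU(num_pixels, hidden_size)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    h = torch.zeros(1, hidden_size)
    masks = []
    for frame in frames:
        background, h = model(frame, h)
        residual = frame - background
        # Soft-threshold the residual to obtain the sparse foreground (RPCA-style proximal step).
        foreground = torch.sign(residual) * torch.relu(residual.abs() - lam)
        # Unsupervised objective on the current frame only:
        # the background plus sparse foreground should explain the frame,
        # with an L1 penalty keeping the foreground sparse.
        loss = ((frame - background - foreground) ** 2).mean() + lam * foreground.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        h = h.detach()  # truncate backpropagation through time for online operation
        masks.append((foreground.abs() > lam).float().detach())  # binary foreground mask
    return masks
```

As in the framework described above, the loop never sees ground-truth labels or future frames; the background state and network weights are refreshed from the current frame alone, which is what allows fully online operation.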