Massive open online courses (MOOCs) have revolutionized education by democratizing access to knowledge and skills for learners worldwide, fostering an era of global learning opportunities (Soraya et al., 2019). This surge in online learning has presented a new challenge: helping learners navigate the vast landscape of courses and identify those that align with their unique interests and learning goals (Suresh & Srinivasan, 2020). Recommendation systems have emerged as a powerful solution to this challenge, employing sophisticated algorithms to suggest appropriate courses to students based on their past performance and preferences.
Traditional recommendation systems often rely on collaborative filtering (Li et al., 2020) or content-based filtering techniques (Choi et al., 2010). Collaborative filtering algorithms identify similar learners and recommend courses that those similar learners have enjoyed (Zheng et al., 2018). Content-based filtering algorithms recommend courses similar to those a learner has previously engaged with, based on the learner's interests and past enrollment history (Ghauth & Abdullah, 2009). While these techniques have proven effective in many recommendation scenarios, they have limitations in capturing the complex relationships between learners and courses in the context of MOOCs.
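The contrast between the two classical approaches can be made concrete with a small sketch. The following is our illustration, not code from any cited system: a user-based collaborative filtering scorer over a hypothetical binary learner-course enrollment matrix, where unseen courses are ranked by the similarity-weighted enrollments of other learners.

```python
# Illustrative sketch (hypothetical data): user-based collaborative filtering
# on a toy learner-course enrollment matrix.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Rows: learners; columns: courses (1 = enrolled).
enrollments = [
    [1, 1, 0, 0],  # learner 0
    [1, 1, 1, 0],  # learner 1 (overlaps heavily with learner 0)
    [0, 0, 1, 1],  # learner 2 (disjoint from learner 0)
]

def collaborative_scores(target, matrix):
    """Score courses the target learner has not taken, weighted by
    the target's similarity to each other learner who took them."""
    n_courses = len(matrix[0])
    scores = [0.0] * n_courses
    for other, row in enumerate(matrix):
        if other == target:
            continue
        sim = cosine(matrix[target], row)
        for c in range(n_courses):
            if matrix[target][c] == 0:  # only rank unseen courses
                scores[c] += sim * row[c]
    return scores

# Learner 0 is most similar to learner 1, so course 2 (taken by learner 1)
# outranks course 3 (taken only by the dissimilar learner 2).
scores = collaborative_scores(0, enrollments)
best = max(range(len(scores)), key=lambda c: scores[c])
```

A content-based scorer would instead compare course feature vectors (e.g., topic descriptors) against the learner's enrollment history; both approaches treat each interaction independently, which is exactly the limitation noted above.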
MOOCs provide rich graph-structured data that can be leveraged to develop more effective recommendation systems (Jiang et al., 2023). In addition to explicit learner preferences and course descriptions, MOOCs generate a wealth of implicit data (Tian & Liu, 2021), such as course enrollments, video interactions, and forum discussions. This implicit data encodes valuable information about learners’ interests, learning styles, and engagement patterns. Unfortunately, existing systems make little use of this hidden information to improve recommendations.
A second limitation is that traditional recommendation systems for MOOCs often fail to capture the sequential nature of learners’ course enrollments, leading to suboptimal recommendations (Khalid et al., 2021). These systems rely on content-based or collaborative filtering algorithms, which do not take the order of a learner’s course enrollments into account (Wang et al., 2017). As a result, they may recommend courses that are not aligned with learners’ current interests or learning goals. For example, a learner who has enrolled in a series of introductory programming courses is more likely to be interested in intermediate programming courses than a learner who has enrolled in a random assortment of courses from different disciplines. An LSTM attention mechanism can capture these sequential patterns by learning long-range dependencies in learners’ course enrollment history (Hu & Rangwala, 2019).
Graph Neural Networks (GNNs) have become a robust tool for modeling intricate relationships within graph-structured data, providing an effective means of leveraging the information embedded in such structures (Wang et al., 2019). GNNs can effectively capture the interactions between learners and courses, taking into account the rich network of relationships that exists among them (Chang et al., 2022). However, GNNs alone may not be sufficient to capture the sequential patterns of learners’ course enrollments. The long short-term memory (LSTM) attention mechanism helps resolve this issue (Van Houdt et al., 2020). LSTMs, a variant of recurrent neural networks (RNNs), are particularly adept at capturing patterns and dependencies in sequential data (Moghar & Hamiche, 2020). They can learn long-range dependencies in time series data, making them well suited to capturing the temporal dynamics of learners’ course enrollments. The LSTM attention mechanism (Shewalkar et al., 2019) learns long-range dependencies in learners’ course enrollment history, enabling the system to identify patterns in learners’ course preferences and provide more accurate and personalized recommendations.
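The attention step at the heart of such a mechanism can be sketched briefly. This is a minimal illustration under our own assumptions, not the paper's implementation: given per-step hidden states (assumed to come from an upstream LSTM over the enrollment sequence; here they are hypothetical toy vectors), dot-product attention weights each time step by its similarity to a query and pools the states into a single context vector.

```python
# Minimal sketch (our illustration): dot-product attention pooling over
# per-step hidden states, as an LSTM attention layer would apply over a
# learner's enrollment sequence. Hidden states here are toy values.
from math import exp

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, query):
    """Weight each time step by its dot-product similarity to the query,
    then return the weighted average (context vector) and the weights."""
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query)) for h in hidden_states]
    weights = softmax(scores)
    dim = len(query)
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return context, weights

# Three time steps of 2-d hidden states; querying with the final state gives
# the largest weights to the most recent, most similar enrollments.
states = [[0.1, 0.0], [0.5, 0.2], [0.9, 0.4]]
context, weights = attention_pool(states, states[-1])
```

In a full model the query would itself be learned, and the LSTM would produce the hidden states; the sketch only shows why attention lets the recommender emphasize the enrollments most relevant to the learner's current trajectory.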
In this article, we propose an Attention Graph Neural Network (AGNN) with a long short-term memory (LSTM) attention mechanism for course recommendation in MOOCs. The AGNN captures the interactions between learners and courses, while the LSTM attention mechanism models the sequential patterns of learners’ course enrollments. Specifically, (1) to discover the complex connections between learners and courses in MOOC graph-structured data, we represent learners and courses as nodes and learner behaviors and course associations as edges, so that a GNN can iteratively aggregate information from neighboring nodes to learn node representations; (2) to address both evolving user preferences and the recency of items, we first build a course sequence graph spanning all learners and learn course representations using a modified graph attention network with an LSTM attention mechanism. We then feed these representations into a GRU-based sequence encoder to capture short-term patterns, taking the final hidden state as the learned sequence-level learner embedding. Finally, we apply Bayesian Personalized Ranking (BPR) learning to recommend courses based on the learner and course representations produced by the attention-based GNN, together with the short-term course sequence representations. The proposed model outperforms traditional recommendation methods, demonstrating the effectiveness of incorporating graph-based and sequential information for course recommendation in MOOCs. Our contributions are as follows:
We propose a novel AGNN with an LSTM attention mechanism for a recommendation system in MOOCs.
We employ real-world MOOC datasets to benchmark the proposed model against traditional recommendation methods, demonstrating its superior performance.
We conduct an in-depth analysis of the role of the attention mechanism in the model’s recommendation performance.
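The final ranking step described above relies on the standard BPR objective. The following is a hypothetical sketch of that objective only, with toy embeddings; in the proposed model the learner and course embeddings would come from the attention GNN and the GRU sequence encoder.

```python
# Hypothetical sketch of the BPR (Bayesian Personalized Ranking) loss used in
# the final ranking step: given a learner embedding, an enrolled (positive)
# course, and a sampled non-enrolled (negative) course, BPR maximizes
# sigmoid(score_pos - score_neg). Embedding values here are toy numbers.
from math import exp, log

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def bpr_loss(user, pos_item, neg_item):
    """Negative log-likelihood that the positive course outranks the negative."""
    score_pos = sum(u * i for u, i in zip(user, pos_item))
    score_neg = sum(u * i for u, i in zip(user, neg_item))
    return -log(sigmoid(score_pos - score_neg))

user = [0.6, 0.8]   # learner embedding (toy)
pos = [0.7, 0.9]    # course the learner actually enrolled in
neg = [-0.3, 0.1]   # sampled course the learner has not taken
loss = bpr_loss(user, pos, neg)
```

Because BPR optimizes the pairwise order of courses rather than an absolute rating, it fits the implicit-feedback setting of MOOC enrollments, where only positive interactions are observed.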
The subsequent sections are organized as follows: Section 2 reviews relevant research on graph neural networks and long short-term memory (LSTM) attention mechanisms for recommendation systems. Section 3 constitutes the core of our research methodology, detailing the proposed AGNN framework. Section 4 presents a comprehensive performance evaluation of AGNN, encompassing the experimental setup, data description, model comparisons, and discussion. Section 5 concludes the study by summarizing the experimental findings and offering suggestions for future research.