Medical education and training should, at a minimum, engage learners, increase their knowledge, and enhance their decision making. Ideally, education and training lead to practice change and improved patient outcomes. The Oxygen Therapy and Critical Care series was well received by learners and panelists alike. Quality across all indicators was deemed high, and the peer-review process revealed both strengths and opportunities for improvement. Program partners have continued this work, launching an additional 10 learning hubs across Africa and Latin America, led and adapted by local facilitators and panelists. To date, this series has reached more than 8000 learners in 145 countries.
Despite much success, challenges for online learning remain. Learning platforms require computers and internet access, limiting learners in settings where internet service is low-bandwidth, inconsistent, or cost-prohibitive. While mobile applications offer partial solutions, those without computer access remain disadvantaged. Digitized resources and guidelines are too often tailored to clinicians working in high-resource settings, adding implementation difficulties in low-resource settings.15,16 Meanwhile, some HIC standards of critical care have actually been associated with worse outcomes in LMIC patient populations.17,18 Lack of faculty development and funding leaves educators without the technical skills or time to create online modules, especially when quality targets are ill-defined and digital scholarship is under-recognized. Without widespread adoption of quality indicators for online resources, learners are forced to trust whatever information is most accessible, a setup ripe for misinformation.
Quality indicators for journals do not neatly apply to online content and carry several inherent limitations. The Journal Impact Factor (JIF), calculated as the number of citations in a given year to articles a journal published in the preceding two years, divided by the number of citable articles published in those two years, fails to account for several biases. JIF is biased toward English-language publications and HICs. It assumes equal access to publication and equal access to journal subscriptions. It does not consider whether information is openly accessible, nor does it cover digitized resources. Further, JIF is not useful for prospectively ensuring quality in the design of online resources.
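For reference, the standard JIF calculation (paraphrasing Clarivate's published definition) can be written as:

\[
\mathrm{JIF}_{Y} = \frac{\text{citations in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{citable items published in years } Y-1 \text{ and } Y-2}
\]

For example, a journal whose 2021–2022 articles received 200 citations in 2023, and which published 100 citable items over 2021–2022, would have a 2023 JIF of 2.0. Nothing in this ratio captures accessibility, relevance to low-resource settings, or the quality of non-journal digitized resources.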
The Social Media Index (SMi) is one tool for assessing the quality of online learning resources.19 However, like the JIF, the SMi discounts quality by time in existence, potentially discrediting the value newer resources bring. Others have validated tools specific to their medical specialties, as with the Medical Education Website Quality Evaluation Tool for Pathology and the Modified Education in Otolaryngology Website assessment tool.20,21 Alternative metrics, or Altmetric scores, consider other forums such as tweets, blog mentions, or Google posts; however, these are generated post-publication, leaving barriers to publication itself intact.
Contributors to the LRC online content found the 13 quality indicators developed by Lin et al. easy to implement during content design and useful during peer review. This suggests that clear quality indicators can inform both prospective design and retrospective review. Additionally, LMIC respondents found these indicators appropriate for their settings. The quality indicators for relevance and accessibility carry even greater importance when learners face resource challenges.
For journal articles, peer-reviewers are typically selected by journal editors based on a reviewer’s reputation in a particular field. Reviewers are more likely to be selected if their professional network overlaps with that of the journal editors. LMIC experts are selected as peer-reviewers far less often than their HIC peers. Barriers to publication and to journal subscriptions are well documented, forcing many LMIC authors to publish in non-peer-reviewed “grey literature”.22,23,24 Given the paucity of journals originating in LMICs, decreased access to professional societies, and barriers to research and publication, it is difficult for LMIC providers to become peer-reviewers.
Close to eighty percent of the world’s population lives in LMICs.25 We argue it is imperative that LMIC providers be represented in peer review and publication. There is an unfortunate assumption that clinicians in HICs have greater ability or knowledge than their LMIC colleagues simply because of greater access to resources. There is also an assumption that increased spending on health care always improves patient outcomes. On some levels this is true; on many levels it is not. Access to resources does not necessarily make individuals better clinicians, but it can improve the system in which they work. Most HICs have better health outcomes than their LMIC neighbors; however, this is not always the case. The USA is an example of high spending failing to deliver optimal patient outcomes.26 In fact, HIC providers have much to learn from their LMIC colleagues. When resources are constrained, individuals are forced to employ creative problem solving, to innovate, and to conserve precious resources. LMIC partners must be included in education and publication to inform cultural perspectives, available resources, and appropriate interventions for the most common local diseases and injuries. Changing the narrative to focus on the unique perspectives and strengths of LMIC providers shifts attention to their effectiveness rather than the historical perception of LMIC provider dependence on HIC partners.
During this project, the AliEM criteria were assessed but were not used to exclude panelists as peer-reviewers. All panelists for this course who responded to the survey served as peer-reviewers for the online content. Although most panelists had been speakers at conferences and authors of peer-reviewed articles, the percentages were lower for LMIC experts. Unfortunately, these criteria again limit representation of experts from LMICs and bias toward HIC experts. Additionally, with the migration away from textbooks, one should ask how relevant authorship of textbooks or textbook chapters remains in an age of digitized learning. LMIC authors experience barriers to recognition as experts in their fields, a disservice to their patients and the public at large.27,28 The process of using webinar panelists as peer-reviewers is feasible, appropriate, and presents opportunities for experts from low- and middle-income countries.
Survey questions for the peer review of the LRC content were structured around Glassick’s six standards of scholarship. However, two standards, clear goals and reflective critique, were not included in the survey assessment. Clear goals were represented in each module’s Practice Change Objectives and were reviewed explicitly by panelists during webinar sessions. To further address clarity of goals and reflective critique, partners utilized an iterative process to survey panelists and learners throughout the series. Additionally, two frequently-asked-questions sessions were held to obtain real-time feedback. Nonetheless, novel information may have been gained by including additional survey questions.
The ideal goal of education and training is improved patient outcomes, which are inherently difficult to measure. Without evaluating outcomes after specific educational interventions, impact on learners’ practice is largely self-reported. Only fifty-six percent of survey respondents felt the blended-learning course was extremely likely to result in practice change improvements, with a large proportion selecting somewhat likely. Education and learning should not only build knowledge and recall but should also lead to informed decision making resulting in appropriate intervention. This blended-learning course received positive evaluations from learners and panelists alike, but patient outcomes were not measured. This is a limitation of online learning when not coupled to evaluation of real-life patient care. There is increasing interest in remote simulation, virtual and augmented reality, and online simulation games as future improvements.
Although specific to oxygen and critical care during COVID-19, this blended-learning course proved feasible and readily adaptable to other content areas. It did, however, require considerable resources. The LRC, an online learning platform, requires an annual subscription, which is cost-prohibitive for some institutions. Further, a significant time commitment was needed from faculty designing the online content. The value of the LRC lies in its interactive features and wide accessibility. The case-based module design facilitated ease of instruction for webinar panelists and allowed for downloadable facilitator manuals.
Facilitating live webinar sessions required additional investments in time and cost. This included the time and effort to prepare the lesson plan, as well as approximately 2–3 hours spent by panelists participating in each session. Panelists from LMIC settings were provided small stipends. Time and cost also had to be budgeted for administrative staff organizing webinars and managing invitations, registrations, and other logistics. Simultaneous interpretation, offered in multiple languages for most sessions, contributed significantly to overall costs.
Of note, advertising is not permitted on the LRC; however, some live links contain advertisements. Approximately 6% of reviewers were unable to distinguish between content hosted on the LRC and that of other sites, which may be a function of digital familiarity. Digital literacy was a shared challenge for all, including contributors, facilitators, and learners, but improved over time with increased usage. Internet interruptions were disruptive for learners and panelists alike, regardless of HIC or LMIC setting. There was initial concern about sustaining user interest with an online webinar format as compared with in-person training; however, partners were pleasantly surprised by robust interaction and engagement during sessions, with constant participation through the webinar chat and polls. Engagement carried through between sessions on an active WhatsApp group, creating a community of learners and mentors.
Online education during COVID was initially meant to circumvent barriers to in-person learning, but this model demonstrates its utility as a strategy to continue democratizing medical education and supporting workforce capacity building. This peer-review model of an online medical course designed for a global workforce has not been done before and sets a precedent for future programming. Online medical education and short courses are not intended to replace specialty training programs. However, open-access educational resources are invaluable to providers otherwise lacking formalized training, professional societies, and continuing education opportunities. This is especially true for LMIC providers working in isolated or remote locations. Providing remote mentorship improves provider connectedness and group problem solving. Meeting identifiable quality indicators decreases misinformation and provider confusion through provision of well-organized, high-yield, evidence-based practice recommendations. Including, and recognizing as peers, experts from varied practice settings, including LMICs, ensures relevance to localized patient populations.