4.1 Study Selection
Figure 1 PRISMA Screening flowchart for study selection.
The PRISMA screening flowchart for study selection is shown in Figure 1. Of the 15 studies included in this rapid review (Table 1), nine were from the US, two from the UK, and one each from Africa (Rwanda/Uganda), Ireland, China, and the Netherlands.
Table 1 Overview of Studies

| Ref | Reporting Period (approx., 2020) | Country | Number Enrolled | Patient Type |
| --- | --- | --- | --- | --- |
| 12 | NA | US | 83 | Low- to moderate-risk ED patients |
| 13 | March to May | US | 2652 | Confirmed/presumed COVID-19 |
| 14 | NA | Ireland | 26 | COVID-19 positive with pulmonary infiltrates, without current need for supplemental oxygen |
| 15 | April, May | US | 112 | Symptoms consistent with COVID-19 |
| 16 | NA | Rwanda/Uganda | NA | COVID-19 cases and contacts in home isolation |
| 17 | NA | US | 2000 | Patients at risk of developing a serious case of COVID-19 |
| 18 | January, February | China | 188 | Home-quarantined confirmed or suspected cases |
| 19 | April, May | Netherlands | 33 | COVID-19 patients with a clinically improving trend and oxygen therapy tapered down to a maximum of 3 L/min |
| 20 | April to June | US | 225 | Patients with COVID-19 upon hospital discharge |
| 21 | May | US | 50 | Low- and moderate-risk COVID-19 with oxygen saturation of <92% during the hospital stay |
| 22 | March to May | US | 2255 | Patients with COVID-19 symptoms |
| 23 | April to June | US | 924 | After testing positive for COVID-19 or after hospital discharge for COVID-19 |
| 24 | May to June | UK | 192 | Patients discharged from the ED with suspected COVID-19 |
| 25 | April to June | UK | 279 | Patients deemed likely to have COVID-19 pneumonia and discharged |
| 26 | March to May | US | 154 | Confirmed and suspected COVID-19 |
The studies, when reviewed by the lead author, suggested key common characteristics organized under four headings: context, technology, process, and metrics, each divided into four sub-headings as shown in Table 2.
| Context | Technology | Process | Metrics |
| --- | --- | --- | --- |
| Dates | Provider | Markers | Patients Enrolled |
| Rationale | Communications Platform | Data Input Frequency | Alerts/Escalations |
| Patients | Patient Equipment | Thresholds | Patient Acceptance |
| Medical Team | Training | Discharge | Patient Adherence |

Table 2 Headings and sub-headings for synthesis
The studies were then synthesized according to these headings and sub-headings with the results shown in Appendices 1 to 4.
4.2 Description Of Implementations
The meanings of these headings, along with summaries of reporting from the included studies, are given below.
4.2.1 Context (Appendix 1)
This section covers the individuals involved in the RPM implementation, both patients and health care provider staff, along with the rationale for patient enrollment and the dates on which monitoring occurred.
Dates [The specific time period for the reported RPM]
Four studies did not explicitly provide the dates covered [12, 14, 16-17]. All studies giving dates dealt with the early stages of the pandemic.
Rationale [Reasons for implementation of RPM]
Two rationales for RPM were evident from these published studies: RPM for COVID-19 positive patients discharged from hospital, enabling safe and early discharge with ongoing monitoring [12, 14, 19, 21, 23-24]; and RPM for suspected or confirmed COVID-19 patients in the community, ensuring only those patients who required hospital treatment were admitted [13, 15-18, 22, 25, 26]. Only one study incorporated both [23].
Patients [Patients included in RPM]
Patients were described in terms of being confirmed or presumed COVID-19 positive [13, 20, 23-24, 26], low to moderate risk [12, 21], at risk of developing serious COVID-19 [17], having symptoms of COVID-19 [15, 18, 22], or having COVID-19 pneumonia [14, 25]. The least ill patients were home-quarantined suspected cases [16, 18], whilst the most complex were patients who, after hospital discharge, still required oxygen therapy at home [19].
Remote Monitoring Medical team [Personnel involved in care of patients included in RPM]
Most medical teams included nurses and physicians [12-14, 17-19, 21-24], one study also included a psychologist [19] and another a neurosurgeon [15]. Medical trainees were also a feature, supervised by more senior staff [20, 23, 26]. One study noted that RPM allowed staff who were themselves quarantining to continue working from home by providing a service to remotely monitored patients [18].
4.2.2 Technology (Appendix 2)
Technology played a major role in most implementations of RPM. However, some implementations did not use specialized technologies, merely phoning patients daily and asking about symptoms. This section covers the provision of technological approaches to RPM, the major messaging mode for transfer of information between patient and provider, the monitoring equipment provided to the patient, along with any training or instruction given to the patient in how to use the equipment.
Provider [Supplier of technology to enable RPM]
One site described sufficient in-house expertise to develop its own system for monitoring [12]. Some used proprietary systems from suppliers of medical monitoring systems [14-15, 18-21, 23, 26]. Others adapted systems that were in use prior to COVID-19 [13, 22], with the remainder using non-RPM-specific systems such as the standard telephone system or packages [17, 24, 25] like Zoom or WeChat (a Chinese app comparable to WhatsApp). One African implementation used a text-based system from a Canadian non-profit company called WelTel [16].
Communications Platform [Type of communications used for patient to healthcare provider and vice versa]
One of the essential elements of any RPM system is the transfer of information from the patient to the provider at regular intervals. The method of transfer varies widely, with mobile technologies often, but not always, involved. Transfer may also be automated, without provider intervention, or may involve health care personnel. The simplest method is regular phone calls from care provider personnel to the patient, with information transferred by phone. Two studies used this method [24, 25]. A further study [21] used this method initially but, given the burden on staff, found it unsustainable and changed to a more automated system from a provider of RPM systems.
Two studies used standard text messaging. One [13] in the US sent a twice daily text message to initiate a text question and response exchange. Another from Africa [16] detailed a very similar system but with a daily semi-automated initial text.
Email was used by two systems: one sent a link to a form at the start of monitoring which the patient was "required" to update daily [12]; the other sent a link to a survey daily via email [17].
Use of mobile apps for data input and transfer was common, reported in eight RPM implementations [14-15,19-23,26].
Patient equipment [Medical and other devices used as part of RPM]
Of the 15 implementations, 11 provided patients with a pulse oximeter on enrollment to enable them to monitor their blood oxygen saturation (SpO2). Of these, four also provided a thermometer [12, 17, 20, 26], two provided an oximeter only to high-risk patients [15, 24], and the remainder [14, 19, 21-22, 25] provided an oximeter to all patients. Where equipment was not provided, three studies incorporated SpO2 and temperature measurements from patient-owned oximeters and thermometers [15, 23, 24]. Three studies did not provide any equipment [13, 16, 23], though one of these [23] used oximetry and temperature measurements where available.
Whilst some studies provided an oximeter and thermometer to high-risk patients, one study provided these to low-risk patients and provided high-risk patients with a cellular-enabled tablet telehealth system that monitored for blood pressure, heart rate, temperature, weight, and oxygen saturation [17].
Patient training [familiarization or training provided to patients specific to RPM]
Seven studies did not mention any patient training or familiarization process [13-14, 16-19, 26]. For those that did, training took one of two forms: 1) provision of teaching materials such as leaflets or online videos, and 2) personal outreach such as nurse contact. Four studies mention provision of teaching materials only [12, 22, 24-25], two used personal outreach only [21, 23], and two provided both [15, 20]. Only one report mentioned a technical support contact [15]. Whilst most training concerned technical information, such as how to use an oximeter or download an app, two studies [23-24] provided non-technical COVID-19 information relating to home isolation and infection control.
4.2.3 Process (Appendix 3)
Process describes various aspects of monitoring: what is monitored (the markers), how often it is monitored (input frequency), and what thresholds are applied for escalation. It also covers discharge from the monitoring program, typically after a default period without complications, though some studies gave more specific criteria.
Markers [physiologic and other indicators of health status monitored during RPM]
Patient data monitored, or markers, consist of physiologic data and self-reported symptoms.
Of the 10 studies that provided information on physiological markers, all included SpO2 [12, 14-15, 17, 19-21, 23, 25-26]. Heart rate was reported by four [12, 15, 17, 21], and temperature by six [15, 17, 19-20, 23, 26]. Respiratory rate was included by two [15, 21], but how it was measured was not noted.
All studies monitored symptoms, with five not indicating what these symptoms were [16-17, 22, 24, 25]. Of the remaining 10, all monitored for dyspnea; three each monitored for cough [18, 20, 23], diarrhea [15, 18, 23], and weakness [18, 20, 23]; two for chest pain [15, 18]; and two for vomiting [20, 23].
Data Input Frequency [how often marker information is sent to healthcare provider]
The frequency of data input by the patient varied across studies. Ten sites required once-daily input [12, 15, 16, 18-20, 22-24, 26]; three twice-daily [13, 17, 21]; and one four-times-daily input [14]. Most studies reported some type of prompt sent to the patient via the transfer method when input was due. However, two sites requiring once-daily input made no mention of prompts, simply stating that the patient was instructed [15] or required [18] to enter data once daily.
Thresholds for Escalation [levels of patient health status that initiate a cause for concern type alert to the healthcare provider]
Seven of the studies [13, 15, 18, 20, 22-23, 26] reported that a patient can initiate an escalation themselves at any time via the RPM system.
New or worsening symptoms were specified as a trigger for escalation by nine studies [12-13, 15, 17, 19-20, 22-23, 26].
For studies that monitored SpO2, a resting value of less than or equal to 94% [12, 14, 15, 25], less than 92% [20], or less than 90% [21], or a difference of more than 5% between resting and post-exertion levels [25], resulted in escalation.
One report gave a temperature of greater than 37.91 °C as a cause for escalation [20]. Heart rate (HR) threshold criteria for escalation, in beats per minute, were also reported: HR greater than 105 [12]; HR greater than 100 [15]; and HR greater than 115 at rest, greater than 125 at 20 seconds post exertion, or a pre- to post-exertion difference greater than 10 [21].
Escalation values for respiratory rate, in breaths per minute, were: greater than 20 [15]; and greater than 22 at rest, greater than 30 at 20 seconds post exertion, or a pre- to post-exertion difference greater than 8 [21].
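These thresholds amount to simple rule checks. As an illustrative sketch only, the resting and post-exertion criteria reported by one study [21] could be encoded as follows (function and parameter names are our own, not from any study):

```python
# Illustrative sketch of rule-based escalation, using the resting and
# 20-second post-exertion thresholds reported by one study [21].
# Function and parameter names are hypothetical, not from any study.

def needs_escalation(spo2_rest, hr_rest, hr_exert, rr_rest, rr_exert):
    """Return True if any threshold from [21] is breached."""
    if spo2_rest < 90:                      # SpO2 below 90% at rest
        return True
    if hr_rest > 115 or hr_exert > 125:     # heart rate limits (bpm)
        return True
    if hr_exert - hr_rest > 10:             # pre- to post-exertion HR rise
        return True
    if rr_rest > 22 or rr_exert > 30:       # respiratory rate (breaths/min)
        return True
    if rr_exert - rr_rest > 8:              # pre- to post-exertion RR rise
        return True
    return False

# All values within limits: no escalation
print(needs_escalation(95, 88, 96, 18, 24))   # False
# Resting SpO2 of 89% alone triggers escalation
print(needs_escalation(89, 88, 96, 18, 24))   # True
```

A real implementation would also handle missing readings and patient-specific overrides, neither of which is described in the studies.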
Discharge [Conditions under which a patient is discharged from RPM]
Fourteen days was the typical default timescale for leaving the monitoring programme. Of the studies that gave specific clinical criteria for discharge, two required oxygen saturation levels to be greater than 96% for 3 days [12, 21], with one requiring a normal oxygen saturation level for 3 days [25]. The normal level was not specified by the studies but is usually considered to be 95% to 100% [27]. One of these [12] also required a heart rate of less than 100 bpm and a temperature of less than 37.96 °C. One study allowed patients to optionally extend the monitoring period from 14 days to 21 days [20].
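The discharge criteria of one study [12] can likewise be expressed as a small check. This is a minimal sketch under the assumption that "greater than 96% for 3 days" means the three most recent daily readings; names are hypothetical:

```python
# Illustrative sketch of the discharge criteria reported by one study [12]:
# SpO2 > 96% for 3 days, heart rate < 100 bpm, temperature < 37.96 °C.
# Assumes the 3-day requirement means the last three daily readings;
# function and parameter names are hypothetical.

def eligible_for_discharge(daily_spo2, heart_rate, temp_c):
    last_three = daily_spo2[-3:]
    return (len(last_three) == 3
            and all(s > 96 for s in last_three)
            and heart_rate < 100
            and temp_c < 37.96)

print(eligible_for_discharge([95, 97, 98, 97], 82, 36.8))  # True
print(eligible_for_discharge([97, 98, 96], 82, 36.8))      # False (96 not > 96)
```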
4.2.4 Metrics (Appendix 4)
Numbers can allow us to appreciate the scale of monitoring more fully for each report and provide comparative information on how many patients required escalation. In this section, we provide the number of patients enrolled in each implementation and the numbers of those who were escalated. Also included here are metrics regarding patient acceptance of RPM and adherence to the data inputting requirements.
RPM Enrollment [Number of patients enrolled in the RPM implementation]
All but one study reported the number of patients enrolled in RPM. Four involved between 1,900 and 3,000 patients [13, 17, 22, 23]; six involved between 112 and 279 patients [15, 18, 20, 24, 25, 26]; and four involved between 26 and 83 patients [12, 14, 19, 21].
Escalation [Numbers of patients in RPM program requiring intervention beyond baseline care]
Escalation ranged from a short interaction via phone or video with a health care worker to admission or readmission to hospital. Typically, only a minority of patients were escalated. For one large programme with over 2000 patients enrolled, 83% were managed without escalating to human care [13]. Another, with just under 1000 enrolled, reported that about 10% of patients presented with symptoms requiring escalation to a virtual provider, and 2% required admission to hospital [23]. In one of the smaller implementations [14], the 26 enrolled patients generated 51 alerts, which in turn generated 5 reassessments leading to the readmission of 4 patients. A study with 83 participants [12] reported that 60 patients triggered an automated flag at least once, 39 were escalated to a telehealth consult, and 17 were referred to the ED.
Patient acceptance of RPM [How well the RPM program was accepted by patients]
Four studies reported patient feedback on RPM, and all reported high acceptance. One reported a high net promoter score of 80 [13]. The net promoter score is a single metric that quantifies the response to a single direct survey question: "How likely are you to recommend this service?" [28]. In one study, 91% of patients provided feedback via a satisfaction questionnaire based on the Consumer Quality Index in General Practice [19], with 97% of those finding the system user friendly. In another [21], 46% provided feedback, with 94% saying they would recommend the system to a friend. In a further study, 66% provided feedback, with 99.5% being likely or very likely to recommend it to a friend [25].
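The net promoter score cited above is straightforward to compute. This minimal sketch uses the standard NPS convention (0-10 ratings, promoters 9-10, detractors 0-6) [28]; the data and names are hypothetical, not from any study:

```python
# Minimal sketch of a net promoter score (NPS) calculation. Respondents
# answer "How likely are you to recommend this service?" on a 0-10 scale;
# NPS = % promoters (9-10) minus % detractors (0-6); passives (7-8) ignored.

def net_promoter_score(ratings):
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return round(100 * (promoters - detractors) / len(ratings))

# 6 promoters, 1 passive, 1 detractor out of 8 responses
print(net_promoter_score([10, 9, 9, 10, 8, 10, 9, 3]))  # 62
```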
Patient Adherence [Compliance with patient requirements to provide health status data to healthcare provider]
Patients were requested, and usually prompted, to input their data to the system on a regular basis. How well they complied, or patient adherence, was indicated to varying extents by eight of the studies, and varied. One study involving hospital discharge [14], which prompted patients four times daily for input, showed a median daily input of 3.9 for those who did not require readmission and 5.7 for those readmitted, indicating high adherence. Another study involving hospital discharge requiring daily input indicated that patients were monitored for an average of 21.8 days and completed an average of 14.5 daily survey responses, suggesting somewhat lower adherence [12]. A study sending twice-daily check-in prompts saw a 59.7% response to both, 27.5% to one, and 12.8% to neither [13].
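Adherence figures like the twice-daily response breakdown in [13] amount to counting answered prompts per patient-day. A minimal sketch with hypothetical data:

```python
# Illustrative adherence breakdown in the style of [13]: for a twice-daily
# check-in, classify each patient-day by the number of prompts answered
# (2 = both, 1 = one, 0 = neither). Data and names are hypothetical.

from collections import Counter

def adherence_breakdown(answered_per_day):
    """answered_per_day: one entry (0, 1, or 2) per patient-day.
    Returns the percentage of patient-days in each category."""
    counts = Counter(answered_per_day)
    n = len(answered_per_day)
    return {k: round(100 * counts.get(k, 0) / n, 1) for k in (2, 1, 0)}

print(adherence_breakdown([2, 2, 1, 2, 0, 2, 1, 2]))
# {2: 62.5, 1: 25.0, 0: 12.5}
```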
4.3 Reporting Consistency
The types of information reported varied across studies. Four did not provide any date information [12, 14, 16, 17]; seven did not provide details of patient training [13-14, 16-19, 26]; one did not provide information on markers or input frequency [24]; two did not provide details of escalation thresholds [16, 18]; four did not indicate discharge conditions [15, 17-19]; one did not indicate the number of patients enrolled [16]; two did not indicate the number of escalations [16, 17]; eleven gave no indication of patient acceptability [12, 14-18, 20, 22-24, 26]; and seven gave no indication of patient adherence [9, 16-18, 24-26]. None mentioned staff training or staff acceptance. This is not a criticism of any study but strongly indicates a need for greater consistency in the reporting of RPM implementations to support learning and meaningful comparison.
4.4 Framework
Based on the headings derived from the synthesis of the studies, with the addition of health staff acceptance and training to mirror the patient considerations, we propose a framework of criteria for reporting of RPM for COVID-19 patients, shown in Table 3.
| Criteria | Item no | Notes |
| --- | --- | --- |
| Dates | 1 | Clearly state the implementation dates covered by the study. |
| Rationale | 2 | State the specific purpose and context of the implementation, e.g., step-down care, preserving bed capacity, increasing personnel efficiency, preventing iatrogenic outcomes. |
| Patients | 3 | State the type of patient catered for by the implementation: illness severity, suspected or confirmed cases, etc. |
| Medical team | 4 | Detail the range of medical personnel involved in the implementation: roles, medical specialties, seniority, dedicated or shared, etc. |
| Technology provider | 5 | Provide details of the technology provider: in-house build, adoption of a system already in place, or commercial provider, along with name and head-office location. |
| Communication mode | 6 | Detail the provider-patient mode of communication, e.g., text, web form, phone call, smartphone app. |
| Patient equipment | 7 | Describe the type and make of patient equipment utilized and detail how it is provided, e.g., user-provided or provided by the healthcare system, and how it is delivered and returned. |
| Patient training | 8 | Describe what patient training is provided regarding equipment, procedures, and self-care, and how this training is conducted. |
| Staff training | 9 | Describe what staff training is provided regarding equipment, procedures, and communication with patients, and how this training is conducted. |
| Markers | 10 | Detail what is monitored and how, specifically physiologic markers and self-reported symptoms. |
| Data Input Frequency | 11 | Detail how often patient data is input to the RPM system, whether automatically or manually by the patient, and describe the frequency and mode of any prompts for input provided to the patient by the system. Outline the procedure followed when input is not received. |
| Thresholds for Escalation | 12 | Clearly present the escalation procedure. Detail the marker thresholds that trigger a cause-for-concern alert and how such an alert is sent and received. Detail the number and types of escalation stages, e.g., initial screening by a paramedic followed, if deemed apt, by further escalation to a physician. |
| Discharge | 13 | State the conditions under which a patient is discharged from RPM, e.g., after a certain duration, marker thresholds, clinical judgement. |
| Metric: RPM Enrollment | 14 | Clearly state the number of patients monitored over the course of the reported implementation, along with subgroup breakdowns. |
| Metric: Escalation | 15 | Provide numbers for cause-for-concern alerts and escalations by level. |
| Metric: Patient acceptance of RPM | 16 | Report on patient acceptance using the net promoter score or another suitable instrument. |
| Metric: Staff acceptance of RPM | 17 | Provide some indication of staff acceptance using a suitable scale. |
| Metric: Patient Adherence | 18 | Report on the frequency of patient data input compared to the prescribed frequency. |

Table 3 Framework for planning and reporting on RPM implementations for COVID-19 patients.