We conducted an individual-level factorial multiple crossover randomised controlled trial in Mwanza, Tanzania to estimate the effects of questioning frequency, question load, and incentivisation on response rates to an SMS survey on under-five diarrhoea in urban informal settlements. The study also included analyses of the effects of demographics on the response rate; the effects of questioning frequency, question load, and incentivisation on the reported diarrhoea rate; and a qualitative examination of attitudes towards text message surveys. The principal findings of the study are that SMS messaging can be a suitable means of disease surveillance in LMICs, with response rates of around 50% in our study, but that results can be affected by the methodologies used: financial incentivisation is associated with an increase in the response rate, an increased question load is associated with a decrease in the response rate, and frequent questioning with short recall periods is associated with a decrease in the reported diarrhoea rate.
The Impact of Treatments on the Response Rate
The complete response rate over all eight rounds was 47% – a proportion higher than reported in previous similar studies: 15% in Liberia during the Ebola outbreak, and 31% in Ghana during a demographic and health survey [7, 8]. Reasons for the higher response rate include differences in study site, health topic, and recruitment (our study recruited consenting participants, whereas the aforementioned studies randomly messaged unconsulted participants). Evidence from our study additionally indicates that daily questioning (24-hour recall) had a similar response rate to fortnightly questioning (14-day recall), suggesting that respondent fatigue did not set in after 14 days of daily questioning, as has been suggested in past studies [11]. Further supporting the absence of fatigue, the response rate increased over time.
There did appear to be a lower response rate for the 3-question survey (when compared to the 1-question survey), suggesting that the 3-question survey was burdensome for respondents. This is in line with past studies, including Bhavnani and colleagues (2014), who suggested that fatigue might occur if participation required a high amount of effort [11]. It is further supported by our finding that combining daily questioning with the 3-question survey lowered the response rate still further. This reduction in response rate may have been even more apparent had participants been given all three questions regardless of their response, rather than only when they reported diarrhoea.
The incentive did yield a statistically significant increase in the mean response rate. While inconsistent with the qualitative finding that incentivisation did not factor into participants’ decisions whether or not to take part in the survey, this is consistent with Hopkins and Gullickson’s (1992) meta-analysis of the impact of financial incentivisation on survey response, which found that financial incentives increased response rates by 19% when given with the survey (prior to completion) and by 7% when given after the survey. The latter figure is similar to the 6·4ppd increase in response rate observed in our survey through provision of an incentive after survey completion [12].
The Impact of Demographics on the Response Rate
Those with higher education were more likely to respond: those who had progressed beyond the primary stage of education responded at a rate 11·7ppd higher than those with primary education or below. This finding was also seen in L’Engle and colleagues’ (2018) study of demographic and health surveys in Ghana [7]. L’Engle and colleagues surveyed a nationally representative sample using an eighteen-question demographic and health survey to determine response rates to a mobile phone survey [7]. Their study used pre-recorded voice messages to which participants responded by inputting a number on their dial pad [7]. Comparing the results of the mobile phone survey to two similar nationwide surveys that used face-to-face interviewing, the study estimated that populations with no education answered the mobile phone survey at a rate 5 to 18 ppd lower than a face-to-face survey, while populations with secondary education or above answered the mobile phone survey at a rate 27 to 29 ppd higher than a face-to-face survey [7]. L’Engle and colleagues conclude that while mobile phone surveys are a promising tool for data collection, differential response rates across demographics could introduce bias if adjustments were not made.
The Impact of Treatments on the Reported Diarrhoea Rate
Diarrhoea was reported in 36% of complete child-rounds – yielding an incidence of 9 episodes per child-year. While this number is slightly higher than previously reported for urban East Africa, the difference can be explained by the fact that all participants in the previous studies were presented with a 14-day recall period [13]. When restricting the analysis to the 14-day recall period, we estimated an incidence of 6 episodes per child-year, in line with previous studies. This is considerably lower than the estimated incidence of 13 episodes per child-year for 24-hour recall. The higher diarrhoea rate estimated for the daily survey with 24-hour recall, when compared to the fortnightly survey with 14-day recall, provides support for recall bias, whereby respondents forget events that occurred over longer periods [14, 15]. Feikin and colleagues (2010) report prevalence dropping from 18% for 24-48-hour recall to around 5% for 11-13-day recall [14, 15]. Zafar and colleagues (2010) report that severe diarrhoea is twice as likely to be reported as moderate diarrhoea during longer recall periods [14, 15].
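The headline incidence figure can be recovered with a back-of-envelope conversion. This is a sketch under two assumptions not stated explicitly above: that each positive child-round corresponds to one episode, and that a child-round spans a fortnight (roughly 26·1 rounds per child-year):

```latex
% Hypothetical conversion: proportion of positive child-rounds (p)
% to episodes per child-year, assuming one episode per positive
% fortnightly round.
\[
  \text{incidence} \;\approx\; p \times \frac{365}{14}
  \;=\; 0.36 \times 26.1 \;\approx\; 9.4 \ \text{episodes per child-year}
\]
```

Applying the same conversion to the 14-day-recall arm alone (p ≈ 0·23) gives roughly 6 episodes per child-year, consistent with the restricted analysis reported above.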
Incentive and survey type did not influence reported diarrhoea rates. Interestingly, however, reported diarrhoea rates decreased markedly over subsequent rounds. We hypothesise three (non-exclusive) reasons for this. First, the survey may have created a heightened awareness of diarrhoea risk and child health (as reported in the qualitative work), resulting in better WASH practices; second, respondents may have been embarrassed to repeatedly report diarrhoea [16]; third, respondents may have telescoped answers at the beginning of the survey (recalling events from a period longer than the stated recall period).
Strengths and Weaknesses
There are two substantial weaknesses in this study. 1) As the questioning-frequency treatment varied both frequency and recall period (fortnightly questioning asked about the past fortnight; daily questioning asked about the past day), it is not possible to determine whether the differences associated with this treatment were due to the frequency of questioning or the period of recall. 2) The study was unable to ascertain the impact of perception bias, or whether participants truly understood what defines a case of diarrhoea. For example, in a previous study, Voskuijl and colleagues (2017) found that parents of infants with severe acute malnutrition in a Malawian hospital were able to identify only 75% of loose or watery stools as such (loose or watery stools being identified by observation by a health care provider) [17].
This study has several strengths. It took place in an urban East African city with a culture and geography fairly representative of other urban East African areas, so we believe the results are generalisable to similar settings. Further, the study data provide results that are not only consistent throughout the study but also build on past literature. Bhavnani and colleagues (2014) discussed the possibility of respondent fatigue through frequent, in-depth questioning, for which we provide evidence [11]. Hopkins and Gullickson (1992) found that incentivisation is associated with increased response rates, which we also find, but in the novel form of an SMS survey [12]. Similarly, Feikin and colleagues (2010) and Zafar and colleagues (2010) found evidence of recall bias during in-person surveys for diarrhoea, which we also see in our novel SMS survey [14, 15]. Finally, L’Engle and colleagues (2018) found a substantial association between demographics, such as education, and response rate in their mobile phone survey, but, owing to their use of uninformed participants, had a low response rate [7]. Our use of informed participants resulted in a higher response rate.