Mobile phone-based interviews appeared to provide reasonably accurate data for many of the implementation indicators of iCCM and FP. Reports of training, supervision, and most medication and contraceptive availability on the day of assessment were highly sensitive (sensitivity > 80%). However, specificity was low for most indicators. Thus, the validity of phone-based surveys is limited for more specific information, such as reports of supervision that included observation of case management, availability and long-term stock-outs of specific products, and completeness of information in registers.
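For reference, the validity metrics cited throughout this discussion follow their standard definitions, with the in-person interview treated as the gold standard; the symbols below (TP, FN, TN, FP for true positives, false negatives, true negatives, and false positives across paired phone/in-person responses) are illustrative notation rather than terms quoted from the study's methods:

\[
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP}
\]

Under these definitions, high sensitivity with low specificity means that items confirmed in person were almost always also reported by phone, while phone interviews tended to over-report items that the in-person assessment found to be absent.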
This study showed that ASCs can be contacted for mobile phone surveys in rural areas even during the rainy season, but in-person access was more difficult in remote areas of Mali; only 73.2% of ASCs interviewed by phone were successfully reached for an in-person interview. The proportions of ASCs interviewed by both approaches were considerably lower in Koulikoro and Nara than in the other districts; fewer than 40% of ASCs in these districts could be reached in the field. In Nara, the most remote of the six districts, provider sites were spread across rural areas, which impeded in-field interviews, and regional instability and terrorist violence made it unsafe for interviewers to travel to the district. ASCs in Koulikoro, the district most affected by the rainy season, were often inaccessible for this reason. The rainy season also affected the interval between the phone-based and in-person surveys. This could have influenced the indicators of supply on the day of assessment and week-long stock-outs, even though data collectors were trained to use stock records to assess stocks retrospectively. At the same time, our study was deliberately weighted toward remote and hard-to-reach areas, so our results also suggest that people living in the areas our data collectors failed to reach probably have difficulty accessing health services. It is essential to take these hard-to-reach areas into account when prioritizing equity in health care provision.
The observed differences in responses between the phone and in-person interviews may have many causes. A previous study exploring the validity of phone-based interviews reported that respondents did not clearly understand the questions asked over the phone, especially the exact definitions of certain documents or technical terms, and that this confusion was more likely among ASCs with less experience [11]. Because of the relatively small analytic sample size, subgroup analyses of differences between groups were underpowered. Additionally, as reported in a previous study conducted in Malawi, data collectors may have made errors in reporting or recording data during the phone interviews [10]. This could explain the poorer sensitivity and specificity calculated for detailed data on stock-outs and register completeness. To improve the overall validity of mobile phone-based survey instruments, evaluators could conduct a pilot study in the relevant cultural and geographic setting before the formal phone-based evaluation, so that pilot results can inform better wording of the instruments.
This study was the first to test the validity of mobile phone-based interviews of health care providers in Mali. Previous studies conducted in Malawi [10, 11] showed relatively high validity for implementation indicators. The accuracy of phone surveys demonstrated in this study can inform future program monitoring and evaluation plans in hard-to-reach areas.
There are a few limitations to this study. First, in program evaluations the gold standard is an in-person interview with review of available records, as used in previous studies [10, 11]. Yet records are not a perfect source of data: it is hard to determine how accurate the data in the registers are, although these registers were the best reference available for the purposes of this study. Second, the study was powered to report on overall indicators rather than district-specific comparisons, so it was not possible to stratify further, for example to the district level. Third, because of both poor mobile phone connections (mobile phone coverage of 88%) and limited in-person accessibility, the sample used in this analysis was subject to potential selection bias: we were unable to reach some ASCs by phone because of poor mobile networks, and others in person because they were located in inaccessible areas. Furthermore, ASCs were not asked to explain any discrepancies found between the two interviews, nor did the study verify the accuracy of program documents and paper-based registers. ASCs might have been less concerned about the accuracy of the data they reported in the mobile phone-based interviews, so the differences in responses might partly reflect inaccuracy in the data collected by phone. Despite these limitations, we were able to evaluate the validity of phone-based versus in-person interviews and gained knowledge about mobile phone use in low-resource, hard-to-reach settings with considerable mobile phone usage and network reliability.
Using the in-person interview as the gold standard, data collection via phone surveys was adequate for iCCM and FP indicators of general training, supervision, and routine treatment and contraceptive supply on the day of interview, but had low validity for indicators of week-long stock-outs in the past 3 months and completed registers in Mali. We recommend that local governments and organizations take advantage of mobile phone-based surveys to monitor and evaluate the implementation strength of programs, especially in areas that are difficult to reach otherwise.