The North West province is one of the nine South African (SA) provinces and consists of four districts: Dr Kenneth Kaunda, Ngaka Modiri Molema, Dr Ruth Segomotsi Mompati, and Bojanala Platinum District Municipalities. The province is among the worst performers with regard to maternal health outcomes and health system indicators (27). However, it was also one of the pioneers of primary health care quality improvement and health system strengthening initiatives, such as the Ward Based Outreach and primary-level Ideal Clinic realization programs (28,29). The province was therefore expected to have a broader set of available indicators across the continuum of care than others.
We used a step-wise approach to develop and test the proposed index, based on current methodological guidelines (30,31). The main steps include: i) defining a theoretical/conceptual framework, ii) selection of variables/indicators, iii) imputation of missing data, iv) multivariate analysis, v) scaling of indicators, vi) weighting and aggregation, vii) checking for robustness, and viii) validation (30).
Defining a theoretical/conceptual framework
A critical interpretive synthesis of current measurement and monitoring approaches in LMICs found a gap in the multi-dimensionality of the indicator sets currently used to assess the continuum of care (COC) for maternal health (16). The adequacy construct was therefore defined, which outlines four important dimensions to consider: 1) access to and utilization of care; 2) quality of care; 3) linkages between levels and packages of care; and 4) social determinants of health (16) (Fig 2). The adequacy construct complements the COC framework by adding elements of quality of, and linkages to, care, and by proposing that all four dimensions be monitored, not just access and/or utilization. Indicators of service delivery across all dimensions should therefore be sought from local data sources, with consideration for their relevance, feasibility and validity (21,22).
Selection of variables/indicators
A previous study assessed the suitability and measurement gaps of potential indicators for service delivery across the broad continuum in SA (21). These indicators were extracted for the North West province and its districts for the period 2013-2017. Health service indicators were sourced from the National Indicator Data Set (NIDS) of the District Health Information System (DHIS). The DHIS is used to report and monitor facility-level data for health services to support policy and planning (32). The DHIS provided indicators for the access and utilization, linkages, and quality of care dimensions of the continuum of care.
Social determinant indicators were sourced from the annual General Household Survey (GHS) for 2013-2017 (33), Census 2011 (34) and Community Survey (CS) 2016 (35). The census and CS enabled assessment at district level, even though they offer fewer social determinants of health indicators than the GHS. The census and CS provided indicators of literacy, housing, access to electricity, water, and sanitation at the district level. The CS also assesses dietary behaviour and empowerment, but these indicators were not included in the final analysis of performance, to allow district-level comparison with the census indicator set. A description of all indicators used in this study is provided in S1 Table. Indicator data were extracted, cleaned, and analysed in MS Excel 2010, R v3.6.1 and STATA 14.0.
Imputation of missing data
Health service indicator data may be missing due to lack of services and under-performing systems for data collection and reporting. These systematic issues were considered to affect the availability of indicator data completely at random. As such, we conducted single value imputation using the indicator value observed in the adjacent year (36). In the Results section we discuss the impact of the remaining data gaps on the index findings. Single value imputation was also applied to indicators from the community survey and census to allow calculation of the index at district level.
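As an illustration, single value imputation from an adjacent year can be sketched as follows. This is a minimal Python sketch, not the authors' implementation (their analysis used MS Excel, R and STATA); the function name, the data layout, and the preference for the previous year over the next are our assumptions.

```python
def impute_adjacent_year(series):
    """Fill missing indicator values (None) with the value observed in an
    adjacent year: previous year first, otherwise the following year.

    `series` maps year -> indicator value (None where missing)."""
    years = sorted(series)
    imputed = dict(series)
    for i, year in enumerate(years):
        if imputed[year] is None:
            if i > 0 and imputed[years[i - 1]] is not None:
                imputed[year] = imputed[years[i - 1]]  # carry forward
            elif i + 1 < len(years) and series[years[i + 1]] is not None:
                imputed[year] = series[years[i + 1]]   # carry backward
    return imputed

# Hypothetical indicator with a gap in 2015: takes the 2014 value
coverage = {2013: 68.0, 2014: 71.5, 2015: None, 2016: 74.0, 2017: 76.2}
print(impute_adjacent_year(coverage))
```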
Multivariate analysis
We used exploratory factor analysis to assess the dimensionality of the data, in order to compare the statistical and conceptual groupings of indicators (30). We assessed whether the data fitted the four dimensions of the continuum of care proposed by the adequacy construct. The output of the exploratory factor analysis is assessed in the Results section.
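The dimensionality check can be illustrated with an eigenvalue decomposition of the indicator correlation matrix, here using the Kaiser criterion to count retained dimensions. This is a Python/NumPy sketch of the general idea only, on simulated data; the authors' exploratory factor analysis was run in R/STATA on the actual indicator set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated observations for six hypothetical indicators: two latent
# dimensions, with three indicators loading on each.
n = 200
f1, f2 = rng.normal(size=(2, n))
noise = lambda: 0.3 * rng.normal(size=n)
data = np.column_stack([
    f1 + noise(), f1 + noise(), f1 + noise(),  # e.g. "access" indicators
    f2 + noise(), f2 + noise(), f2 + noise(),  # e.g. "quality" indicators
])

corr = np.corrcoef(data, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # descending order

# Kaiser criterion: retain components with eigenvalue > 1
n_factors = int((eigenvalues > 1).sum())
print(eigenvalues.round(2), "->", n_factors, "dimensions retained")
```

With this simulated structure the decomposition recovers the two latent dimensions, mirroring how the analysis checks whether indicators group into the four conceptual dimensions.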
Scaling of indicators
We conducted a linear transformation of indicator values onto a scale between 0 and 100 (37) (Equation 1).
The indicator score is calculated on a scale of 0-100; the ideal score is the maximum attainable score, which is 100; the target is the ideal performance of the indicator; and performance is the observed value of the indicator during a given time period. Where targets consisted of a range of values, we calculated the median score to represent indicator performance. Targets were based on national policy documents and global technical or scientific guidelines (30,31,37), and were set to the conservative maximum (100%) where guidelines were unavailable. The difference between the target and observed performance is multiplied by 100 because indicators are originally measured as percentages/proportions. Using targets for performance improves the meaningfulness of the index and its role in policy discourse (38).
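The scaling described above can be sketched as follows. This is an illustrative Python reading of Equation 1 from its prose description, assuming performance and target are expressed as proportions; the capping of scores to the 0-100 range is our assumption and may differ from the authors' exact formulation.

```python
def indicator_score(performance, target, ideal=100.0):
    """Scale an indicator onto 0-100, as described for Equation 1.

    `performance` and `target` are proportions in [0, 1]; the shortfall
    from the target is multiplied by 100 (indicators are measured as
    percentages/proportions) and subtracted from the ideal score."""
    score = ideal - (target - performance) * 100
    return max(0.0, min(ideal, score))  # cap to [0, 100] (our assumption)

# A 20-percentage-point shortfall against a 90% target
print(indicator_score(performance=0.70, target=0.90))
# Performance above target is capped at the ideal score
print(indicator_score(performance=0.95, target=0.90))
```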
Weighting and aggregation
The comprehensive continuum of care for maternal health index (C3MH index) was computed as a geometric mean of equally weighted sub-indices reflecting the four adequacy dimensions. We chose equal weighting since this was judged the most reasonable approach, and evidence on the relative importance of each sub-index is lacking (30,31,39).
Simple indices based on arithmetic and geometric means can be robust and give valuable information about public health or health system performance (24,25,39-41). Unlike the arithmetic mean, the geometric mean allows for a degree of non-compensation of the performance of one indicator by another (30,31). Each sub-index (e.g. access to care) was also formulated as a geometric mean of its indicator scores.
Sub-index = (a × b × c × … )^(1/n)
where a, b, c, … are individual indicators and n = number of indicators within the sub-index.
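Sub-index and index aggregation by geometric mean can be sketched as follows. This is a Python illustration; the dimension labels and scores are hypothetical.

```python
import math

def geometric_mean(scores):
    """Geometric mean of indicator scores: (a * b * c * ...)^(1/n)."""
    n = len(scores)
    return math.prod(scores) ** (1 / n)

# Hypothetical sub-index scores for the four adequacy dimensions
sub_indices = {"access": 80.0, "quality": 60.0, "linkages": 75.0, "social": 90.0}
c3mh = geometric_mean(list(sub_indices.values()))
print(round(c3mh, 1))

# Unlike the arithmetic mean, a weak score is not fully compensated
# by a strong one: for [100, 25] the geometric mean is 50,
# while the arithmetic mean would be 62.5.
print(geometric_mean([100.0, 25.0]))
```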
Validity and robustness
We ran sensitivity analyses comparing index performance with different indicator combinations and normalization methods (42). We tested whether z-score standardization leads to a shift in district ranks (30). Index performance was also compared after removal of indicators that were considered outliers (performance close to 100%), had missing data, or could be represented by a proxy (e.g. syphilis treatment measured with one indicator instead of the three across the treatment cascade; see S1 Table, Indicators 4-6). Index aggregation by arithmetic and geometric mean was also compared. We assessed the median absolute difference in district ranks, and its inter-quartile range, when testing alternative approaches to index formulation (42). External validation of the index was conducted by exploring its relationship with indicators of public health performance and maternal health outcomes, particularly the Human Development Index (HDI) and maternal mortality rates (37,43,44). Confidence intervals for correlations were calculated using bootstrapping methods in R v3.6.1.
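The rank-shift comparison between alternative index formulations can be illustrated as follows. This is a Python sketch; the district scores are hypothetical, and the authors' sensitivity analyses were run in R/STATA.

```python
import statistics

def ranks(scores):
    """Rank districts from best (1) to worst by index score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {district: rank for rank, district in enumerate(ordered, start=1)}

# Hypothetical index scores under two alternative formulations
baseline = {"Bojanala": 72.0, "NM Molema": 65.0, "RS Mompati": 58.0, "K Kaunda": 70.0}
alternative = {"Bojanala": 70.0, "NM Molema": 68.0, "RS Mompati": 55.0, "K Kaunda": 74.0}

r1, r2 = ranks(baseline), ranks(alternative)
shifts = [abs(r1[d] - r2[d]) for d in baseline]
print("median absolute rank difference:", statistics.median(shifts))
```

A small median absolute rank difference across alternative formulations indicates that the index is robust to those methodological choices.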