Setting and Data Source
The data for this study were collected by the Measurement, Learning & Evaluation (MLE) project for the impact evaluation of the Nigerian Urban Reproductive Health Initiative (NURHI). NURHI, funded by the Bill & Melinda Gates Foundation, aimed to increase modern family planning use among the urban poor through a multi-pronged approach that included improving the quality of family planning services in high-volume urban health facilities. The intervention supported contraceptive supply chains and logistics, training for family planning counseling and provision, and facility-level management systems. This study utilizes health facility (N = 400; 58% hospitals, 42% primary healthcare centers) and healthcare provider (N = 1,479) baseline survey data collected in 2011 from Abuja (Nigeria’s capital), Benin City, Ibadan, Ilorin, Kaduna and Zaria (34). NURHI selected these cities because they span both the northern and southern regions of the country and each has a population of approximately one million or more. The northern and southern regions of Nigeria differ in their cultural, economic, and religious characteristics; the north is poorer and predominantly Muslim, while the south is more affluent and predominantly Christian.
Two categories of healthcare facilities are included in the sample: high-volume facilities (HVF) (n = 112) and preferred-provider facilities (PPF) (n = 288). HVF and PPF can be either public or private, and they can be either primary or secondary facilities. The sample includes all public facilities in the study cities. HVF, generally the top service delivery sites by client load, offered both antenatal care and immunization services; these facilities served more than 1,000 antenatal clients per year. The NURHI program supported all HVF, and all of these facilities are included in the sample (34). PPF were identified from a baseline household survey conducted by MLE that contained a representative sample of 16,144 women aged 15-49. Women were asked the name of each facility where they went for family planning, maternal health, and child health services. Using this information, MLE created a list of facilities that women reported by study cluster (primary sampling unit). The most commonly mentioned facility in each primary sampling unit was categorized as a PPF. If that facility was already included in the sample as a public facility or an HVF, then the second most commonly mentioned facility was included. If the second most commonly mentioned facility was also already in the sample, no additional facility was included. Including the PPFs along with the public facilities and HVFs ensures that the sample includes facilities that women in these urban areas actually visit.
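The PPF selection rule described above can be sketched as follows. This is a minimal illustration, not the MLE procedure itself: the facility names are hypothetical, `mentions` stands for the facilities women in one primary sampling unit reported, and `already_sampled` stands for the public facilities and HVF already in the sample.

```python
from collections import Counter

def select_ppf(mentions, already_sampled):
    """For one primary sampling unit, pick the most commonly mentioned
    facility; fall back to the second most mentioned if the first is
    already sampled; include no additional facility if both are."""
    ranked = [name for name, _ in Counter(mentions).most_common()]
    for candidate in ranked[:2]:          # only the top two are considered
        if candidate not in already_sampled:
            return candidate
    return None                           # no additional facility included

# Hypothetical example: "Clinic A" is already in the sample as an HVF,
# so the second most-mentioned facility, "Clinic B", becomes the PPF.
psu_mentions = ["Clinic A", "Clinic A", "Clinic B", "Clinic B", "Clinic C"]
ppf = select_ppf(psu_mentions, already_sampled={"Clinic A"})
```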
This study utilizes instruments developed for the NURHI impact evaluation, which draws upon validated tools selected from the Quick Investigation of Quality (35). Facility and provider surveys were conducted in each facility by trained interviewers hired by Data Research and Mapping Corporation; the MLE project provided technical assistance for training of interviewers. The surveys collected information on: the readiness of facilities and providers to offer integrated services; usual or ‘normal’ family planning service provision practices in specific circumstances (e.g., usual practice within the facility when a woman has come for a child health service visit and is interested in receiving a hormonal method of contraception); gaps in commodities, equipment, training and resources; the extent of family planning integration into maternal, newborn and child health services; and other health facility characteristics. One facility audit was conducted per facility by asking questions of a manager or another administrator. In larger facilities, four providers were selected through simple random sampling to complete the provider survey; in the event that a provider declined, another provider was randomly selected until four eligible providers consented to interview. In facilities with four or fewer providers, all were approached for interview. Providers eligible for inclusion were medically qualified to provide clinical services and assigned to provide direct family planning and/or maternal, newborn and child health services to clients at that facility; their responses are analyzed in this study.
This study employs Principal Component Analysis (PCA) to create two family planning and child immunization services integration indexes: a Provider Integration Index and a Facility Integration Index. All analyses were conducted using Stata version 13.1 (StataCorp LP, College Station, Texas).
Constructing and Interpreting the Indexes
Selection of Variables for Inclusion in the PCA
Drawing on Mayhew (2016), we posit that numerous characteristics and processes interact within a health facility to result in varying degrees of integrated service delivery (30). While the Nigerian Ministry of Health does not provide a specific definition of integrated family planning and immunization services, its 2008 National Guidelines for the Integration of Reproductive Health and HIV Programmes offer this explanation:
Integration in the health sector has been defined by offering two or more services at the same facility during the same operating hour, with the provider of one service actively encouraging clients to consider using the other services during the same visit, in order to make those services more convenient and efficient. Integrated services should be offered at the same point but where that is not possible, strong referral systems are required to ensure that clients receive high quality service (36).
NURHI’s Strategy for Integrating Family Planning into Maternal, Newborn, Child Health and HIV/AIDS Services references this guidance (37). This study also refers to this guidance to inform the attributes measured in the indexes. Additionally, we reviewed the integration literature to identify facility-level attributes that support service integration. Several critical attributes emerged, including a) facility norms that support concurrent service provision (e.g., operational management standards and procedures that support the availability of both child immunization and family planning services at the same consultation or on the same day), and b) provider capacity to offer multiple services (e.g., provider(s) has the skills and willingness to offer family planning information or services during a child immunization visit) (26, 30, 38-41).
Because this study was conceptualized after data collection, we leveraged the available data and selected eight indicators for inclusion in the indexes (Table 1). Table 2 describes these and other facility characteristics. A few of these indicators warrant additional explanation. The indicators used to develop the integration indexes focus primarily on family planning information and services that are provided during child immunization visits. The indexes are thus most appropriate for use within that context, though they also measure a range of referral scenarios. Improving outcomes through integration relies upon both high coverage and quality of integrated services. A substantial body of research links higher quality family planning services with increased contraceptive adoption, prevalence, and continuation; poor family planning service quality can hinder use (42). Therefore, the level of quality provided and the absence of barriers that limit coverage and quality are essential indicators of effective integration (40, 43). We analyzed the quality of integrated family planning services by measuring the range and breadth of family planning topics that providers discuss with a client during child health service visits. Because the extent of integration can be influenced by provider bias (44-46), we include social norm-based service barriers by measuring the extent to which providers at a facility require partner consent prior to provision of a family planning method during an integrated visit. In Nigeria, partner consent is a tenacious barrier to contraceptive use that may be mitigated by training providers that standard service provision guidelines do not include a requirement for partner consent, providing supportive supervision on guideline implementation, and utilizing more comprehensive behavior change approaches with providers (47).
While numerous such barriers exist and could have been employed in the indexes, this is the only variable in our dataset that captures such barriers to family planning specifically during immunization visits.
Several variables refer to child immunization, child growth monitoring, or child health service visits. Child health services visits include either immunization or growth monitoring visits, but not sick child visits. In variables referring to child health services, it was not possible to differentiate data pertaining only to child immunizations from data pertaining only to child growth monitoring. However, child immunization visits comprise the vast majority of all child health services visits. In the concurrent health facility client exit interview, 1,714 clients attended the facility for a child health service visit. Of these, only 90 (5%) reported that child growth monitoring was the primary purpose of their visit, while the remaining 95% reported immunization as the primary purpose of their visit. Facility-level variables are based on a summary of provider responses. Means were imputed for missing data.
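Mean imputation as used here can be illustrated with a minimal sketch (the data are hypothetical; each missing value is replaced by the mean of the non-missing observations for that variable):

```python
def impute_means(rows):
    """Replace missing values (None) in each column with that column's
    mean over the non-missing observations."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) / sum(v is not None for v in c)
             for c in cols]
    return [[m if v is None else v for v, m in zip(row, means)] for row in rows]

# Hypothetical data: three facilities, two variables; one value missing.
data = [[1.0, 4.0], [None, 2.0], [3.0, 0.0]]
filled = impute_means(data)   # missing value replaced by (1.0 + 3.0) / 2 = 2.0
```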
PCA was applied following the selection and transformation of variables. Input variables were standardized to a mean of zero and a standard deviation of one prior to the analysis to prevent variables with greater variance from dominating each component. The Kaiser-Meyer-Olkin (KMO) test of sampling adequacy was used to ascertain the suitability of the data for use in a PCA. Our KMO test yielded a score of 0.8, indicating sampling adequacy for each variable and the complete model. Based on evaluation of the eigenvalues (Table 3) and the scree plot (Figure 1), we retained two components. The factor loading scores (see factor loadings column in Table 4), which show the correlation coefficient between each variable and component, were examined to determine which dimensions of integration are represented by the components. The scores confirmed the anticipated dimensions: provider integration and facility integration.
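The standardization and component-retention steps can be illustrated on a toy two-variable case (all data here are hypothetical). For two standardized variables the correlation matrix is [[1, r], [r, 1]], whose eigenvalues are 1 + r and 1 - r, and the Kaiser criterion retains components whose eigenvalue exceeds one:

```python
from statistics import mean, pstdev

def standardize(values):
    """Scale to mean 0 and standard deviation 1 so that no variable
    dominates the components through its variance alone."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

def correlation(x, z):
    """Pearson correlation of two already-standardized variables."""
    return sum(a * b for a, b in zip(x, z)) / len(x)

# Hypothetical indicator data for five facilities.
x = standardize([1, 2, 3, 4, 5])
z = standardize([2, 1, 4, 3, 5])
r = correlation(x, z)

# For a 2x2 correlation matrix the eigenvalues are 1 + r and 1 - r;
# the Kaiser rule keeps components whose eigenvalue exceeds 1.
eigenvalues = sorted([1 + r, 1 - r], reverse=True)
retained = [e for e in eigenvalues if e > 1]
```

With more than two variables the same logic applies, but the eigenvalues come from numerically decomposing the full correlation matrix, as statistical packages such as Stata do internally.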
Creating the Indexes
We constructed the Provider Integration Index and Facility Integration Index using weights calculated for each variable by dividing its factor loading by the sum of the factor loadings of all variables in that component (see weights column in Table 4). Next, we multiplied the variables included in each component by their associated weights and summed the values. Finally, we calculated the Provider Integration Index score and the Facility Integration Index score for each facility by multiplying these values by ten. The indexes thus range in value from zero to ten, with a higher score indicating a higher level of integration. Each facility was classified as having “low integration” (index score 0-3.29), “medium integration” (3.30-6.59) or “high integration” (6.60-10.00). These classifications were determined by dividing raw scores equally into tertiles along the score continuum of zero to ten. A sensitivity analysis was conducted to identify the effects of excluding from the sample those facilities that do not offer child immunization (n = 61); there were no statistically significant differences between the indexes that include all facilities and those based on the restricted set of facilities. We retained these facilities in the sample because one goal of the paper is to assess integration across the range of facilities and circumstances represented by our sample. Excluding these facilities would prevent us from knowing the full extent of integration across our sample. Additionally, one key benefit of developing these indexes is the ability to apply them to understand the effects of integration on health and service delivery outcomes. Having a score for facilities that do not offer child immunization allows future research to better identify correlations between level of integration (even a very low level) and other outcomes.
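The weighting and classification steps above can be sketched as follows. The loadings and facility values are hypothetical; as in the text, each weight is the variable's loading divided by the sum of loadings in that component, the weighted sum is scaled by ten, and scores are cut at 3.30 and 6.60:

```python
def index_score(values, loadings):
    """Weight each variable by loading / sum(loadings), sum, scale by 10."""
    total = sum(loadings)
    weights = [l / total for l in loadings]
    return 10 * sum(v * w for v, w in zip(values, weights))

def classify(score):
    """Equal tertiles of the 0-10 score continuum."""
    if score < 3.30:
        return "low integration"
    if score < 6.60:
        return "medium integration"
    return "high integration"

# Hypothetical example: four variables scaled 0-1, with their loadings.
loadings = [0.8, 0.7, 0.6, 0.5]
facility = [1.0, 0.5, 0.0, 1.0]    # one facility's variable values
score = index_score(facility, loadings)
label = classify(score)
```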
Index Coherence and Robustness
Following Filmer and Pritchett (2001), we assessed the internal coherence and robustness of the indexes (48). We examined internal coherence by comparing facility characteristics and index scores across the low (0-3.29), medium (3.30-6.59), and high (6.60-10.00) integration groups. We assessed robustness by examining how the classification of facilities with high Integration Index scores changed when different subsets of variables were entered into the PCA. To assess the robustness of the Provider Integration Index, we ran six variations of the PCA: the first (the “base case”) included all variables, and each subsequent model omitted one of the provider integration variables. Similarly, to assess the robustness of the Facility Integration Index, we ran four variations of the PCA: the first (the “base case”) included all variables, and each subsequent model omitted one of the facility integration variables. Looking only at the sites classified as “high integration” in the base case, we examined the impact on classification of omitting one variable at a time from the PCA. As an additional robustness check, we also created the indexes using Exploratory Factor Analysis to ascertain whether these indexes correlate with those created using PCA.
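Assuming the loading-proportional weighting described above, the leave-one-variable-out robustness check can be sketched as follows (the facilities, values, and loadings are hypothetical; in the actual analysis each variation re-runs the PCA, whereas this sketch simply renormalizes the remaining weights):

```python
def index_score(values, loadings):
    """Loading-proportional weighted sum of a facility's variables, scaled 0-10."""
    total = sum(loadings)
    return 10 * sum(v * l / total for v, l in zip(values, loadings))

def high_set(facilities, loadings, cutoff=6.60):
    """Names of facilities classified as 'high integration'."""
    return {name for name, vals in facilities.items()
            if index_score(vals, loadings) >= cutoff}

def drop_variable(facilities, loadings, i):
    """Omit variable i and renormalize, as in each robustness variation."""
    sub = {n: v[:i] + v[i + 1:] for n, v in facilities.items()}
    return sub, loadings[:i] + loadings[i + 1:]

# Hypothetical data: three facilities, three variables scaled 0-1.
facilities = {"F1": [1.0, 1.0, 0.9], "F2": [0.2, 0.3, 0.1], "F3": [0.9, 0.4, 0.8]}
loadings = [0.8, 0.7, 0.6]
base_high = high_set(facilities, loadings)
for i in range(len(loadings)):
    sub, sub_load = drop_variable(facilities, loadings, i)
    overlap = base_high & high_set(sub, sub_load)   # stability of the 'high' set
```

Facilities that remain in the “high integration” set across all variations indicate a classification that is robust to the choice of input variables.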