Aims
The purpose of the evaluation was to enable scrutiny of programme activities conducted between 2008 and 2018 by the Child Nursing Practice Development Initiative (CNPDI) at the University of Cape Town, to inform programme review and, where possible, to draw wider lessons in the context of the needs and challenges described above. A description of the programme is provided below (see Setting). Operational commitments meant the programme had not previously conducted a formal evaluation, and it held a database containing 11 years of data yet to be explored in detail, including limited follow-up information about graduates' situations post-training.
In relation to the policy-making and decision-making context for the evaluation,24 the questions that the CNPDI programme team initially wished to address related to three main areas: demand for training; throughput of trainees; and the location and utilisation of graduates.
The aim of the study, therefore, was to generate an overview of training activity and student throughput attributable to CNPDI programmatic activities between 2008 and 2018, in order to allow consideration of the three evaluation issues identified.
Objectives were to:
1. Identify appropriate data sources to enable examination of the evaluation issues.
2. Present numerical and visual representations of these data relating to:
- training activity and throughput (student enrolment, course completions)
- employment status and practice environment of graduates.
3. Generate recommendations for further programme monitoring and evaluation and note implications for wider learning of relevance to programmes seeking to strengthen the specialist nursing workforce in Africa.
Design
The study took the form of a descriptive programme evaluation. Multiple methods were used. The majority of evaluation activities involved analysis of secondary quantitative data from two sources (see ‘Processes’). Member checking25,26, usually applied within qualitative research, was used as a complementary method because of the importance of accurately understanding the context in which the programme was operating27.
The overall design of the study and selection of methods was informed by the evaluation questions and context articulated by the CNPDI programme team in dialogue with the researcher24. A brief narrative programme description, including logic model,28 and a summary of research processes are provided in the sections that follow.
Setting
The CNPDI is a nurse-led academic programme based in the Department of Paediatrics and Child Health at the University of Cape Town, South Africa. In 2006, the Initiative was tasked with re-establishing children’s nursing training at the University. Subsequent donor funding agreements mandated the programme to provide education and capacity building intended to lead to the creation of new education programmes in targeted countries. These countries have to date included Malawi, Botswana, Namibia, Zambia, Zimbabwe, Uganda, Kenya and Ghana. Children’s nursing education programmes at the University of Cape Town are offered collaboratively under the governance of the Division of Nursing and Midwifery as the accredited School of Nursing. Educational programmes offered at the University of Cape Town include a one-year Post-Graduate Diploma in Child Nursing (PGDip-CN); a one-year Post-Graduate Diploma in Critical Care Child Nursing (PGDip-CCCN); and a two-year professional Master of Nursing in Child Nursing programme (MNCN) (commenced 2016).
The programme developed a Theory of Change shortly before this evaluation study was designed, which provided the logic model for the evaluation29. CNPDI’s stated long-term goal is a strengthened children’s nursing workforce, working to best possible effect in Africa’s health care systems. The anticipated outcome is that this will indirectly contribute to progress towards UHC and lead to improved infant and child health outcomes in Africa.
At the time the programme commenced activities, South Africa was the only country in the region providing specialist children’s nursing training. Across South Africa as a whole, in 2018 there were reported to be seven training institutions offering 11 different programmes and producing approximately 180 children’s nurses per year12. Regulatory changes involving the phasing out of “legacy qualifications” are likely to have reduced the number of training providers from 2019, but the extent of this reduction is not yet known30. During the period of study the University of Cape Town was the only provider of critical care children’s nursing education in South Africa and one of only two providers serving southern and eastern Africa. Information regarding the extent of international enrolments for children’s nursing training programmes at South African universities is not available, but it is believed that international enrolments have declined. In recent years the University of Cape Town has been the only South African university training children’s nursing students from other African countries12,31.
Participants
The study mainly involved secondary data analysis of quantitative information. Two CNPDI team members (the Programme Manager and the Research Programme Director) participated in member checking25,26 of data presentation to ensure accuracy, and commented on emerging findings.
Processes
Evaluation research processes are described following the reporting headings recommended by Clarke32 in relation to: data sources; indicator selection; data analysis and data presentation.
Data sources
Two sources of data were defined, in relation to programme information and contextual information. For programme information, the programme already maintained a database which routinely recorded information about training activity. It was determined that critical examination and analysis of this information, under approved conditions to ensure ethical research conduct and compliance with data protection requirements, would be a suitable method through which to acquire the majority of data relevant to the pursuit of the study aims and evaluation questions. In relation to contextual information, it was determined that publicly available information from documentary sources including government websites, national data sets, institutional websites and online academic placement finders would be used to enable consideration of programme activity in context, for example the type and sector of health facilities in a given country.
Indicator selection
The evaluation questions described above were used to structure selection of indicators against which data would be analysed. The programme database included information on all student enrolments over the last 11 years by age, gender, nationality, year of registration, course selection, funding source and graduation date. The WHO’s National Health Workforce Accounts (NHWA)33 minimum data set recommendations were also consulted as a reference to inform the selection of indicators. This process led to the inclusion of five additional fields. Indicators were then refined by the introduction of a threshold completion rate (>80%) to ensure that analysed data would present an accurate picture of programme throughput, which led to the exclusion of a number of fields. Table 1 shows the data fields initially identified through the programme database and the final set of indicators included after appraisal, with sources.
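The completion threshold amounts to a simple filter: a field is retained as an indicator only if more than 80% of student records contain a value for it. A minimal sketch of this rule follows; the field names and records are invented for illustration and do not reflect the programme's actual database schema.

```python
# Hypothetical illustration of the >80% field-completion threshold.
# Field names and values are invented; the real database schema differs.
records = [
    {"age": 24,   "gender": "F", "nationality": "ZA", "funding_source": None},
    {"age": 31,   "gender": "F", "nationality": "MW", "funding_source": "donor"},
    {"age": None, "gender": "M", "nationality": "KE", "funding_source": None},
    {"age": 28,   "gender": "F", "nationality": "ZM", "funding_source": None},
    {"age": 26,   "gender": "F", "nationality": "UG", "funding_source": None},
]

THRESHOLD = 0.8  # retain fields completed in more than 80% of records

def completion_rate(field, rows):
    """Fraction of records with a non-missing value for `field`."""
    return sum(r[field] is not None for r in rows) / len(rows)

retained = [f for f in records[0] if completion_rate(f, records) > THRESHOLD]
# Here `age` (80% complete) and `funding_source` (20% complete) are excluded,
# leaving `gender` and `nationality` as usable indicators.
```

Applied to the real database, the same rule determines which of the candidate fields in Table 1 survive appraisal.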
To structure the analysis of the publicly available contextual information, a common data set was constructed for each country identified in the programme’s database, depicting the tiered structure of each national health system and recording the number of facilities at each tier. This was compiled through structured searching of publicly available documentary information including government health strategies, which were available for all countries in the study.
Member checking
Member checking25,26 was used to check the congruence of analysed data with programme team members’ experiences34. Corroboration through member checking also assisted the researcher in gaining deeper understanding of the wider programme context and specific situations27,35. The rationale for using member checking aligned with Boaz et al.’s recommendations for stakeholder engagement in research, with the intention of leading to co-creation of accurate and practically useful information36. The two programme team members were given drafts of the findings on two occasions and asked to verify their accuracy, as well as to challenge, clarify or elaborate on emerging interpretations of the data. They were also provided with a draft of the near-final report and asked to comment on and verify the findings and interpretations. Member checking took the form of written comments and telephone conversations, after which amendments were made to the draft.
Data analysis
Database reports comprising anonymised structured quantitative programme data as specified in Table 1 were generated by CNPDI programme staff and made available to the researcher. Data analysed covered all student enrolments between January 2008 and December 2018 including age, gender, nationality, year of registration, educational programme, funding and graduation date. Analysis of data was conducted using Excel, with geographical locations of facilities plotted using Google Maps.
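The core throughput analysis is, in essence, a tabulation of enrolment records by year and educational programme. The evaluation performed this in Excel; an equivalent tabulation could be sketched as follows, using illustrative records only (the actual data were the programme's anonymised database export).

```python
from collections import Counter

# Illustrative records only; the evaluation used the programme's own
# anonymised database export and performed this tabulation in Excel.
enrolments = [
    {"year": 2016, "programme": "PGDip-CN"},
    {"year": 2016, "programme": "PGDip-CCCN"},
    {"year": 2017, "programme": "PGDip-CN"},
    {"year": 2017, "programme": "PGDip-CN"},
    {"year": 2017, "programme": "MNCN"},
]

# Enrolments per (year, programme) pair -- the basic throughput count.
throughput = Counter((e["year"], e["programme"]) for e in enrolments)
```

The same grouped counts underpin the numerical reports and visualisations described in the next section.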
Data presentation
Data were presented in the form of numerical reports for all indicators. Data visualisations were produced for each of the three evaluation questions, showing geographic and demographic data. A comprehensive report was produced and stored in an online repository with a persistent link37.
Reliability and validity
Reliability and validity were supported by careful selection of indicators and the imposition of an 80% field-completion threshold. Member checking and corroboration of data with programme staff supported greater accuracy of data presentation and interpretation25,26,34. Availability of data from an eleven-year period supported evaluation of programme outputs and outcomes beyond the immediate term17,18.