Research Design
This cross-sectional study used a mixed-methods approach, combining a modified Delphi process to reach consensus on a list of ‘observable clinical activities’ for the assessment instrument with subsequent validation of the instrument for use in the setting.
Setting
The study was conducted at the seven major hospitals that train interns in Durban and Pietermaritzburg in the KZN province of SA. These hospitals serve a catchment population of approximately 6 million, including communities with the highest HIV disease burden globally, together with high rates of tuberculosis (TB) and common childhood diseases (3,18,19).
Subjects
Paediatricians, intern clinician supervisors and interns working within the seven major hospitals were sampled to develop a locally validated assessment instrument for the study. Specific criteria were used to ensure participants were experienced in both paediatric clinical care and intern supervision within this high disease burden context.
Procedure
Ethical approval for the study was obtained from the Biomedical Research Ethics Committee of the Higher Education institution and permission was obtained from the various institutions and the Health Research & Knowledge Management Subcomponent of the KwaZulu-Natal Department of Health (KZNDoH). Written informed consent was obtained from all participants before each step of the study. Table 1 describes the three major steps in the methods used, the sample population and the process followed at each step.
Table 1: Steps in the development of the Paediatric Internship assessment instrument

| Step | 1: Modified Delphi consultation | 2: Content expert focus group | 3: Survey |
| Sample size (n) | 15 | 12 | 411 |
| Sample population | Senior intern supervisors | Heads of paediatric units in SA served as content experts | Interns at various stages of their paediatric internship |
| Process followed in each step | Three-round iterative process to develop a set of detailed observable clinical activities | Content validation of the final set of observable clinical activities | Psychometric analyses of the final instrument using intern self-assessment scores of competency levels |
Process followed and statistical analyses in each step
Step 1: Modified Delphi process to create the list of items for the assessment instrument
In step 1, a modified Delphi approach was used, with multistep iterations between the authors and a panel of experienced intern supervisors. The modified Delphi methodology allowed for a process to establish and confirm a locally determined consensual list of observable clinical activities required for independent clinical practice in paediatrics (20-22).
Sixteen senior paediatric intern supervisors, all from the sampled hospitals in Durban and Pietermaritzburg, were invited to participate in the modified Delphi consultation. All sampled expert participants were actively involved in paediatric intern training and had more than ten years of clinical experience in the SA public health system.
Prior to obtaining informed consent, clarity was reached between the main investigator and participants regarding the principles of the modified Delphi process being used, specifically the anonymised consultation and the steps required to ensure consensus building (20). An initial list of suggested ‘observable clinical activities’ that had previously been validated with junior medical practitioners was used in the first round of the modified Delphi consultation to create a template for building the new instrument (23). (Appendix A lists the 41 items from the original Hill et al., 1998 questionnaire.)
Figure 1 provides the details of the steps in each round of the modified Delphi process.
Step 2: Content validation process
A focus group consultation was held in step 2 with 12 content experts who had been invited to participate in this study. Content experts were invited on the basis of being a head of a paediatric department in a KZN hospital (including the sampled hospitals), or a district or provincial head of paediatrics. All content experts had more than ten years of clinical experience in paediatrics in SA and were additionally involved with programmes to improve paediatric care at an international, national or provincial level.
After a formal introduction to the study and once informed consent was obtained, all content experts in the focus group were provided with the final list of ‘observable clinical activities’ as determined by the modified Delphi process. The participants in the focus group were tasked with evaluating the relevance of each ‘observable clinical activity’. A four-point Likert-scale rubric was developed and used to score each item against the overall construct of paediatric intern competency in SA (24).
A content validity index (CVI) was computed for each item using this four-point Likert rating scale (not relevant = 1; somewhat relevant = 2; relevant = 3; very relevant = 4), with ratings dichotomised as relevant (3 or 4) or not relevant (1 or 2). Items were accepted for inclusion in the assessment instrument based on a CVI of >0.9 (25). The main investigators then collated the final list of items into an assessment instrument for use in the survey amongst interns in step 3.
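The item-level CVI calculation described above can be sketched as follows. This is an illustrative example, not the study's actual analysis code; the panel ratings shown are hypothetical.

```python
# Sketch of the item-level content validity index (I-CVI).
# Each expert rates an item 1-4; ratings of 3 ("relevant") or
# 4 ("very relevant") are dichotomised as relevant.
def item_cvi(ratings):
    """I-CVI = proportion of experts rating the item 3 or 4."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# Hypothetical ratings for one item from a 12-expert panel
ratings = [4, 4, 3, 4, 3, 4, 4, 3, 4, 4, 4, 3]
cvi = item_cvi(ratings)
keep = cvi > 0.9  # retention threshold used in the study
```

With all 12 hypothetical experts rating the item as relevant or very relevant, the I-CVI is 1.0 and the item would be retained.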
Step 3: Survey
In step 3, interns representing all seven regional intern training hospitals were invited to participate in a survey using the instrument developed in step 2. Using a five-point Likert scale, each intern was asked to self-assess their level of competency. The five-point Likert scale was based on the five-stage entrustment scale developed by ten Cate et al. (13). After obtaining informed consent, the survey was group administered to interns at all sampled hospitals. Demographic data on gender, age, university of origin and year of internship were also collected for each participant.
Descriptive statistics were calculated for the overall score on each item. For each ‘clinical item’, means were computed from the available data; missing data did not exceed 20% of items. Continuous variables were summarised using means with standard deviations, and medians.
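The per-item summary over available responses can be sketched as below. This is a hedged illustration (the scores shown are hypothetical, and the study's own analysis was done in SAS): the mean is taken over non-missing responses, and an item exceeding the 20% missingness bound is flagged.

```python
# Sketch: per-item mean over available responses, with a 20%
# missing-data threshold (None represents a missing response).
def item_summary(scores, max_missing=0.20):
    """Mean of non-missing scores; None if missingness exceeds threshold."""
    present = [s for s in scores if s is not None]
    missing_frac = 1 - len(present) / len(scores)
    if missing_frac > max_missing:
        return None  # too much missing data to summarise
    return sum(present) / len(present)

# Hypothetical responses for one item (one value missing)
mean_score = item_summary([4, 5, None, 3, 4])
```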
Factor analysis
The newly developed assessment instrument was validated for use in our local context by studying its psychometric characteristics and internal consistency. For factor analysis, the sample had to exceed 100 to be representative of the general intern population in the province; the achieved sample size for this step was 415. This equated to 5.5 observations per item for the 75-item scale, within the recommended observation-to-item ratio of 5 to 10.
We investigated the internal structure, in particular the construct validity, by applying factor analysis with varimax (orthogonal) rotation to determine the underlying dimensions of the data. The Kaiser-Guttman eigenvalue criterion (>1), the Cattell criterion of accepting factors above the point of inflexion on the scree plot, and the proportion of total variance explained (60%) were used to determine the number of underlying factors. Factor loadings of >0.4 were interpreted. Cronbach’s alpha coefficient was used to assess reliability and internal consistency. Data analysis was carried out using SAS version 9.4 for Windows (26).
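Two of the criteria above can be illustrated numerically. The sketch below is not the study's SAS code: it uses simulated Likert data (a hypothetical 415 x 5 item matrix) to show the Kaiser-Guttman eigenvalue criterion applied to the item correlation matrix, and the standard formula for Cronbach's alpha.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 415 x 5 matrix of 1-5 self-assessment scores:
# items share a common "base" score, so they are inter-correlated.
base = rng.integers(1, 6, size=(415, 1))
items = np.clip(base + rng.integers(-1, 2, size=(415, 5)), 1, 5).astype(float)

# Kaiser-Guttman criterion: retain factors whose eigenvalue of the
# item correlation matrix exceeds 1.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)
n_factors = int((eigvals > 1).sum())

# Cronbach's alpha for internal consistency:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)
```

In a full analysis the retained factors would then be varimax-rotated and loadings above 0.4 interpreted, as described above.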