This Training Needs Assessment (TNA) adopted a cross-sectional study design. The participating study sites were a set of public (government), private not-for-profit, and private for-profit health institutions in various regions of Uganda with a known capacity for providing oncology services and care. Using these institutional inclusion criteria, 22 health facilities from various parts of the country, summarised in Table 1, were purposively included in the survey. The selected institutions are part of the referral network of health service delivery points for oncology under the East African Centre of Excellence for Oncology, which is being developed with support from the East African Community.
Table 1: Description of the study sites

| Type of study site | Number |
| --- | --- |
| Oncology treatment centre | 2 |
| Public national referral hospital | 2 |
| Public regional hospitals | 14 |
| Private hospitals | 4 |
| Total | 22 |
A consecutive sampling strategy was used to recruit participants until the required sample size was attained. Eligible participants were health care providers at each of the selected study sites who: (i) were involved in the direct care of cancer patients; (ii) were present on site at the time of the visit; and (iii) provided written informed consent to participate in the survey. The target sample size was obtained using the sample size calculator for proportions available at www.openepi.com [27], under the following assumptions: α = 0.05, power of 80%, a design effect of 1.2 and a 5% margin of error; because the proportion of health workers aware of their training needs was unknown, a hypothesised proportion of 50% was used. This gave a sample size of 187 health care providers, to which an additional 13 health workers were added as an allowance for losses and errors, bringing the final target sample size to 200 health care providers.
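For illustration, the standard formula behind such calculators for a single proportion, with a design effect and an optional finite-population correction, can be sketched as below. The function name and defaults are assumptions for this sketch; the study's reported figure of 187 will additionally depend on the exact settings (for example, a finite source population) entered into the OpenEpi calculator, so the unadjusted result here differs from it.

```python
import math

def sample_size_proportion(p=0.5, e=0.05, z=1.96, deff=1.0, population=None):
    """Minimal sketch of a sample-size calculation for a proportion.

    p: hypothesised proportion; e: margin of error;
    z: critical value for the chosen alpha (1.96 for alpha = 0.05);
    deff: design effect; population: optional finite population size.
    """
    n = deff * z**2 * p * (1 - p) / e**2
    if population is not None:
        # Finite-population correction reduces the required sample.
        n = n * population / (n + population - 1)
    return math.ceil(n)

# With the assumptions reported in the text (p = 0.50, 5% error, deff = 1.2),
# and no finite-population correction:
n_required = sample_size_proportion(p=0.5, e=0.05, deff=1.2)  # 461
```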
The validated Hennessy-Hicks TNA survey questionnaire, which is licensed to the World Health Organisation (WHO) for online use as a toolkit for researchers [20], was used to collect training needs data for oncology services [19]. The questionnaire has been used to determine the training needs of several categories of health care professionals in low-, middle- and high-income countries [28-32]. The survey questions were developed in line with the guidance set out in the online questionnaire manual [19]. The questionnaire comprises a list of 30 tasks categorised under the following domains: research, communication/teamwork, clinical tasks, administration and management. Each task is rated along a seven-point scale with respect to its importance to the respondent's job (Rating A) and how well the task is currently performed (Rating B). Comparing Rating A (self-assessed importance) with Rating B (current performance) provides an indication of the gap, or training need: the greater the difference between the two ratings, the greater the training need for that particular task.
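The Rating A versus Rating B comparison can be sketched as follows; the task names and ratings are invented for illustration, and the use of medians mirrors the group-level comparisons described later in this section.

```python
from statistics import median

# Invented example ratings on the seven-point scale for two tasks:
# each entry maps a task to (Rating A values, Rating B values) across respondents.
tasks = {
    "Undertaking a literature search": ([6, 7, 6, 5], [3, 4, 2, 3]),
    "Communicating with patients":     ([7, 7, 6, 7], [6, 7, 6, 6]),
}

def training_gap(rating_a, rating_b):
    # Gap = median importance minus median current performance;
    # a larger positive gap indicates a larger training need.
    return median(rating_a) - median(rating_b)

gaps = {task: training_gap(a, b) for task, (a, b) in tasks.items()}
# Rank tasks from largest to smallest training need.
ranked = sorted(gaps, key=gaps.get, reverse=True)
```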
The questionnaire design allows for the removal of up to 25% of the original tasks (a maximum of eight) in exchange for other tasks of interest to the researcher, without compromising the questionnaire's psychometric properties [19]. For this study, we iteratively removed six of the original items to create space for six items on various aspects of continuing professional development (CPD). The tool was pilot tested on one nurse, one allied health worker and one medical doctor, each of whom was asked whether the question items on the tool were clear. Random ordering of the tool's task items was maintained to retain the questionnaire's integrity [29]. The final list of tasks included in the survey is provided in Table 2. These were presented to the participants in two sections. The first section contained the 30 task items for rating. In the qualitative section of the tool, participants were asked to list up to three areas in which they felt they would benefit from further oncology training; these suggestions were entered verbatim into the data collection tool by the research assistant. Basic demographic information, including professional group, age and gender, was also collected from each respondent. The questionnaire was digitised using the Open Data Kit (ODK) software to support real-time data collection and quality control on handheld data collection devices. A team of experienced research assistants was recruited and taken through five days of training, comprising orientation on the tool followed by repeated practice, initially on the paper version and later on the digital version of the questionnaire, to ensure uniform understanding of all the question items and the consent process.
After obtaining informed consent, the research assistants helped capture the participants' ratings for each of the tasks in the questionnaire. On completion, each fully filled questionnaire was immediately transmitted to a central server. The information on the server was checked in real time, and notifications were sent to the research team about any response that was inconsistent and needed immediate attention. The final dataset was exported as Excel sheets for cleaning, recoding and eventual analysis using STATA version 15. As previously described [29], we analysed the results for the whole sample and also disaggregated the data to identify differences in the needs of different professional groups. Comparisons were made using the median values for each group to identify overall and specific differences among the various professional groups, at both the task and domain levels.
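Although the study's analysis was performed in STATA, the disaggregation-by-profession step can be illustrated with a minimal Python sketch; the records and group labels below are invented for illustration.

```python
from collections import defaultdict
from statistics import median

# Invented records for one task: (professional group, importance, performance).
records = [
    ("nurse",  6, 3), ("nurse",  7, 4),
    ("doctor", 5, 5), ("doctor", 6, 5),
    ("allied", 7, 2), ("allied", 6, 3),
]

# Collect importance and performance ratings per professional group.
by_group = defaultdict(lambda: ([], []))
for group, imp, perf in records:
    by_group[group][0].append(imp)
    by_group[group][1].append(perf)

# Median importance minus median performance, per professional group:
# a larger positive value suggests a larger group-specific training need.
group_gaps = {g: median(a) - median(b) for g, (a, b) in by_group.items()}
```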
The Wilcoxon signed-rank test and Somers' D [33] were used to determine the significance of the differences between the importance and performance scores of each task. Somers' D was used to test whether positive differences between the importance and performance of a task or domain tended to have higher values than negative differences [33]. In reporting the results of Somers' D, a non-parametric directional measure of effect size, a value of +1 signified that all non-matching ranks had a positive training gap for the difference between the importance and performance scores of a domain or task. A value of -1 signified that all non-matching ranks indicated no need for a training intervention (a negative training gap), meaning that the task or domain was not important and/or was already being performed satisfactorily, hence the higher performance score. The largest positive Somers' D value was used to identify the priority domain and top three tasks for intervention, for each of the three professions individually and for the overall study population [29]. A non-parametric testing approach, rather than the survey tool authors' preferred parametric approach, was used because the tool's Likert scales make it difficult to assume that the intervals between responses from different participants were equal [29, 34]. The topics participants suggested for continuing medical education (CME) were categorised into groups corresponding to the Hennessy-Hicks questionnaire domains of research, communication/teamwork, clinical tasks, administration and management. For all statistical tests, the level of significance was set at 0.05, and only the untransformed (asymmetric) values of Somers' D were used in the results [33]. All records with missing data were excluded from analysis.
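The paired comparison of importance against performance can be sketched in pure Python as below. This is a simplified illustration, not the study's STATA implementation: the Wilcoxon test uses a normal approximation without a tie correction, and the sign-based directional index is a simplification in the spirit of Somers' D (+1 when every non-tied pair shows a positive training gap), not the full Somers' D estimator of [33].

```python
import math

def wilcoxon_signed_rank(importance, performance):
    """Wilcoxon signed-rank test (normal approximation, no tie correction)."""
    # Paired differences; zero differences are dropped, as is conventional.
    d = [a - b for a, b in zip(importance, performance) if a != b]
    n = len(d)
    # Rank |d| with mid-ranks for tied magnitudes (ranks are 1-based).
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        mid = (i + j + 2) / 2
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    # Sum of ranks of positive differences (importance > performance).
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    # Two-sided p-value from the normal approximation.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p

def directional_gap_index(importance, performance):
    """Sign-based index in [-1, +1]: +1 if every non-tied pair shows
    importance > performance (a uniform positive training gap)."""
    pos = sum(1 for a, b in zip(importance, performance) if a > b)
    neg = sum(1 for a, b in zip(importance, performance) if a < b)
    return (pos - neg) / (pos + neg) if (pos + neg) else 0.0
```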