Validation of the Palliative Competency Tool: An Instrument To Qualify Medical Education

Introduction: It is necessary to determine which competencies in Palliative Care (PC) have been acquired and which need to be improved. The aim was to develop and validate an instrument, denominated the Palliative Competency Tool (PalliComp), to assess the acquisition of PC competencies among medical students. Materials and methods: The development consisted of writing statements based on the competencies described by the European Association for Palliative Care. The content was validated by experts following the Delphi methodology. The instrument was applied to a group of medical students (n=71) enrolled at the end of the 8th semester and statistically validated. Results: Of the 30 questions developed, 24 were refined and approved by the experts. When the tool was applied to the 71 medical students, only 12.7% had excellent performance and four of the ten competencies were underperformed. Statistical validation consisted of Bartlett's test of sphericity, which showed adequate correlation for factor analysis (p < 0.001); the Kaiser-Meyer-Olkin test for sample adequacy (0.7); and Cronbach's alpha coefficient for internal consistency (α = 0.7). Conclusions: It was possible to construct and validate the Palliative Competency Tool (PalliComp), an instrument to qualify the acquisition of PC competencies.

8. Implement comprehensive care coordination and interdisciplinary teamwork in all contexts where palliative care is offered; 9. Develop interpersonal and communication skills appropriate to palliative care; 10. Promote self-knowledge and continuous professional development.
PC is part of medical education, and it would be appropriate to identify which competencies are acquired and which could be improved during undergraduate medical training. No instrument was identified in the literature that allowed the acquisition of competencies to be qualified during medical training. Thus, the aim of this study was to develop and validate the Palliative Competency Tool (PalliComp) to qualify the acquisition of PC competencies among medical students.

Method
The study was conducted according to international standards for research involving human subjects and was approved by the Ethics Committee in Brazil. All study participants were informed about the study and signed an informed consent form. The instrument was developed between January and February 2019, the content was validated between February and May 2019, the pilot study was conducted in June 2019, and the statistical study took place between August and October 2019.

Instrument Development
The study began with the construction of the PalliComp instrument, whose content was based on the core competencies described by Gamondi et al. [8,9]. Only current scientific content was included, and the objective was to develop an easy-to-understand, self-administered instrument requiring less than 30 minutes to complete.
Thirty-two questions addressing all competencies were written (U.B.P.G.). The questions were purposely written with correct and incorrect content and randomly distributed. Negative or double-negative statements were avoided. The answers consisted of a five-point Likert scale (strongly agree, agree, neither agree nor disagree, disagree, strongly disagree). The material was jointly reviewed and revised by the researchers (U.B.P.G., C.C.P. and J.E.S.), and 30 questions remained.

Content Validation
The Delphi methodology was chosen for content validation, as it is a useful, systematic and structured method for evaluating expert opinion, such as on the content of an instrument [10][11][12][13]. The Delphi process was limited to a minimum of two and a maximum of five steps.
The panel of experts was composed of university professors specialized in PC from all over the country, who evaluated the instrument. The inclusion criteria for the specialists were: 1) being a native Brazilian; 2) being a physician with PC training; 3) being nominated by the directors of the Academia Nacional de Cuidados Paliativos (the society that aggregates PC professionals in Brazil); and 4) being a university teacher for at least 1 year. The initial contact was made by telephone, and those who agreed to participate had up to twenty days to answer each validation step, with weekly email reminders. An expert who did not participate in one step was excluded from the subsequent steps. Experts received the instrument in electronic format (SurveyMonkey Inc., USA) and evaluated it individually, without interference from other participants.
The experts evaluated each item of the instrument in terms of writing (1 to 3 points), content adequacy with the related competency (1 to 4 points) and intention to maintain the item (1 to 3 points). The experts evaluated the items individually and could also propose corrections, exclusions or suggestions. Each item was disapproved if it received an average rating < 70 and approved if the rating was ≥ 70. For final approval, the instrument had to be completable in less than 30 minutes and receive an average rating ≥ 70.
The results of each step were anonymously evaluated by the researchers (U.B.P.G. and C.C.P.). The experts' contributions that both researchers agreed with were incorporated into the instrument. Disagreements were resolved using the following criteria: 1) scientific adequacy and 2) best approach to the competency. The adjusted instrument was then submitted to a new step in which the experts evaluated the adjustments on the same scale as before. This process was repeated until the items were approved (score ≥ 70). The last step consisted of evaluating the final version, allowing only pass or fail. The final instrument had 24 items.
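The per-item approval rule described above can be sketched as a short routine. This is an illustration only: the function names are hypothetical, and it assumes each expert's combined rating for an item has been normalized to a 0-100 scale, since the paper reports only the approval cut-off (average rating ≥ 70).

```python
# Hypothetical sketch of the Delphi approval rule; assumes expert ratings
# have already been normalized to a 0-100 scale.

def item_approved(expert_ratings, cutoff=70.0):
    """Return True when the average expert rating reaches the cut-off."""
    avg = sum(expert_ratings) / len(expert_ratings)
    return avg >= cutoff

def next_delphi_round(items):
    """Split items into approved ones and those returned for adjustment."""
    approved = {name: r for name, r in items.items() if item_approved(r)}
    to_adjust = {name: r for name, r in items.items() if not item_approved(r)}
    return approved, to_adjust
```

Items in the `to_adjust` set would be revised and resubmitted in the next Delphi step, repeating until every item is approved.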

Pilot Study
After the construction of the instrument and content validation, a pre-pilot study was carried out in which three second-year medical students completed the questionnaire and pointed out all terms they did not understand. The terms were adjusted in agreement between the researchers (U.B.P.G. and C.R.P.) and the students, without changing the content of the tool. The data obtained at this stage were not included in the pilot study.
In the pilot study, 80 fourth-year medical students were invited to complete the instrument. Of these, 71 met the inclusion criteria (being a native Brazilian, being over 18 and agreeing to participate in the study). Each student had up to 30 minutes to complete the instrument individually, without consulting other materials. The completed instruments were corrected and each item received a score of +1 (totally correct), +0.5 (correct), 0 (neutral alternative), -0.5 (incorrect) or -1 (totally incorrect), allowing a total score from -24 to +24 points. To qualify academic performance, students who obtained a score < 50 were considered insufficient, ≥ 50 and < 75 sufficient, and ≥ 75 excellent.
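The scoring scheme above can be illustrated with a minimal sketch. The per-item scores are taken from the text; the conversion of the raw score (-24 to +24) to the 0-100 performance scale is an assumption, since the paper states the cut-offs (50 and 75) but not the rescaling explicitly.

```python
# Sketch of the pilot-study scoring. ITEM_SCORES mirrors the text; the
# linear rescaling of the raw score to a 0-100 percentage is an assumption.

ITEM_SCORES = {
    "totally correct": 1.0,
    "correct": 0.5,
    "neutral": 0.0,
    "incorrect": -0.5,
    "totally incorrect": -1.0,
}

def raw_score(gradings):
    """Sum the per-item scores over the 24 graded answers."""
    return sum(ITEM_SCORES[g] for g in gradings)

def performance(raw, n_items=24):
    """Classify a raw score, assuming linear rescaling to 0-100."""
    pct = (raw + n_items) / (2 * n_items) * 100
    if pct < 50:
        return "insufficient"
    if pct < 75:
        return "sufficient"
    return "excellent"
```

Under this assumed rescaling, a raw score of 0 maps to 50 (the sufficient threshold) and +12 maps to 75 (the excellent threshold).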

Statistical method
Data from the content validation step and the pilot study were extracted into an Excel® spreadsheet with frequency, mean and percentage studies and submitted to statistical analysis with the R Statistical Software version 3.4.4 (R Core Team 2013). The significance level adopted was 0.05.
Bartlett's test of sphericity was performed to assess whether the correlation was adequate; it had to show p < 0.05 for the analysis to proceed. Bartlett's test of sphericity tests the hypothesis that the correlation matrix is an identity matrix; rejection indicates that the variables are related and therefore suitable for structure detection. Sampling adequacy was assessed using the Kaiser-Meyer-Olkin test, which indicates whether the correlation patterns are relatively compact and sparsely dispersed. The Kaiser-Meyer-Olkin statistic indicates the proportion of variance in the variables that might be explained by underlying factors. Values close to 1.0 generally indicate that a factor analysis may be useful with the data; if the value is less than 0.50, the results of the factor analysis probably will not be very useful. The internal consistency of the instrument was assessed using Cronbach's alpha coefficient, which estimates the reliability of the instrument and the correlation between responses; the alpha coefficient is considered adequate between 0.6 and 0.95. Finally, factor analysis is a statistical method used to describe correlated variables in terms of a potentially lower number of unobserved variables called factors. To facilitate interpretation, factor rotation can be used; the factors were extracted according to the Kaiser criterion (eigenvalue above 1) and confirmed by the scree plot [14,17].
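The study ran these statistics in R; as an illustration only, the textbook formulas behind Bartlett's sphericity test, the Kaiser-Meyer-Olkin measure, Cronbach's alpha and the Kaiser criterion can be sketched in a few lines of numpy/scipy. This is not the authors' code, just a minimal rendering of the standard definitions.

```python
# Minimal numpy/scipy sketch of the validation statistics named above.
import numpy as np
from scipy import stats

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

def bartlett_sphericity(X):
    """Chi-square test that the correlation matrix is an identity matrix."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    dof = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, dof)

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    inv = np.linalg.inv(R)
    # partial correlations come from the inverse correlation matrix
    P = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    mask = ~np.eye(R.shape[0], dtype=bool)
    r2, p2 = (R[mask] ** 2).sum(), (P[mask] ** 2).sum()
    return r2 / (r2 + p2)

def kaiser_factors(X):
    """Number of factors to retain: eigenvalues of R above 1."""
    R = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    return int((np.linalg.eigvalsh(R) > 1).sum())
```

For strongly correlated items (as one would expect when a common competency underlies several statements), Bartlett's test rejects the identity hypothesis, the KMO exceeds 0.5 and the Kaiser criterion retains a small number of factors.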

Results

Content Validation
We identified 33 specialists who met the inclusion criteria; of these, 24 accepted to participate. In the first step of the Delphi methodology, despite receiving a satisfactory evaluation, the instrument was disapproved by all experts because of the excessive time required for completion and evaluation (over 1 hour). The instrument was fully adjusted and reduced to 24 items, and therefore the scores of the first step could not be compared with those of the subsequent steps. The 24 statements were then evaluated in the second and third steps, and final approval was obtained in the fourth step (Table 1). The pre-pilot study changed three words and no statements. Seventy-one students agreed to participate in the pilot study.
The average age was 23.4 ± 2.6 years; 30 (42.3%) were men and 41 (57.7%) were women, and only 17 (23.9%) had attended the elective PC course. Student answers and the score obtained for each item are described in Table 2 and Figure 1; the principal component analysis used for factor extraction is shown in Figure 2.

Discussion
Knowledge of PC is essential for the doctor, because the general practitioner will care for patients facing the end of life in daily practice. Several recommendations have already been made with the intention of standardizing PC teaching in undergraduate courses [8,9,[18][19][20][21][22][23]. In general, all of them recommend an approach to the management of pain and other uncomfortable symptoms, a psychosocial and spiritual approach, bioethics, communication and teamwork. It is known that teaching activities should present theoretical and practical content in an integrated manner throughout the entire undergraduate course, with a focus on basic actions [3]. The EAPC was innovative in proposing teaching through competencies [8,9].
It is necessary to assess the learning of PC in undergraduate medical training. One way of doing this would be to assess the acquisition of competencies throughout the training, so that learning could be measured and qualified. The authors found no instrument to measure the acquisition of PC competencies among medical students. Thus, effort was devoted to building and validating version 1 of the Palliative Competency Tool, which is self-administered and requires less than 30 minutes to complete.
Building and validating the research instrument was a challenging experience. The content had to be based on consistent scientific literature and demanded research time from the authors. Moreover, the competencies required for basic palliative care are broad and numerous, and the degree of depth at which they should be approached in medical training has rarely been described in the literature [24,25].
It was decided to approach the competencies in the most direct way possible, so as not to hinder students' understanding. As the student indicated a degree of agreement with each item, with a neutral response allowed, the authors believe that items that generated doubt tended to receive neutral responses. It is known that the competencies used in this research have been described for several professions, including medicine [8,9], and thus some competencies had greater depth than others [26]. The researchers took these characteristics into account during development, and the experts were made aware of this fact during content validation.
The process of constructing the tool was quite challenging, since the intention was that a set of statements would represent a competency without interfering with the others and, at the same time, have discriminative power, that is, the ability to assess whether or not the student demonstrated knowledge, avoiding random success (items that are too easy) or failure (items that are too difficult), while using language appropriate for the student. The content validation step with the specialists was essential to make the instrument suitable for its purpose, and the Delphi methodology, a technique based on the evaluation of material by specialists, proved adequate for the proposed objective [10][11][12][13].
For the Delphi methodology to yield better results, the experts were chosen judiciously, as their opinions guided the instrument. The experts' experience was defined by their degree and working time, since knowledge of PC, clinical experience and teaching experience were all fundamental. It was interesting to note that in the second step, most items were already approved. As the previously established rules had to be followed, another step took place, with the incorporation of corrections, which curiously reduced the scores of isolated items without compromising final approval in the third step. The inclusion of the experts' suggestions, which had proven scientific suitability, is nonetheless subject to personal perception.
The sample of students was chosen for feasibility reasons and because validation required a heterogeneous sample.
At the time of data collection, students were at the end of the academic semester and less than 25% had studied PC, resulting in four competencies with insufficient performance and demonstrating that the acquisition of competencies could still improve. The purpose of this study was to build and validate the instrument; therefore, this finding is incidental and may not reflect the course as a whole.
The instrument's internal reliability and consistency were assessed using Cronbach's alpha, which is considered adequate between 0.6 and 0.95 [14][15][16]. The correlation found in this study (α = 0.7) may improve with application to a larger population, such as an entire medical course. However, large-scale application would only be justified once the validation of the questionnaire was confirmed. The statistical validation of the research tool was satisfactory.
The competencies described by the EAPC [8,9] are broad and sometimes overlapping. Such overlap may have hindered the validation, since the competencies are mixed in the wording of the questionnaire items. A revision would be worthwhile; however, it is assumed that isolating each competency in a single item is nearly impossible and would be inconsistent with the reality of care.
It should be remembered that competency-based education is defined as the process of acquiring the knowledge, skills and attitudes that belong to a specific professional area [2]. In this context, teachers are concerned with how to teach competencies and how to correct and evaluate their acquisition. Measuring the acquisition of competencies is challenging, but it is a necessary effort to improve medical education. With this instrument, the process of acquiring competencies can be evaluated and better targeted.
PC education in Brazil is still insufficient. Out of 300 registered courses, only 14 offer it regularly [27]. In Europe, of the 53 countries surveyed, 65% teach PC in medical school, and in 30% it is a compulsory subject in all schools [28]. However, Centeno and Rodrigues Núñez [29] reported the weakness of studies in medical education and affirmed the need to better structure education to promote favourable medical attitudes towards end-of-life care.
In the United States, PC education has been taking place for over forty years, with variation among medical schools, and it is estimated that over 90% of students have been exposed to the content [24,30].
This study has limitations. To study the process of acquiring competencies, it is necessary to expand the sample to include all students of the course and to apply the instrument prospectively. The authors have already started a prospective study and will have more answers in two years. The study was carried out entirely in Brazil; this choice was due to the feasibility of the research, but it may bring cultural limitations. In the future, it will be interesting to validate and culturally adapt the instrument in other contexts. The intention was to create a tool that would allow PC teaching to be qualified and compared over time. Therefore, the data obtained can be treated as a "diagnosis" of PC teaching [5] and should not be interpreted individually or for the purpose of classificatory assessment.

Conclusion
With the data described in this study, it can be concluded that it was possible to construct and validate the Palliative Competency Tool (PalliComp), an instrument to qualify the acquisition of PC competencies. It will be appropriate to proceed with the evaluation of all students of a medical course, to assess acquisition throughout undergraduate training, and possibly to expand to other medical schools.

Declarations

Disclosure statement
None of the authors has any conflict of interest to declare. The authors contributed to analysis, revision and manuscript approval. All the authors have read the manuscript and agree to its submission to BMC Medical Education.

ETHICS DECLARATIONS
All participants consented to participate.

Consent for publication
All data were obtained after written informed consent to answer and for data to be analysed and published.

Figure 1. Mean scores obtained by students participating in the pilot study.

Figure 2. Principal Component Analysis for factor extraction.